Strengthening the accumbal indirect pathway promotes resilience to compulsive cocaine use

A hallmark of addiction is the loss of control over drug intake, which is seen only in a fraction of those exposed to stimulant drugs like cocaine. The cellular mechanisms underlying vulnerability or resistance to compulsive drug use are still unknown. Here we show that individual variability in the development of highly motivated and perseverative behavior toward cocaine is associated with synaptic plasticity in medium spiny neurons expressing dopamine D2 receptors (D2-MSNs) in the nucleus accumbens of mice. Potentiation of glutamatergic inputs onto indirect pathway D2-MSNs was associated with resilience toward compulsive cocaine seeking. Inhibition of D2-MSNs using a chemicogenetic approach enhanced the motivation to obtain cocaine, whereas optogenetic activation of D2-MSNs suppressed cocaine self-administration. These results indicate that recruitment of D2-MSNs in the nucleus accumbens functions to restrain cocaine self-administration and serves as a natural protective mechanism in drug-exposed individuals.

Individuals suffering from addiction endure large personal and financial losses to maintain drug use. Among other addictive behaviors, they show a strong perseverance and an extraordinary motivation to obtain the drug. These behaviors are expressed by only a fraction of those exposed to the drug, revealing a substantial degree of individual variability and the existence of predisposing traits and conditions that may serve as risk or protective factors in the development of addiction. In humans, the vulnerability to develop compulsive behaviors toward stimulant drugs has been linked to deficits in cortico-striatal processing and low levels of dopamine D2 receptors in the striatum [1][2][3]. Moreover, impulsivity traits and low levels of dopamine D2 receptors have been associated with compulsive cocaine use in both rodents and nonhuman primates 4,5. Furthermore, rodents also show natural individual variability in the motivational properties of cocaine and the development of compulsive behaviors [6][7][8]. Dopamine D2 receptors are expressed in the subpopulation of medium spiny neurons (D2-MSNs) in the striatum that form indirect projections to midbrain regions via the pallidum and subthalamic nuclei (indirect pathway). The other subpopulation of MSNs expresses dopamine D1 receptors (D1-MSNs) and forms direct projections to midbrain neurons (direct pathway). Activation of dopamine receptors on each subpopulation of MSNs triggers different intracellular signaling cascades. Activation of D2 receptors inhibits PKA activity via Gi signaling in D2-MSNs, whereas activation of D1 receptors stimulates PKA activity via Gs/olf signaling in D1-MSNs 9. It is thought that these two MSN subtypes and their parallel pathways exert complementary, and sometimes opposing, actions on behaviors that are controlled by the cortico-striatal system 10. The use of pharmacological tools that target D1 and D2 receptors has helped to determine the relative contribution of the direct and indirect pathways to behavior. However, the complex expression pattern of dopamine D2 receptors, which are present in both pre- and postsynaptic compartments in different neuronal types in the mesolimbic circuit, has complicated the interpretation of these experimental results. Cell type-specific approaches have recently been used to aid in this quest.
In the dorsal striatum, optogenetic activation of direct pathway D1-MSNs increases locomotion and serves as a reinforcer, whereas activation of indirect pathway D2-MSNs increases freezing behaviors and is not a reinforcer 11,12. In the NAc, a region involved in cue-induced reward learning, D1-MSNs and D2-MSNs have opposite effects on cocaine-related behaviors 9. Activation of D2-MSNs reduces conditioned place preference for cocaine, whereas activation of D1-MSNs increases it 13. In addition, the ablation or inhibition of D2-MSNs in the NAc induces an increase in amphetamine conditioned place preference and facilitates locomotor sensitization to cocaine, suggesting a tonic role of D2-MSNs in limiting the actions of stimulant drugs 14,15. However, despite these findings, the role of the indirect pathway and D2-MSNs in voluntary cocaine self-administration and compulsive drug seeking remains unclear. We predicted that indirect-pathway D2-MSNs would exert an inhibitory influence on the behavioral output of this circuitry and limit drug seeking, and that weakening this pathway would remove the inhibitory control and render individuals more susceptible to developing compulsive drug seeking. We found individual variability in vulnerability to compulsive cocaine use that correlated with the synaptic strength of inputs to D2-MSNs. Moreover, inhibition or activation of the accumbal indirect pathway enhanced or suppressed cocaine self-administration behavior, respectively.

RESULTS
Individual variability in behaviors toward cocaine
Intravenous cocaine self-administration was established in an outbred strain of mice using a cued-operant task that required naive mice to nose poke in an active hole to earn an intravenous infusion of cocaine (Supplementary Fig. 1). Approximately 55% of mice acquired the behavior within 5-10 d and the rest were removed from the study. Mice were then given access to cocaine during daily 2-h sessions for 6-7 weeks. Two behaviors were measured to determine the degree of compulsive drug use: the difficulty of stopping or limiting drug use, measured as perseverative drug seeking, and high motivation to obtain the drug, measured as the effort exerted to obtain it. These behaviors were adopted from the diagnostic criteria for drug dependence in humans described in the DSM-IV and have been successfully applied in a model developed in rats 4,7,16. To measure perseverative responding, we interrupted sessions with two drug-off periods (15 min long), during which cocaine was not available (Fig. 1a-c). Initially, mice showed a typical extinction response during the drug-off periods that was characterized by a spike in responding on days 1 and 2, followed by a decrease in responding selectively in the drug-off periods during subsequent days. Although most mice maintained minimal responding for the rest of the sessions, a few developed perseverative responding after weeks of self-administration (Supplementary Fig. 2). The perseverance value was calculated as the mean number of active pokes during the drug-off periods over the last ten sessions (Fig. 1c). During this time, perseverative responding during the drug-off period was selective to the active nose pokes, and the number of inactive pokes was similar between mice with high and low perseverance values (0.13 ± 0.07 inactive pokes per day for mice with perseverance value > 2, n = 3 mice; 0.27 ± 0.08 inactive pokes per day for mice with values < 2, n = 24 mice; unpaired t test, t24 = 0.5, P = 0.62).
Motivation to obtain cocaine was assessed in two progressive ratio sessions in which the number of pokes required to earn each consecutive cocaine reward was increased exponentially (Online Methods and Fig. 1d,e). The breakpoint, the number of pokes performed to earn the last reward, was defined as the motivation value. Similar to the measure of perseverative behavior, most mice showed low motivation values, but a few mice had high values (Supplementary Fig. 2). Notably, the ratio of inactive to active pokes did not change during these progressive ratio sessions (session, 0.27 ± 0.04; progressive ratio, 0.48 ± 0.17; paired t test, t26 = 1.2, P = 0.23, n = 27 mice). A combined behavior score was calculated for each mouse by adding the z scores of the perseverance and motivation values. By definition, z scores have a mean of zero and an s.d. of 1, which ensures equal contribution of each behavior and eliminates inherent differences in the variance of each measurement. Mice varied in the degree of perseverance, motivation and combined behavior score, as well as in their average cocaine intake through the study (5-24 mg per kg of body weight per d). However, there was no correlation between the perseverance scores and cocaine intake (rP = 0.27, r2 = 0.08, P = 0.16, n = 27 mice; Fig. 1f). Although the highest perseverance score was observed in a mouse with a daily intake of 20 mg per kg of cocaine, high intake was not sufficient to predict positive perseverance scores. As expected, there was a positive correlation between the motivation and combined behavior scores and daily cocaine intake (motivation score: rP = 0.57, r2 = 0.31, P < 0.01, n = 27 mice; combined score: rP = 0.49, r2 = 0.24, P < 0.01, n = 27 mice; Fig. 1g,h). A mean daily intake of 10 mg per kg per d, a dose that increases locomotion in this mouse strain 17, was used as the threshold to identify mice with high and low cocaine intake. Mice with low daily cocaine intake displayed mean negative motivation scores (−0.5 ± 0.1) and behavior scores (−0.7 ± 0.2, n = 13 mice), whereas mice with high cocaine intake showed mean positive motivation (0.5 ± 0.3) and behavior scores (0.6 ± 0.6, n = 14 mice) (Fig. 1g,h). Despite the correlation between intake and the behavior scores, there was a considerable degree of variability in the scores among mice with high intake. High intake did not always result in the development of compulsive cocaine use, and some mice maintained negative behavior scores. We asked whether specific synaptic mechanisms are associated with, and responsible for, the vulnerability or resistance to developing compulsive behaviors among cocaine users.

D2-MSN inhibition enhances motivation for cocaine
These findings indicate that excitatory inputs with higher AMPA/NMDA ratios are present in D2-MSNs of individuals that did not develop compulsive behaviors toward cocaine, suggesting that potentiation of excitatory inputs onto D2-MSNs might confer protection against cocaine-related behaviors. We reasoned that weakening of the indirect pathway may render individuals more vulnerable to the expression of compulsive behavior toward cocaine. To test this hypothesis, we manipulated the activity of D2-MSNs using a chemicogenetic approach with a designer receptor exclusively activated by designer drugs (DREADD) 18,19. hM4Di is a Gi-coupled DREADD that has been used to inhibit MSN activity in the striatum 15,20.
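The combined behavior score defined above is straightforward to reproduce. The sketch below, using hypothetical perseverance and motivation values (not data from the paper; the authors analyzed their data in Igor Pro), shows how z-scoring equalizes the contribution of the two measures before they are summed.

```python
import numpy as np

def z_scores(values):
    """Standardize values to mean 0 and s.d. 1 across all mice."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std(ddof=0)

# Hypothetical per-mouse measurements (illustrative only):
# perseverance = mean active pokes in the drug-off periods,
# motivation   = breakpoint in the progressive ratio sessions.
perseverance = [0.2, 0.5, 3.1, 0.8, 1.9]
motivation = [12, 25, 110, 40, 95]

# Combined behavior score = z(perseverance) + z(motivation), so each
# measure contributes equally despite very different raw variances.
combined = z_scores(perseverance) + z_scores(motivation)
for i, score in enumerate(combined):
    print(f"mouse {i}: combined behavior score = {score:+.2f}")
```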
A conditional viral vector expressing hM4Di was injected bilaterally into the NAc core of mice expressing Cre recombinase selectively in D2-MSNs (Adora2a-cre+/− mice). Fluorescently tagged DREADDs were expressed at the soma of MSNs in the NAc core region around the anterior commissure and on axonal projections to the ventral pallidum (Fig. 4a,b). No labeling was seen in other brain regions. The inhibitory action of hM4Di on the indirect pathway was confirmed with recordings from ventral pallidum neurons. Synaptic responses were triggered by selective activation of fibers from D2-MSNs in the NAc that coexpressed channelrhodopsin-2 (ChR2) and hM4Di (injected at a 1:1 ratio). A laser pulse evoked GABAergic inhibitory postsynaptic currents (IPSCs) that were depressed by 52.5 ± 3.7% when the synthetic agonist for hM4Di, clozapine N-oxide (CNO, 10 µM), was applied (paired t test, t6 = 4.0, P < 0.01, n = 7 cells; Fig. 4c,d). Application of CNO had no effect on the amplitudes of IPSCs in slices expressing only ChR2 in D2-MSNs (data not shown). These results indicate that activation of hM4Di expressed in accumbal D2-MSNs inhibits the output of the accumbal-tegmental indirect pathway. Mice expressing hM4Di in D2-MSNs in the NAc were allowed to acquire cocaine self-administration behavior for 9 d. We first examined the effect of D2-MSN inhibition in sessions in which the drug was easily accessible under a fixed poke:reward ratio of 1 (FR1). Mice alternately received CNO (1 mg per kg) and saline intravenously immediately before the session (Fig. 4e). The two conditions yielded similar daily cocaine intake (saline = 31 ± 2.9 mg per kg per d; CNO = 34 ± 2.3 mg per kg per d, paired t test, t13 = 0.86, P = 0.24, n = 6 mice; Fig. 4f), suggesting that inhibition of the D2-MSN output did not alter cocaine intake when the drug was available with minimum effort. The motivation to obtain cocaine was tested in 4-6 consecutive progressive ratio sessions in which the poke:reward ratio was increased exponentially with each subsequent reward. Notably, during these sessions, drug intake dropped to 10-40% of the normal intake during sessions with low-effort access to cocaine (FR1). Following a Latin square design, CNO or saline was administered intravenously immediately before the session. Mice had different breakpoint values on control saline days (Fig. 4g), which reflected the individual differences in the motivation for cocaine (Fig. 1). However, independently of their baseline breakpoint, mice made more active pokes and achieved higher breakpoints on days in which they were treated with CNO (breakpoint: saline = 18.7 ± 6.1, CNO = 35.4 ± 12.3, n = 6, paired t test, t5 = 2.6, P < 0.05). When normalized to saline, the breakpoint doubled on CNO days, indicating a higher motivation to obtain cocaine following inhibition of D2-MSNs (normalized breakpoint in CNO = 2 ± 0.2, one-sample t test, t5 = 2.7, P < 0.02; Fig. 4g). Activation of Gi signaling in D2-MSNs can also be achieved by administration of D2 receptor agonists. However, D2 receptors are also expressed in other cell types, including striatal cholinergic interneurons and midbrain dopaminergic neurons, where activation of these receptors inhibits dopamine release 21. Systemic administration of the D2-like agonist quinelorane failed to increase breakpoint values for cocaine during progressive ratio sessions (saline = 35 ± 9.5, quinelorane = 28 ± 13.4, n = 5 mice, paired t test, t4 = 1.1, P > 0.3; Fig. 4h).
In contrast, a trend toward decreased responding for cocaine was observed in these experiments when a low dose of quinelorane (30 µg per kg) was administered intravenously before the start of the session (normalized breakpoint in quinelorane = 0.59 ± 0.24, one-sample t test, t4 = 1.67, P = 0.17, n = 5 mice; Fig. 4h). This dose was chosen because higher doses of D2 receptor agonists depress locomotor activity in mice 22, which would interfere with the task. This result is consistent with previous reports that D2-like agonists produce a leftward shift in the dose-response curve for cocaine self-administration and decrease overall cocaine intake in rats and nonhuman primates [23][24][25].

D2-MSN activation suppresses cocaine self-administration
In vivo optogenetic stimulation was used to selectively activate D2-MSNs in the NAc core and test the effect of accumbal indirect pathway stimulation on cocaine self-administration. We bilaterally injected a conditional viral vector expressing ChR2 (EYFP tagged) into the NAc core region of Adora2a-cre+/− mice and implanted fiber optics in the injection area. Pathway-specific expression of ChR2 was confirmed using fluorescence microscopy, which revealed labeled cell bodies in the NAc core and labeled projections in the ventral pallidum, but not in other brain regions (Fig. 5a). Electrophysiological recordings in brain slices containing the NAc core showed that brief laser stimulation (0.5-ms pulses at 16.6 Hz) reliably triggered the firing of action potentials in D2-MSNs expressing ChR2 and, as expected, the firing was sensitive to tetrodotoxin (TTX; n = 2 cells; Fig. 5b). The same brief laser stimulation elicited an IPSC in two types of neurons that are known to receive inputs from indirect pathway D2-MSNs: neighboring striatal MSNs (ChR2-negative, putative D1-MSNs, n = 3 cells) and ventral pallidum neurons (n = 9 cells; Fig. 5c,d). Application of the GABAA receptor antagonist gabazine (5 µM) abolished the laser-evoked IPSCs in both cell types (n = 2 for both cell types). These results confirm that optogenetic stimulation activates accumbal indirect pathway neurons and, in turn, inhibits the postsynaptic targets of D2-MSNs. Mice expressing ChR2 in accumbal D2-MSNs were allowed to acquire cocaine self-administration behavior (5-9 d) and then habituated to perform with the optic fibers until responding was stable. Testing occurred over four consecutive daily sessions in which the laser was OFF or ON, following the same pattern for all mice (OFF-ON-OFF-ON; Fig. 5e). In the first set of experiments, optogenetic stimulation (10-ms pulses at 16.6 Hz, 10 min OFF followed by 5 min ON throughout the session) was delivered to the NAc core. Under these conditions, the mean number of earned rewards dropped significantly during ON sessions for all mice (OFF rewards per session = 21.8 ± 4.3, ON rewards per session = 7.3 ± 2.1, paired t test, t7 = 4.5, P < 0.01, n = 8 mice; Fig. 5f). When normalized to the OFF sessions, optogenetic stimulation suppressed cocaine self-administration by 69.1 ± 5.3% (one-sample t test, t7 = 12.6, P < 0.001, n = 8 mice; Fig. 5f).
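The stimulation duty cycle used here (10-ms pulses at 16.6 Hz, in 5-min ON epochs after every 10 min OFF) is easy to express programmatically. The sketch below generates pulse onset times for one session; the session length and function names are illustrative, not from the paper.

```python
def pulse_onsets(session_min=120, off_min=10, on_min=5,
                 rate_hz=16.6, pulse_ms=10.0):
    """Onset times (s) of laser pulses for an OFF/ON duty-cycle session.

    Each cycle is `off_min` minutes with the laser off, followed by
    `on_min` minutes of pulses at `rate_hz` (each pulse `pulse_ms` long).
    """
    onsets = []
    session_s = session_min * 60
    cycle_s = (off_min + on_min) * 60
    t = 0.0
    while t < session_s:
        on_start = t + off_min * 60
        on_end = min(on_start + on_min * 60, session_s)
        p = on_start
        while p + pulse_ms / 1000.0 <= on_end:
            onsets.append(p)
            p += 1.0 / rate_hz  # inter-pulse interval at 16.6 Hz
        t += cycle_s
    return onsets

onsets = pulse_onsets()
print(f"{len(onsets)} pulses; first ON epoch starts at {onsets[0]:.0f} s")
```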
Furthermore, stimulation resulted in a similar decrease in mice trained on FR1, FR2 and FR3 schedules (FR1: OFF = 18 ± 7.5, ON = 6.5 ± 4, n = 2 mice; FR2: OFF = 25.7 ± 8.5, ON = 7.8 ± 1.2, n = 3 mice; FR3: OFF = 20.5 ± 8.3, ON = 7.3 ± 5.8, n = 3 mice; two-way repeated-measures ANOVA, main effect of stimulation, F1,5 = 15.25, P < 0.02; no effect of fixed ratio schedule or interaction, both F's < 0.3, P's > 0.7). These results indicate that recruitment of the indirect pathway reduces cocaine self-administration independent of the ratio requirement.

DISCUSSION
We found that resilience to compulsive cocaine use was accompanied by potentiation of glutamatergic inputs to D2-MSNs in the NAc, suggesting that this synaptic potentiation serves as a protective mechanism. We challenged this conclusion by manipulating the activity of indirect-pathway D2-MSNs. Inhibition of D2-MSNs during self-administration enhanced responding and increased motivation to obtain cocaine, rendering mice more vulnerable to the expression of compulsive behaviors. In contrast, activation of D2-MSNs decreased cocaine self-administration. These experiments provide evidence of a causal link between the synaptic strength at D2-MSNs in the NAc core and behavioral control over reward-motivated actions. We also found synaptic potentiation of glutamatergic inputs onto direct-pathway D1-MSNs after cocaine self-administration. Enhanced AMPA/NMDA ratios and other evidence of synaptic potentiation have been seen in D1-MSNs of the NAc after passive cocaine administration 26,27. Our results add to previous findings by demonstrating that potentiation of D1-MSN inputs is a generalized response to cocaine exposure and that it develops in the majority of mice, independently of whether or not they show compulsive behaviors toward cocaine. In contrast, potentiation at D2-MSN inputs was observed only in the resilient individuals that did not develop compulsive behaviors. It remains unclear whether resilient individuals have stronger inputs to accumbal D2-MSNs that precede cocaine exposure, or whether their resilience developed as a consequence of repeated cocaine self-administration. Judging by the range of AMPA/NMDA ratios that we recorded in sham mice, it is unlikely that the potentiation of D2-MSN inputs preceded drug exposure. However, further investigation is required to determine the predisposing factors for this resilience. Manipulations of the activity of indirect-pathway neurons in the NAc, even when introduced after several weeks of drug taking, substantially increased or decreased cocaine seeking and self-administration. Systemic administration of a D2-like agonist did not enhance seeking and failed to reproduce the effects of inhibiting D2-MSNs via cell-specific expression of hM4Di. This is possibly a result of the different effects of activating dopamine D2 receptors expressed on other cell types, such as dopaminergic neurons and cholinergic interneurons. These results highlight the importance of using cell-specific approaches and demonstrate that selective activation of Gi signaling in D2-MSNs can increase drug-seeking behavior. The same manipulation did not alter responding for food reward, suggesting that the effect is specific for cocaine seeking. Dorsal regions of the striatum are more likely to be involved, as they regulate cued-instrumental responding for food 28. As expected for a strong reinforcer, breakpoint values achieved for food were 10-30-fold higher than those achieved for cocaine.
Thus, it is possible that the activity of indirect-pathway neurons is already depressed when responding for such a potent reinforcer and the effect of hM4Di activation might be occluded. When D2-MSNs in the dorsal striatum were inhibited after 3 weeks of cocaine self-administration, responding for cocaine was not increased, suggesting that indirect-pathway neurons in the dorsal striatum are not involved in cued drug seeking at this early stage and under these experimental conditions. D2-MSNs in the NAc core send GABAergic projections to the dorsolateral subregion of the ventral pallidum, where neurons display sustained changes in firing rate that are time locked to responding during a cocaine self-administration task 29. Reinstatement of cocaine seeking is associated with decreased levels of extracellular GABA in the ventral pallidum, and blockers of glutamatergic transmission in the NAc core prevent this change 30,31. These findings implicate indirect-pathway D2-MSNs in the NAc and their projections to the ventral pallidum in the processing of drug-seeking behavior and inhibitory control of its output. It is tempting to speculate that selective silencing of ventral pallidum neurons would also decrease the motivation to seek cocaine. Using therapeutic brain stimulation to enhance the indirect pathway or silence the output of the ventral pallidum in patients could enhance self-control in those fighting a dependence on stimulant drugs. In conclusion, our results suggest that synaptic potentiation of D2-MSN inputs is a critical mechanism for controlling the expression of compulsive behaviors toward cocaine. We propose that this cell-specific synaptic potentiation facilitates the recruitment of indirect-pathway neurons and protects against the development of addictive behaviors.

METHODS
Methods and any associated references are available in the online version of the paper.

Mice. Experiments were performed in accordance with guidelines from the National Institute on Alcohol Abuse and Alcoholism's Animal Care and Use Committee. Drd1-EGFP BAC transgenic mice (GENSAT, Swiss Webster background) and Adora2a-cre BAC transgenic mice (GENSAT, KG139Gsat/Mmcd, SW/B6 background) were used. Mice were housed in groups until surgery and individually afterwards on a reversed light cycle (lights on 18:30-06:30) with food and water ad libitum.

Intravenous cocaine self-administration. Mice (~40 d old) were implanted with a chronically indwelling catheter (CamCath) in the right jugular vein as described previously 32. Mice received oral antibiotics and recovered for 5-7 d before behavioral testing was started. Catheters were maintained by daily flushes with heparinized saline (100-275 U per ml). Sham surgeries permanently occluded blood flow through the jugular vein. Sham control mice (n = 20) were left undisturbed in their home cage for at least 3 weeks on the reversed light cycle before experiments. Modified operant boxes (Med Associates; 11 × 18 × 13 cm) were used and behavioral testing was performed during the dark phase of the light cycle. Pokes in the active hole delivered intravenous cocaine infusions (1-2 mg per kg) paired with the cue light ON for 4 s and a 10-s time out. The house light signaled the drug-off periods and the end of the sessions. Naive mice were trained in 2-6-h sessions for 5-13 d on a fixed poke:reward ratio of 1.
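The FR1 contingency just described (each active poke triggers an infusion paired with a 4-s cue light and starts a 10-s time out during which pokes earn nothing) reduces to a few lines of event logic. The sketch below is a simplified simulation of that rule with hypothetical poke timestamps, not the Med Associates program used in the study; whether the time out runs from the poke or from cue offset is an assumption here.

```python
def count_rewards_fr1(poke_times, timeout_s=10.0):
    """Count FR1 rewards given active-poke timestamps (s).

    A poke earns an infusion (paired with the 4-s cue light) only if
    it falls outside the time-out window started by the previous
    rewarded poke; pokes during the time out are recorded but unrewarded.
    """
    rewards = []
    next_available = 0.0
    for t in sorted(poke_times):
        if t >= next_available:
            rewards.append(t)
            next_available = t + timeout_s
    return rewards

# Hypothetical poke timestamps: rapid bursts collapse to single rewards.
pokes = [1.0, 2.0, 3.5, 14.0, 15.0, 30.0]
print(count_rewards_fr1(pokes))  # -> [1.0, 14.0, 30.0]
```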
The criteria for acquisition of self-administration behavior were a ratio of active:inactive pokes of at least 2.5 and cocaine intake higher than 5 mg per kg per d for the last two training sessions. An estimated 55% of Swiss Webster Drd1-EGFP mice (27 males, 9 females) reached criteria within 8.5 ± 0.5 d of training and were moved to 2 h 30 min sessions (cocaine = 2 mg per kg per infusion), self-administering cocaine for an average of 31 more days (min = 17, max = 41; Supplementary Table 1). The other 45% of the mice (31 mice) did not acquire self-administration behavior. The behavioral analysis was completed in 28 out of the 36 mice; 8 of 36 mice lost catheter patency right before the last breakpoint session and were used for physiological recordings, but lacked behavioral measurements (Supplementary Table 1). Sessions consisted of three drug-on periods (40 min each) interspersed with two 15-min drug-off periods. Perseverance was measured as the number of active pokes during the drug-off periods over the last five sessions. Motivation was measured in two progressive ratio sessions (after days 15 and 30) and the breakpoint was assigned as the motivation value 33. The motivation and perseverance scores for individual mice were calculated using the z score, z = (Xi − X̄)/s.d., where Xi is the behavior value for the individual mouse, X̄ is the mean behavior value for all of the mice included in the study, and s.d. is the standard deviation of the population. The combined behavior score was equal to the sum of the motivation and perseverance scores for each individual mouse. In experiments using Adora2a-cre+/− mice, sessions were 3 h 40 min long, the single infusion dose was 1 mg per kg of cocaine, and mice were trained on a fixed ratio of 1, 2 or 3, depending on the experiment. CNO and quinelorane were dissolved in saline and administered intravenously (1 ml per kg) at doses of 1 mg per kg and 0.03 mg per kg, respectively, through the catheter right before the start of the session. Catheter patency was tested every 10 d with an anesthetic cocktail (1.5 mg per kg ketamine, 0.75 mg per ml midazolam). Mice that failed patency before reaching session 25 were removed from the study. Behavioral data were analyzed, fitted with a log normal equation, and plotted in Igor Pro.

Intracranial viral gene transfer and in vivo optogenetic stimulation. Cre-inducible AAV-hSyn-DIO-hM4Di-mCherry (serotype 1 or 2, 6 × 10^12 virus molecules per ml, Optogenetics and Transgenic Technology Core at the National Institute on Drug Abuse and Gene Therapy Vector Core at the University of North Carolina) and AAV5-EF1a-DIO-ChR2(H134R)-EYFP (4 × 10^12 virus molecules per ml, UNC Vector Core) were used. Stereotaxic bilateral injections (250-400 nl at 100 nl per min) were performed into the NAc core (anterior-posterior, +1.4 mm; medial-lateral, ±0.1 mm; dorsal-ventral, −4.8 mm, from bregma) or the dorsal striatum (anterior-posterior, +0.8 mm; medial-lateral, ±1.75 mm; dorsal-ventral, −3.2 mm, from bregma) of Adora2a-cre+/− and Adora2a-cre−/− mice at 5-6 weeks of age. For self-administration experiments, mice recovered for 3-5 d before indwelling catheters were implanted in the jugular vein. To confirm hM4Di activity using electrophysiological recordings, we injected both Cre-inducible hM4Di-mCherry and ChR2-EYFP vectors (1:1) into the NAc core of Adora2a-cre+/− mice.
For in vivo optogenetic stimulation of the indirect pathway, a two-ferrule cannula with fiber optics (200 µm/0.22 NA, 4.0 mm, Doric Lenses) was implanted immediately after the viral injection, right above the injection area, and secured to the skull with Metabond and dental cement. A diode-pumped blue laser (473 nm, 25 mW, CrystaLaser) was used and connected via an optic fiber (200 µm/0.22 NA, M25L02, ThorLabs) to a 1-in/2-out fiberoptic rotary joint (Doric Lenses) to split the laser power and facilitate movement during the task. Stimulation consisted of 10-ms pulses delivered at 16.6 Hz for 5 min (2 min for one mouse) every 10 min throughout the session, with a mean power output of 5 mW (3.5-6 mW) at each fiber tip of the two-ferrule cannula implanted in the NAc core. The control mice for these experiments were Adora2a-cre+/− mice injected with AAV-hSyn-DIO-mCherry (serotype 5, titer = 6 × 10^12 virus molecules per ml, Gene Therapy Vector Core at the University of North Carolina).

Recordings were excluded if access resistance was greater than 25 MΩ or if input resistance was greater than the range expected for MSNs (>350 MΩ). eEPSCs were elicited by a pair of current pulses (0.2-ms width, 50-ms interval, every 20 s) using an ACSF-filled glass pipette placed ~250 µm rostro-dorsally. AMPA receptor-mediated eEPSCs were recorded while holding cells at −70 mV. eEPSCs were then recorded at +40 mV in the presence or absence of the NMDA receptor antagonist (R)-CPP (10 µM), and the NMDA receptor response was calculated by off-line subtraction. For current-clamp recordings from D2-MSNs, a potassium-based internal solution was used. Measurements of D2-MSN output in either the core or the ventral pallidum were carried out by whole-cell voltage-clamp recordings using a CsCl-based internal solution (all other components were kept the same as described above) while holding the cells at −20 mV in the presence of the AMPA receptor antagonist NBQX (5 µM) to isolate GABAergic IPSCs. IPSCs were evoked by activation of ChR2 triggered by a brief light pulse (0.2-ms width, 20-s interval) produced by a diode-pumped blue laser (473 nm, 25 mW, CrystaLaser) and delivered via a fiber optic (200 µm/0.22 NA, ThorLabs). All data were analyzed and plotted in Igor. CNO stock was prepared in water and tested at a final concentration of 10 µM in ACSF. Data were acquired using a Multiclamp 700B (Molecular Devices), filtered at 1 kHz and digitized at 5 kHz. Electrophysiological recordings were made blind to the behavior score of the mice, except for sham-surgery mice, which did not carry an intravenous port and were easily distinguishable to the experimenter. Data were collected from 16 sham-surgery mice and from 27 mice within 1-38 d of their last cocaine self-administration. Behavioral data were available for 22 mice (5 mice had no behavioral data; Supplementary Table 1). Recordings were collected from D1-MSNs in 9 mice, from D2-MSNs in 3 mice, and from both D1-MSNs and D2-MSNs in 14 mice; behavioral data were not available for all of these mice (see Supplementary Table 1 for details).

Two-photon glutamate uncaging. A custom-built two-photon laser-scanning microscope equipped with two Ti-Sapphire lasers was used to simultaneously image dendritic spines using an 840-nm laser and photo-release caged glutamate next to the spines using a 720-nm laser. Slices and solutions were prepared as described above. Whole-cell recordings were obtained from GFP-positive MSNs in the NAc core region at 25 °C.
Alexa Fluor 594 (10 µM) was added to the internal solution to visualize second- and third-order dendrites, and spines within 200 µm of the soma were randomly selected for imaging. For all experiments, MNI-glutamate (5 mM), gabazine (5 µM), TTX (1 µM) and the Ca2+ channel blockers mibefradil (20 µM) and conotoxin GVIA (1 µM) were added to the ACSF. For recordings of AMPA receptor uEPSCs, cells were held at −80 mV and AP5 (5 µM) was added to the external solution. For recordings of NMDA receptor uEPSCs, cells were held at −50 mV and low-Mg2+ (50 µM) ACSF containing NBQX (5 µM) was used. Glutamate uncaging was achieved with a 1-2-ms pulse of 720-nm light delivered next to the spine head every 30 s, and the laser power was standardized in advance by measuring the fluorescence recovery from photobleaching of the red dye in the spine head 34. Data were acquired using National Instruments data acquisition boards and custom software written in MATLAB 35. We averaged 10-15 uEPSCs per spine and calculated the peak amplitude in a 1-ms window around the peak. Recordings were collected from D1-MSNs in three cocaine mice and four sham-surgery mice (Supplementary Table 1).

Operant food self-administration. Adora2a-cre+/− and sham Adora2a-cre−/− littermate mice were stereotaxically injected into the NAc core with Cre-inducible AAV vectors expressing hM4Di-mCherry and allowed to recover for 9 d. Mice were placed in operant chambers in sound-attenuating boxes (Med Associates) in which they pressed a cued active lever (left or right, counterbalanced across mice) to earn regular chow pellets (20 mg). Presses on an inactive lever did not result in reinforcement. Beginning 2 d before training commenced, mice were food restricted to 90% of their baseline weight and maintained at that weight for the duration of the experiments. On the first day, mice were trained to approach the food magazine (no levers or cue light present) on a random time schedule, with a reinforcer delivered, on average, every 60 s for a total of 30 min. All other training sessions commenced with illumination of the house light, extension of the active and inactive levers, and illumination of a cue light located directly above the active lever, and ended after 60 min with the levers retracting and the cue and house lights turning off. Mice were trained on an FR1 schedule for 3-5 d and, once lever-pressing behavior was acquired, the fixed ratio requirement was increased to FR10. One hour before each session, mice received an injection of saline (10 ml per kg, intraperitoneal) during training for habituation, and they received either saline or CNO (1 mg per kg, 10 ml per kg, intraperitoneal) during testing. Testing of FR10 responding began after 3 d of baseline FR10 responding and consisted of 2 d of CNO and 2 d of saline injections. The order of drug testing (saline first versus CNO first) was counterbalanced across mice and kept constant across CNO testing. Progressive ratio testing followed, in which mice were given CNO or saline on alternating days (counterbalanced across mice) with an FR10 training session in between progressive ratio test days. The same progressive ratio schedule used for cocaine self-administration was used here 33, with an initial response requirement of 10 and a maximum achievable requirement of 2012.
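The maximum requirement of 2012 matches the widely used exponential progression of Richardson and Roberts, in which the requirement for reward j is round(5 × e^(0.2j)) − 5; the paper cites ref. 33 for the schedule, so the sketch below reproduces that progression under this assumption rather than as a confirmed transcription of the authors' schedule.

```python
import math

def progressive_ratio(n_rewards=30):
    """Exponential PR schedule (Richardson & Roberts form, assumed):
    response requirement for reward j = round(5 * exp(0.2 * j)) - 5.
    """
    return [round(5 * math.exp(0.2 * j)) - 5 for j in range(1, n_rewards + 1)]

ratios = progressive_ratio()
print(ratios[:10])   # [1, 2, 4, 6, 9, 12, 15, 20, 25, 32]
print(ratios[-1])    # 2012 -- the maximum requirement cited in the text
```

Under this formula the requirement grows slowly at first and then steeply, which is what makes the breakpoint a sensitive readout of motivation.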
Locomotor assessment. Locomotor activity was recorded in cages (10.25 × 6 inches) constructed of clear polycarbonate walls and floors under dim illumination (150 lx). Horizontal activity was detected as infrared beam crosses (1-inch spacing, ten beams per cage) made on consecutive beams (ambulatory counts) using Opto M3 activity monitors (Columbus Instruments). The response to the D1 agonist SKF81297 was assessed using a previously described protocol 22. Briefly, adult mice (8-16 weeks old) were allowed to run freely for 60 min, during which baseline locomotor activity was determined. They then received an intraperitoneal injection of either saline or 2.5 mg per kg SKF81297, a dose that increases locomotor activity in wild-type C57BL/6J mice 36, were immediately returned to the cage, and horizontal locomotion was measured for the next 150 min. In all cases, locomotor activity is expressed as the number of ambulatory counts per 10 min or per 1 h, or normalized to the baseline value obtained for each mouse before drug administration.

Quantitative PCR. Striatal tissue was microdissected from two adult mice (9-16 weeks old) of each genotype (Adora2a-cre+/− and control Adora2a-cre−/−). All mice received an intra-striatal injection of a viral vector that conditionally expresses hM4Di and 2-4 d of CNO (1 mg per kg) treatment. Total RNA was purified using the RNeasy Micro kit (Qiagen), and cDNA was synthesized using the iScript cDNA Synthesis Kit (Bio-Rad). mRNA expression of the dopamine D1 receptor gene Drd1a and the endogenous control gene Actb was determined using TaqMan Gene Expression Assays (Applied Biosystems). Quantitative PCR runs were performed using TaqMan Fast Polymerase (Applied Biosystems) in a StepOnePlus Real-Time PCR system. Cycling conditions were as follows: initial hold at 95 °C for 20 s, then 40 cycles of step 1 (95 °C for 1 s) and step 2 (60 °C for 20 s). Samples were run in quadruplicate, and negative controls were run in parallel. cDNA synthesis and quantitative PCR experiments were repeated three times. Relative quantification was calculated using the ∆∆Ct method (StepOne System Software, Applied Biosystems).

Statistics. All statistical comparisons were two-sided and were performed with IGOR or Prism. Paired and unpaired Student's t tests and one-way and repeated-measures two-way ANOVAs were used when appropriate. Bonferroni tests were used for post hoc multiple comparisons. One-sample t tests were used to compare normalized breakpoint values or rewards earned, where mentioned. The F test was used to compare the variance among groups and test the assumptions of each statistical test. In the case of AMPA/NMDA ratios and uncaging-evoked responses at single spines (uEPSCs), the normality assumption was not met and the nonparametric Mann-Whitney test was used instead. All replicates were biological, and errors were calculated as s.e.m. Sample sizes were not predetermined by statistical power analysis.
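Relative quantification by the ∆∆Ct method reduces to a small calculation; a minimal sketch with hypothetical Ct values follows (the study itself used the StepOne System Software for this step).

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the delta-delta Ct method.

    dCt  = Ct(target) - Ct(reference)   within each sample
    ddCt = dCt(sample) - dCt(control)
    fold = 2 ** (-ddCt)
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values for Drd1a (target) and Actb (reference):
fold = relative_expression(ct_target=24.1, ct_ref=17.8,
                           ct_target_ctrl=23.6, ct_ref_ctrl=17.9)
print(f"Drd1a relative expression: {fold:.2f}-fold vs. control")
```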
Aluminum Potassium Sulfate and Tannic Acid Injection for Hemorrhoids

A quick hemostatic effect, as well as sclerosing and shrinkage of hemorrhoids, can be attained when internal hemorrhoids are treated by using injection therapy with aluminum potassium sulfate and tannic acid (ALTA), and the outcomes of treatment may be similar to those of a hemorrhoidectomy. However, if the type of hemorrhoid or the method of injection is not appropriate for ALTA treatment, complications peculiar to ALTA or recurrence may develop. Accordingly, sufficient understanding of the treatment mechanism of ALTA injection and repeated training in the injection technique are required for effective use of ALTA treatment.

INTRODUCTION
In the past, 5% phenol almond oil (PAO) was most widely used for sclerotherapy for hemorrhoids [1]. The treatment mechanism of PAO injection is that PAO administered into the submucosal layer of a hemorrhoid and the hemorrhoidal pedicle induces inflammation in the areas surrounding the blood vessels in the hemorrhoid, and the fibrosis resulting from the inflammation reduces blood flow into the hemorrhoid and fixes the hemorrhoid to the mucosa. Accordingly, the effect of PAO injection on internal hemorrhoids with bleeding lasts for about 1 year and, thus, is not permanent. In addition, because positive treatment outcomes are not expected for prolapsed hemorrhoids, PAO injection is not applicable to internal hemorrhoids of grade 3 to 4 [1]. In contrast, aluminum potassium sulfate and tannic acid (ALTA) is a scientifically improved formulation of Xiaozhiling, which was approved by the Chinese government in 1979 and has been used in China since then; it was later introduced to Korea and Japan. The composition of ALTA was finalized after some of the additives of Xiaozhiling were modified in Japan. Experimental treatment started in 1998, and the Japanese government approved commercial use of ALTA in 2005. In Korea, ALTA has been imported from Japanese pharmaceutical companies and used since 2007.

MECHANISM OF ALTA INJECTION
Once ALTA is injected into hemorrhoids, blood flow to the hemorrhoids is interrupted, and a quick hemostatic effect and shrinkage of the hemorrhoids develop. With time, persistent fibrosis develops due to sterile inflammation; then, adhesion and fixation of the mucosa and the submucosal layer to the muscular layer are promoted. Finally, bleeding and prolapse of the hemorrhoid disappear [7] (Fig. 1).

INDICATIONS OF ALTA INJECTION
ALTA injection is effective only for internal hemorrhoids. In the case of mixed hemorrhoids having features of external piles, the application of ALTA is not appropriate. In addition, ALTA is applicable neither in cases of hemorrhoids in an acute stage, such as external thrombosed hemorrhoids and strangulated hemorrhoids, nor in cases accompanied by fibrosis and complications, such as fissures, hemorrhoidal fistulae and anal polyps. Further, in cases of large hemorrhoids that would require more than 40 mL of ALTA, surgical treatment should be considered because radical treatment outcomes with ALTA injection can hardly be expected.

METHODS OF ALTA INJECTION
Observance of the 4-step injection is required. The 4-step injection is a procedure of administering appropriate amounts of ALTA into 4 parts of the hemorrhoid: the upper, shallow middle, deep middle and lower parts [4,5] (Fig. 2). At step 1, about 3 mL of ALTA is injected into the submucosal layer of the upper part of the hemorrhoid.
At step 2, ALTA is injected into the submucosal layer of the middle part of the hemorrhoid in an amount 1 mL greater than the volume of the hemorrhoid. At step 3, about 1 to 2 mL of ALTA, about half the amount injected in step 2, is injected as the needle passes the lamina propria while being withdrawn from the muscularis mucosae, the point the needle reached at step 2. At step 4, about 2 to 3 mL of ALTA is injected into the deep submucosal layer 0.1 to 0.2 cm adoral to the dentate line at the lower part of the hemorrhoid; then, an additional 1 mL is injected while the needle is withdrawn. In cases of hemorrhoids with volumes of 1 cm³ or less and accessory hemorrhoids, only steps 2 and 3 are required (a sketch of this dosing arithmetic follows the Performance section below). In all steps, ALTA should not be injected into the muscular layer. Use of an ALTA-only needle is advantageous because resistance from the muscular layer is easily detected, and injection into the muscular layer can be prevented. In addition, when a Z-type ALTA-only anoscope, rather than an ordinary anoscope, is used, hemorrhoids can be observed from the front, the upper areas of the hemorrhoids can be observed, and fixation or compression is easily applied to the injection area so that the ALTA solution does not leak from the dentate line. Upon completion of the ALTA injection, the entire injected area is massaged with the fingers in order to diffuse the ALTA solution evenly into all areas of the hemorrhoid. Regarding the timing of the massage, a one-time massage after finishing the injections for all hemorrhoids was recommended in the past, but nowadays, immediate massage upon completion of the 4 steps for every hemorrhoid is recommended in order to prevent complications such as rectal ulcers caused by local stagnation of the ALTA solution in the hemorrhoid [9].

COMPLICATIONS OF ALTA
Complications that may develop after ALTA injection are shown in Table 1. Causes of complications may include an inappropriate injection site, an inappropriate dose, and excessive local stagnation. Therefore, strict observance of the 4-step injection process is required: an accurate injection site and dose should be selected, and immediate massage after ALTA injection is necessary [9,10]. Additional complications may include low abdominal pain, bradycardia and hypotension after ALTA injection, which are considered to be caused by a vagovagal nerve stimulation reflex in the pelvic cavity. Therefore, use of ALTA solution mixed with a lidocaine anesthetic agent may prevent symptoms of the vagovagal nerve stimulation reflex from developing [6].

PERFORMANCE OF ALTA INJECTION
In a multicenter study conducted by 10 institutions in Japan comparing ALTA injection therapy with hemorrhoidectomy for prolapsed grade 3 to 4 internal hemorrhoids, the disappearance rates at 28 days after treatment were 94% (75/80 cases) for the ALTA injection group and 99% (84/85 cases) for the hemorrhoidectomy group, showing no significant difference. At 1 year after treatment, the recurrence rate for the ALTA injection group was 16% (12/73 cases), which was slightly higher than the 2% (2/81 cases) for the surgery group. Considering that ALTA injection is a less invasive procedure with less pain compared with surgery and that the injection technique had not been fully mastered by the surgeons during the early period of its use, the results obtained with ALTA treatment are considered satisfactory [5].
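The per-pile dosing rules of the 4-step injection described above amount to simple arithmetic; the sketch below encodes them (step 1 ≈ 3 mL; step 2 = pile volume + 1 mL; step 3 ≈ half of step 2; step 4 = 2-3 mL plus 1 mL on withdrawal; surgery should be considered beyond a 40-mL total). This is an illustration of the published rules only, not clinical guidance, and the fixed choices within the stated ranges are assumptions.

```python
def alta_dose_per_pile(volume_ml, small_pile_cutoff=1.0):
    """Approximate ALTA volume (mL) for one hemorrhoid, per the 4-step rules.

    Piles with a volume <= `small_pile_cutoff` (cm3) and accessory piles
    receive only steps 2 and 3. Fixed choices within the published ranges
    (e.g. 2.5 mL for step 4's 2-3 mL) are illustrative assumptions.
    """
    step2 = volume_ml + 1.0      # middle part: pile volume + 1 mL
    step3 = step2 / 2.0          # about half of step 2, on needle withdrawal
    if volume_ml <= small_pile_cutoff:
        return step2 + step3     # small/accessory piles: steps 2 and 3 only
    step1 = 3.0                  # upper part of the pile
    step4 = 2.5 + 1.0            # lower part (2-3 mL) + 1 mL on withdrawal
    return step1 + step2 + step3 + step4

# Hypothetical pile volumes (cm3) for one patient:
piles = [2.0, 0.8, 1.5]
total = sum(alta_dose_per_pile(v) for v in piles)
print(f"estimated total ALTA: {total:.1f} mL (consider surgery if > 40 mL)")
```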
A study conducted by Kunimoto et al. [11] of 16 recurrent cases after ALTA injection found that the recurrence rate was high when ALTA injection therapy was first introduced because a sufficient amount of ALTA could not be injected owing to the development of symptoms such as bradycardia, pain in the lower abdomen and hypotension during the injection. Since then, use of a mixture of ALTA solution and lidocaine was introduced to prevent symptoms of the vagovagal nerve stimulation reflex; consequently, a sufficient amount of ALTA could be administered, and the recurrence rate decreased accordingly. According to a study on recurrence after ALTA injection, cases of internal-pile recurrence were treated with repeated ALTA injection, whereas cases of external-pile recurrence were treated with a hemorrhoidectomy. When ALTA injection is performed for hemorrhoids with components of external piles, the components of the external piles still remain after treatment. Therefore, screening of inappropriate cases should be conducted in order to select cases of internal piles for ALTA injection and to reduce the recurrence rate [11].

COMBINED MODALITY THERAPY OF A HEMORRHOIDECTOMY AND ALTA INJECTION
ALTA injection is effective only for internal hemorrhoids. However, even though most mixed hemorrhoids are practically not internal piles but include external piles, if a combined modality therapy of a hemorrhoidectomy and ALTA injection is applied to mixed hemorrhoids, the volume of excision can be less than that of surgery alone; thus, the development of pain and of complications such as anal stricture can be reduced, and the hospital stay and the wound healing time can be shortened, helping patients to return to normal life faster. We named this combined modality therapy the "hybrid hemorrhoidectomy," which has three types of injection methods (Fig. 3). Type A was named after the accessory pile: the main pile of the hemorrhoids is removed by surgery, and ALTA injection is then performed on the internal hemorrhoids included in the remaining accessory pile. Type B was named after big-sized piles: the external pile of the mixed hemorrhoids is removed by surgery, and ALTA injection is then performed on the remaining internal hemorrhoids. Type C was named after complete injection: ALTA injection is performed for hemorrhoids composed of internal hemorrhoids only. In particular, type B is a low ligation with ALTA injection, in which the external piles are ligated and excised at the dentate line and ALTA is injected into the areas of the internal piles (Fig. 4). Because the extent of excision in low ligation with ALTA injection is less than that in a conventional hemorrhoidectomy, anal stricture can be prevented when multiple piles are removed from circumferential hemorrhoids, and postoperative secondary bleeding rarely occurs because the site of ligation and excision is close to the dentate line [8].

CONCLUSION
When ALTA injection is applied to cases of internal piles only, the outcomes of treatment are almost the same as those of a hemorrhoidectomy, but with almost no pain. In cases of mixed hemorrhoids containing external piles, a combined modality therapy of a hemorrhoidectomy and ALTA injection can be used. Compared with the results obtained when using a hemorrhoidectomy alone, those obtained when using the hybrid therapy of a hemorrhoidectomy and ALTA injection include less postoperative pain, fewer complications such as anal stricture, a shortened period of wound healing and a faster return to normal life.
Thus, ALTA injection may be used with less pain for the radical treatment of hemorrhoids.
Laminarin-induced apoptosis in human colon cancer LoVo cells

A number of scientific studies have revealed that laminarin has antitumor effects. Therefore, the aim of the present study was to investigate laminarin-induced apoptosis of LoVo cells and the underlying mechanisms. LoVo cells were treated with various concentrations of laminarin, and fluorescence-inverted microscopy was used to observe the morphology of the laminarin-treated cells. In addition, western blotting was performed to analyze the expression levels of death receptor (DR)4, DR5, TNF-related apoptosis-inducing ligand (TRAIL), Fas-associated protein with death domain (FADD), caspase-8, caspase-3, Bid and tBid. Flow cytometry was conducted to analyze the expression of Bcl-2 and Bax, and spectrophotometry was performed to quantify the activity of caspases-8, -3, -6 and -7. Following the treatment of LoVo cells with laminarin for 24 h, the expression levels of DR4, DR5, TRAIL, FADD, Bid, tBid and Bax were observed to be upregulated, whereas the expression levels of pro-caspase-8, pro-caspase-3 and Bcl-2 were downregulated. In addition, the activities of caspase-8, -3, -6 and -7 were observed to increase, a significant difference compared with the control group. Therefore, laminarin is considered to induce the apoptosis of LoVo cells, which may occur via a DR pathway, suggesting that laminarin may be a potent agent for cancer treatment.

Introduction
Colon cancer is a common malignant tumor of the digestive tract and one of the four most common types of malignant tumors worldwide, and therefore presents a significant global health problem. Despite recent advances in chemotherapy for colon cancer, the outcomes of anticancer therapy remain unsatisfactory. Thus, further improvement of therapies for colon cancer is required. Numerous pharmacological studies have shown that polysaccharides from certain traditional Chinese medicines exhibit antitumor effects with fewer side effects. Laminarin is an active component extracted and isolated from the dry thallus of Laminaria japonica Aresch of the Laminariaceae family, or Ecklonia kurome Okam of the Araliaceae family (1). Laminarin consists of β-(1-3)-glucan with β-(1-6)-linkages. The antitumor effect of laminarin has been previously reported (2)(3)(4)(5), and Park et al demonstrated that laminarin inhibits HT-29 cell growth by decreasing cell proliferation and inducing apoptosis via the death receptor (DR) and insulin-like growth factor I receptor pathways (6). Our previous study revealed that laminarin increases the intracellular levels of reactive oxygen species and Ca2+, decreases intracellular pH levels and induces apoptosis in LoVo cells. In addition, laminarin was observed to open mitochondrial permeability transition pores (MPTPs), activating the death switch and subsequently decreasing the mitochondrial membrane potential, thus inducing apoptosis through an irreversible mitochondrial pathway. Furthermore, laminarin may alter the expression of apoptosis-related proteins, such as cytochrome c (cyt c), caspase-9 and caspase-3, in LoVo cells and induce apoptosis. Therefore, it may be hypothesized that laminarin induces apoptosis in human colon cancer LoVo cells through a mitochondrial pathway (7).
Main apparatus. The CKX41 fluorescence inverted microscope was purchased from Olympus (Tokyo, Japan), and the Mini-Protean Tetra and Mini Trans-Blot electrophoresis systems, Gel Doc XR imaging system and Model 680 microplate reader were purchased from Bio-Rad. The EPICS XL flow cytometer was obtained from Beckman Coulter (Miami, FL, USA) and the CO-150 CO2 incubator was purchased from New Brunswick Scientific (Edison, NJ, USA). The UV1000 UV-VIS spectrophotometer was purchased from Techcomp Limited (Shanghai, China).

Effect of laminarin on LoVo cell morphology. In total, 5 × 10^4 cells were seeded in six-well plates, cultured for 24 h and treated with various concentrations of laminarin for 72 h. Cells were then harvested by trypsinization, washed twice with cold PBS and fixed in 4% paraformaldehyde for 30 min at 4°C. The fixing solution was then discarded and the cells were washed twice with PBS. Next, the cells were stained with Hoechst 33258 for 20 min. The stain was discarded and the cells were washed twice with PBS prior to observation under a fluorescence microscope.

Effect of laminarin on the expression of DR4, DR5, TRAIL, FADD, caspase-8, caspase-3, Bid and tBid in LoVo cells. In total, 5 × 10^4 cells/ml (2 ml) were seeded in six-well plates and cultured for 24 h, followed by treatment with various concentrations of laminarin for 48 h. Cytoplasmic extracts were prepared with 150 µl cell lysis buffer on ice for 30 min. The solution was then centrifuged at 10,000 x g for 10 min and the supernatant was collected. The protein concentration was quantified using a detergent-compatible protein assay kit. Next, the proteins were mixed with 2× sodium dodecyl sulfate sample buffer, and a total of 40 µg of protein was separated in a 10% (w/v) polyacrylamide gel and blotted onto nitrocellulose membranes. The blots were blocked for 2 h and incubated with the primary antibody for 12 h. Subsequently, the membranes were washed in buffer and incubated with the secondary antibody in blocking buffer. Ponceau staining was performed to ensure equal loading, and the bands were detected with a BCIP/NBT alkaline phosphatase color development kit. The bands were then visualized and quantified using the Gel Doc XR imaging system.

Effect of laminarin on the activity of caspase-8, -3, -6 and -7 in LoVo cells. A total of 5 × 10^4 cells were seeded in 24-well plates, cultured for 24 h and treated with various concentrations of laminarin for 24 h. The cells were then digested with pancreatin and rinsed twice with PBS. Caspase activity was determined by colorimetric assay using the previously mentioned kits, according to the manufacturer's instructions. The optical density of the reaction mixture was quantified using a spectrophotometer at a wavelength of 405 nm.

Effect of laminarin on the expression of Bcl-2 and Bax in LoVo cells. A total of 5 × 10^4 cells were seeded in six-well plates, cultured for 24 h and treated with various concentrations of laminarin for 24 h. Cells were then digested with pancreatin and rinsed twice with PBS. Next, 2 ml paraformaldehyde (40 g/l) was added to fix the cells for 40 min. The fixing solution was then removed and the cells were rinsed twice with PBS. In total, 1 ml Triton X-100 (0.1%) was added for 15 min to permeabilize the cell membranes. The Triton X-100 was then removed and the cells were rinsed with PBS twice. Next, 1 ml bovine serum albumin (1%) was added to block the cells for 1 h.
The blocking solution was then removed, and mouse anti-human Bcl-2 and Bax antibodies were added to the cells and incubated for 1 h at 37°C. The supernatant was then removed and the cells were rinsed with PBS. Next, FITC-conjugated anti-mouse antibody was added and the cells were incubated for 30 min at room temperature. Finally, the supernatant was discarded and 500 µl PBS was added prior to analysis of the cells by flow cytometry (FCM).

Statistical analysis. Data are presented as the mean ± standard deviation. Statistical analyses were performed using analysis of variance to compare the different groups. P<0.05 was considered to indicate a statistically significant difference.

Results
Effect of laminarin on LoVo cell morphology. Inverted fluorescence microscopy revealed that cells in the control group grew normally, with cells adhering to the bottom of the plate. The shapes of the endochylema were fusiform, polygonal and irregular, with round cell nuclei. Following treatment with various concentrations of laminarin for 72 h, a high proportion of cells demonstrated apoptosis-like changes, including detachment, cellular swelling, rounding, disappearance of the microvilli and cytoplasmic condensation. Treatment with HCPT as a positive control drug for 72 h caused the cytoplasm to condense, and apoptotic bodies appeared in the LoVo cells. Consequently, the number of apoptotic bodies was observed to increase (Fig. 1). Treatment with HCPT as a positive control drug for 24 h caused the expression levels of DR4, TRAIL and FADD in the LoVo cells to increase markedly and the expression levels of procaspase-8 and -3 to decrease markedly compared with the control group (Fig. 2).

Effect of laminarin on the activity of caspase-8, -3, -6 and -7 in LoVo cells. The results showed that following treatment with laminarin for 24 h, the concentration of released pNA (p-nitroaniline, the chromophore read at 405 nm in the colorimetric assay) in LoVo cells had increased and was significantly different compared with that of the control group; accordingly, caspase activity increased. Following treatment with laminarin at concentrations of 400, 800 and 1,600 µl/ml, caspase activity increased by 34

Effect of laminarin on the expression of Bid, tBid, Bcl-2 and Bax in LoVo cells. Western blotting showed that following laminarin treatment for 24 h, the expression levels of Bid and tBid in LoVo cells increased in a concentration-dependent manner and were significantly different compared with those in the control group (Fig. 4). Treatment with HCPT as a positive control drug for 24 h led to an increase in the expression levels of Bid and tBid in the LoVo cells; additionally, the expression levels of Bid were markedly different compared with the control group (Fig. 4). The FCM results demonstrated that following treatment with laminarin for 24 h, Bcl-2 expression levels had decreased, whereas Bax expression levels had increased, in a concentration-dependent manner. The expression levels of Bcl-2 and Bax in the laminarin treatment group were significantly different from those in the control group (Table I). Treatment with HCPT as a positive control drug for 24 h led to a marked increase in the expression level of Bax in LoVo cells, and the expression level of Bcl-2 decreased markedly compared with the control group (Table I).
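In the colorimetric assay above, caspase activity is read out as absorbance at 405 nm of released pNA, and increases are reported relative to the control. A minimal sketch of that normalization follows, with hypothetical OD readings; the blank-subtraction step and the specific values are assumptions, not the study's data.

```python
def caspase_fold_change(od_treated, od_control, od_blank=0.0):
    """Fold change in caspase activity from OD405 readings.

    Subtracts a reagent blank (an assumed step) and normalizes the
    treated reading to the untreated control.
    """
    return (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical OD405 readings for caspase-3 at rising laminarin doses:
control_od = 0.21
for dose, od in [(400, 0.28), (800, 0.35), (1600, 0.44)]:
    fc = caspase_fold_change(od, control_od, od_blank=0.02)
    print(f"laminarin {dose}: caspase-3 activity x{fc:.2f} vs. control")
```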
Results

Effect of laminarin on LoVo cell morphology. Inverted fluorescence microscopy revealed that cells in the control group grew normally, adhering to the bottom of the plate; the cytoplasm appeared fusiform, polygonal or irregular in shape, with round nuclei. Following treatment with various concentrations of laminarin for 72 h, a high proportion of cells demonstrated apoptosis-like changes, including detachment, rounding, swelling, disappearance of the microvilli and cytoplasmic condensation. Treatment with HCPT as a positive control drug for 72 h caused the cytoplasm to condense and apoptotic bodies to appear in the LoVo cells; consequently, the number of apoptotic bodies was observed to increase (Fig. 1).

Effect of laminarin on the expression of DR4, DR5, TRAIL, FADD, caspase-8 and caspase-3 in LoVo cells. Treatment with HCPT as a positive control drug for 24 h caused the expression levels of DR4, TRAIL and FADD in the LoVo cells to increase markedly and the expression levels of procaspase-8 and -3 to decrease markedly compared with the control group (Fig. 2).

Effect of laminarin on the activity of caspase-8, -3, -6 and -7 in LoVo cells. The results showed that following treatment with laminarin for 24 h, the concentration of released p-nitroaniline (pNA) in the LoVo cells had increased and was significantly different compared with that of the control group and, correspondingly, caspase activity increased. Following treatment with laminarin at concentrations of 400, 800 and 1,600 µg/ml, caspase activity increased by 34

Effect of laminarin on the expression of Bid, tBid, Bcl-2 and Bax in LoVo cells. Western blotting showed that following laminarin treatment for 24 h, the expression levels of Bid and tBid in LoVo cells increased in a concentration-dependent manner and were significantly different compared with those in the control group (Fig. 4). Treatment with HCPT as a positive control drug for 24 h also led to an increase in the expression levels of Bid and tBid in the LoVo cells; in addition, the expression levels of Bid were markedly different compared with the control group (Fig. 4). The FCM results demonstrated that following treatment with laminarin for 24 h, Bcl-2 expression levels had decreased, whereas Bax expression levels had increased, in a concentration-dependent manner. The expression levels of Bcl-2 and Bax in the laminarin treatment groups were significantly different from those in the control group (Table I). Treatment with HCPT as a positive control drug for 24 h led to a marked increase in the expression level of Bax in LoVo cells, while the expression level of Bcl-2 decreased markedly compared with the control group (Table I).

Discussion

Apoptosis is a rigorous, active and orderly process of cell death that is regulated by numerous genes to maintain the stability of the intracellular environment (11). The DR pathway is one of the three major apoptosis pathways. Significant DRs include Fas, tumor necrosis factor (TNF) receptor 1 and DR3-6. The DRs, when bound by their respective ligands, mediate cell apoptosis (12). TRAIL is a member of the TNF superfamily. The apoptotic signaling pathway induced by TRAIL and its receptors is characterized by the selective promotion of tumor cell apoptosis (13,14). The mechanism by which TRAIL and its receptors induce apoptosis is as follows: DR4 and DR5 are functional receptors that contain death domains (DDs), which, upon binding TRAIL, transmit apoptotic signals into the cell. The DD on the C-terminus of FADD then interacts with that on DR4/DR5. The N-terminus of FADD contains a death effector domain (DED), which mediates the onward transmission of the apoptotic signal: when this DED binds the DED on procaspase-8, the TRAIL-DR4/DR5-FADD-procaspase-8 death-inducing signaling complex (DISC) is formed. The procaspase-8 in the DISC cleaves itself to produce active caspase-8, which activates two pathways of apoptosis signaling. In the first pathway, caspase-8 directly activates caspase-3, -6 and -7, which induces apoptosis via the DR pathway (15,16). In the second pathway, caspase-8 connects to the mitochondrion via the activation of Bid, which subsequently induces apoptosis through the mitochondrion (17,18).

HCPT is an agent with a unique spectrum of anti-tumor activity; it can significantly inhibit cell proliferation and induce apoptosis in colon cancer through both intrinsic and extrinsic pathways (19). In this study, it was used as a positive control drug, and it was demonstrated that HCPT was capable of inhibiting LoVo cell proliferation and inducing apoptosis through the extrinsic (DR-mediated) apoptotic pathway, upregulating the expression of DR4, TRAIL and FADD and downregulating the expression of procaspase-8 and -3, resulting in caspase activation and apoptosis. The results of the western blotting showed that laminarin increases the expression of DR4, DR5, TRAIL and FADD in LoVo cells and decreases the expression of procaspase-8 and -3 in a dose-dependent manner, effects similar to those of HCPT. This suggests that laminarin promotes DISC formation in LoVo cells, which induces apoptosis via a death receptor pathway mediated by TRAIL/DR4/DR5.

Caspase-8 activates two apoptotic pathways, and the results of the current study confirm that laminarin increases caspase-8 activity and induces LoVo cell apoptosis via the DR pathway, in which caspase-8 activates caspase-3, -6 and -7, thus inducing apoptosis. However, caspase-8 is also connected to the mitochondria via the activation of Bid, which subsequently induces apoptosis through the mitochondria. Bid is an apoptotic protein of the Bcl-2 family that has a BH3 structural domain; it is important in both the mitochondrial and DR pathways of cell apoptosis and is often termed the 'hub' of crosstalk regulation between them (20,21). Under normal physiological conditions, Bid is located in the cytoplasm in an inactive state; however, when the cell surface DRs are activated, the Bid proteins are cleaved into 15-kDa functional tBid fragments, which translocate to the mitochondrial membrane, where they cooperate with Bax proteins. This cooperation promotes the insertion of Bax into the mitochondrial membrane, resulting in changes in the conformation of the Bax proteins.
These changes damage the mitochondria, resulting in the formation of membrane pores that allow large amounts of cytochrome c (cyt c) to be released from the mitochondria. Subsequently, caspase-9 is activated, leading to the induction of apoptosis in the cells (22,23). In addition, tBid binds to Bcl-2 and inhibits its anti-apoptotic effect (24,25). Bcl-2 family members are important in the regulation and control of the apoptosis pathway and are divided into anti-apoptotic and pro-apoptotic proteins. The regulation of apoptosis involves the targeting of the mitochondria by anti- and pro-apoptotic Bcl-2 proteins. Bcl-2 inhibits the opening of the mitochondrial permeability transition pore (MPTP) and prevents apoptosis. In addition, Bcl-2 can directly or indirectly prevent cyt c release and form the Bcl-2-Apaf-1-caspase-9 complex, resulting in an anti-apoptotic effect. Bax, however, can promote MPTP opening and the subsequent release of cyt c, which activates the caspase cascade, eventually resulting in apoptosis.

The results of the current study demonstrated that laminarin increases the protein expression of Bid, tBid and Bax, together with their apoptosis-promoting effects. In addition, laminarin was found to reduce the protein expression of Bcl-2, and its anti-apoptotic effect, in LoVo cells, and may transmit apoptotic signals to the mitochondria through the Bid hub. Our previous study revealed that laminarin induces the opening of MPTPs in LoVo cells, which lowers the mitochondrial membrane potential and consequently increases the expression of cytoplasmic cyt c (7). This significantly increases the expression and activity of caspase-9 and -3. The results of the present study are closely consistent with those of our previous study, suggesting that laminarin may induce apoptosis in LoVo cells via two pathways: the DR and mitochondrial pathways.

In conclusion, laminarin upregulates DR4, DR5, TRAIL and FADD expression levels in LoVo cells, downregulates procaspase-8 and -3 expression levels and increases the activity of caspase-8, -3, -6 and -7. Therefore, laminarin may induce apoptosis in human colon cancer LoVo cells via the TRAIL/DR pathway. Laminarin also alters Bid, tBid, Bcl-2 and Bax expression levels in LoVo cells, which interact closely with the mitochondrial pathway. Thus, the present study indicates that laminarin induces apoptosis in LoVo cells via both the mitochondrial and DR pathways, suggesting that laminarin is a promising agent for cancer treatment. In addition, laminarin may be useful in the treatment and prevention of certain types of digestive tract cancer; appropriate drug formulations therefore remain to be developed for future clinical application.
Effects of Moderate-Intensity Training Under Cyclic Hypoxia on Cardiorespiratory Fitness and Hematological Parameters in People Recovered From COVID-19: The AEROBICOVID Study

Background: Recent studies have indicated that people who live at altitude have a lower incidence of coronavirus disease (COVID-19) and lesser severity in infection cases. Hypothesis: Hypoxia exposure could lead to health benefits, and it could be used in the recovery process as an additional stimulus to physical training to improve cardiorespiratory fitness (CRF). Study Design: Randomized controlled clinical trial. Level of Evidence: Level 2. Methods: The 43 participants, aged 30 to 69 years, were divided into a control group (CG, n = 18) and 2 training groups: normoxia (NG, n = 9) and hypoxia (HG, n = 16). Before and after the intervention, lactate threshold 2 (L2) and peak oxygen uptake (VO2peak) were evaluated, and a blood sample was collected at rest to evaluate hematological adaptation. Both training groups performed 8 weeks of moderate-intensity physical training on a bike. The HG trained under normobaric hypoxic conditions (fractional inspired oxygen [FiO2] = 13.5%). Results: The 8-week intervention promoted a similar improvement in the CRF of people recovered from COVID-19 in the HG (L2 = 34.6%; VO2peak = 16.3%; VO2peak intensity = 24.6%) and NG (L2 = 42.6%; VO2peak = 16.7%; VO2peak intensity = 36.9%). Only the HG presented differences in hematological variables (erythropoietin = 191.7%; reticulocytes = -32.4%; off-score = 28.2%) in comparison with baseline. Conclusion: The results of the present study provide evidence that moderate-intensity training in normoxia or hypoxia promoted similar benefits in the CRF of people recovered from COVID-19. Furthermore, the hypoxia offered an additional stimulus to training, promoting an erythropoietin increase and hematological stimulation. Clinical Relevance: The present exercise protocol can be used for the rehabilitation of people recovered from COVID-19 with persistently low CRF. In addition, this is the first study demonstrating that physical training combined with hypoxia, as well as improving CRF, promotes greater hematological stimulation in people recovered from COVID-19.

In March 2020, the World Health Organization classified the coronavirus disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) as a pandemic.52 News related to COVID-19 features daily in the media and is of great concern and impact for global public health.23 The COVID-19 pandemic is an unprecedented health emergency of our era, causing mortality and disease worldwide. The clinical presentation is diverse, ranging from asymptomatic infection to acute respiratory syndrome and damage to various body systems, including increased inflammatory markers, cardiovascular disorders, lung injuries, and kidney damage.23 However, a new demand arises in the post-COVID context because some symptoms can persist and limit people recovered from COVID-19. After recovery, it is possible to identify people with alterations in the cardiovascular and pulmonary systems, in addition to hematological parameters.5,23,28 Pulmonary injuries and cardiovascular disorders that impair cardiorespiratory fitness (CRF) have been described predominantly in hospitalized people with COVID-19, but also in asymptomatically infected individuals.2 The persistent symptoms related to this context are associated with a measurable functional deficit in physical fitness, highlighting reduced CRF.14,27
Clavario et al12 determined the functional capacity of COVID-19 survivors, in addition to the safety and tolerability of cardiopulmonary exercise testing (CPET), in 225 patients with confirmed COVID-19, 3 months after hospital discharge. It was verified that 88% of the patients had peak oxygen consumption (VO2peak) below the predicted value. The authors highlighted that 80% of patients experienced at least 1 disabling symptom, unrelated to the decrease in VO2peak and functional capacity. They concluded that approximately 33% of COVID-19 survivors have functional limitations 3 months after discharge, associated with muscle impairment.

Recent studies indicate that people who live at high altitudes (above 3000 m) have a lower incidence of COVID-19 and less severity in cases of infection.1,3,11,30,32,48 In addition, Brazilian cities with high altitude and low relative humidity have a reduced relative incidence and mortality rate of COVID-19.21 The factors involved in this lower susceptibility to COVID-19 are related to physiological and anatomical adaptations in the lungs, improving perfusion and capacity. Furthermore, the increase in erythropoietin (EPO) concentrations generates a cytoprotective effect with a broad action that reduces inflammatory conditions and microvascular lesions.1 In addition, recent studies have demonstrated the possibility of using EPO as an auxiliary approach in treating COVID-19.16,35,46,53 On the other hand, moderate-intensity interval exercise itself can reduce chronic inflammation and strengthen the immune system,8,42,51,59 reducing the severity and mortality of viral diseases.34 In addition, higher levels of CRF can produce short-term improvements in the immune and respiratory systems,40 both of which are affected by COVID-19.61

Training methods using hypoxia as an ergogenic resource (cyclic hypoxia) have existed since the 1960s.17,18,25,44,50 More recent studies have determined that this type of intervention can produce favorable results in health parameters and that physical training associated with the normobaric hypoxia condition is safe and can be performed with different populations, for example, reducing fat mass with a concomitant increase in lean mass10 and increasing CRF.9 Thus, we speculated that a moderate-intensity interval training program performed in cyclic normobaric hypoxia could be an efficient proposal for rehabilitating people recovered from COVID-19, improving their damaged CRF and increasing hematological stimulation. For this, we aimed to study the effects of 8 weeks of moderate-intensity cyclic hypoxic training on the cardiorespiratory capacity and hematological responses of people recovered from COVID-19.

Ethical Review

This study was approved by an institutional review board. All participants gave written informed consent.

Participants

Sixty-nine participants were recruited, of whom 43 completed all assessments and were therefore included in the study according to the following inclusion criteria: men and women aged between 30 and 69 years; approximately 30 days since the resolution of clinical signs or medical discharge (in case of hospitalization); and previous experience in aerobic exercise.
The exclusion criteria were as follows: exposure to an altitude higher than 1500 m in the last 3 months; significant physical limitations that would prevent carrying out the evaluations and intervention; acute or chronic clinical illnesses without medical supervision; anemia; use of immunosuppressive drugs; pregnancy; hormone replacement therapy; smoking; and excessive use of alcohol or drugs. In addition, an evaluation of health status was carried out, and the participants who did not present limitations or discomfort that could prevent the performance of the evaluations or the intervention were enrolled in the proposed intervention.

Experimental Design

The study design is a randomized controlled clinical trial composed of 3 groups: the control group (CG, n = 18), comprising participants who were not available to join the intervention but agreed to follow-up through the evaluations, and the physical training groups, which were randomly divided according to the association of training with hypoxia (HG, n = 16) or normoxia (NG, n = 9). The experimental protocol of the AEROBICOVID study (Figure 1) consisted of (1) familiarization and the initial evaluation (baseline [BSLN]) in the 3 sessions of week 0, with CPET on a bike and blood collection; (2) an 8-week intervention, with a partial evaluation (a CPET to adjust the training load) between weeks 4 and 5 (the middle of the intervention); and (3) reevaluation at week 9, after the end of the intervention (post), with the same evaluations as in week 0. All biosafety concerns were described elsewhere.56

Assessments

Anthropometric Variables

Body mass and height were measured with a 200 kg capacity electronic weighing scale (Mic-Pp, Micheletti), enabling calculation of the body mass index (BMI) as body mass (kg) divided by height squared (m²).

Questionnaires

As control variables, the International Physical Activity Questionnaire (IPAQ) short version, validated in Brazil,37 was used to measure the usual level of physical activity. In addition, the Food Consumption Markers Form of the Ministry of Health39 was used to assess the frequency of food consumption. Participants were instructed to maintain similar physical activity and eating habits during the study.

Hematological Parameters

Blood collection was performed by peripheral venous access after an 8-hour overnight fast, carried out by a trained and specialized professional. The hemogram parameters, such as total red blood cell count, hematocrit, and hemoglobin concentration, were evaluated at the Clinical Analysis Laboratory, Faculty of Pharmaceutical Sciences of Ribeirão Preto, according to the technical service's standard routine and methodology. The hematologic stimulation index (off-score) was obtained from hemoglobin (Hb, in g/L) and reticulocytes (Ret, in %) as off-score = Hb (g/L) − 60 × √Ret (%).26,47 The EPO plasma concentration was determined by immunoassay according to the manufacturer's instructions (EPO ELISA kit, R&D Systems).
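The off-score formula above is simple enough to restate in code; the sketch below implements it directly, with hypothetical hemoglobin and reticulocyte values rather than participant data.

```python
# A minimal sketch of the hematologic stimulation index (off-score):
# off-score = Hb (g/L) - 60 * sqrt(Ret %). Sample values are hypothetical.
from math import sqrt

def off_score(hb_g_per_l: float, ret_percent: float) -> float:
    """OFF-score from hemoglobin (g/L) and reticulocytes (%)."""
    return hb_g_per_l - 60.0 * sqrt(ret_percent)

# A drop in reticulocytes at constant hemoglobin raises the off-score,
# which is the direction of change reported for the HG in this study.
print(round(off_score(150.0, 1.2), 1))  # ~84.3
print(round(off_score(150.0, 0.8), 1))  # ~96.3
```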
Peak Aerobic Power (VO2peak)

A CPET was used to estimate lactate thresholds 1 (L1) and 2 (L2), VO2peak, and iVO2peak. The CPET was performed on a pendular cycle ergometer with mechanical braking (Ergométrica, Monark). The participants started with a 5-minute warm-up without any additional load; after that, the intensity was increased by 15 watts every 2 min until the participant could no longer maintain the 60 rpm cadence or reached volitional exhaustion. Oxygen uptake was measured breath by breath by a gas analyzer (K4b2, COSMED) calibrated according to the manufacturer's specifications. The VO2peak was defined as the highest average VO2 in the last 60 seconds of the test. The iVO2peak was the lowest intensity at which the individual reached VO2peak during the test. If the participant did not complete the last stage of the incremental test, the iVO2peak was estimated by the equation proposed by Kuipers et al.33 Blood samples (25 µL) were collected from the earlobe at the end of each stage using previously calibrated heparinized capillaries. Blood samples were immediately dispensed and homogenized in microtubes containing 1% sodium fluoride for blood lactate concentration [La] analysis using the YSI 2300 STAT analyzer (YSI, Yellow Springs, OH). The bisegmented method was used, in which the points relating [La] to intensity were fitted with 3 linear segments, and the 2 intersections obtained were taken as the L1 and L2 intensities.41 Concomitantly, heart rate (HR) and rating of perceived exertion (RPE) were monitored at the end of each stage.

Intervention

Hypoxia Instrumentation

A unidirectional mask (Air Safety) was connected to a 3-m-long flexible hose (IVPU, vacuum air PU 1.1/2 cm), and the opposite end was connected to a tent (2 m wide, 3 m long, and 2 m high; Colorado Altitude Training Tent) with 12,000 L of air capacity, connected to a hypoxia generator (CAT 430, Altitude Control Technologies). The system was kept on throughout the experiment because the air outlet of the hypoxia generator delivers approximately 50 L/min, whereas the participants' average ventilation after each effort could, in some cases, exceed 120 L/min; the tent reservoir thus made it possible to guarantee a low inspired fraction of oxygen (FiO2 = 13.5%) for the entire training session. The oxygen concentration inside the tent was monitored with an oxygen sensor (Oxygen Sensor R-17MED, Teledyne Analytical Instruments). The detailed strategy for hypoxia instrumentation is available in the published protocol of this study.56 For the NG, the participants performed the same procedures; however, the opposite hose end received ambient air (FiO2 = 20.9%) without the participants' knowledge (blind procedure). The altitude where the experiment occurred was 540 m above sea level.

Hypoxic Dose Calculation and SpO2/FiO2 Index

The hypoxic dose was initially defined in kilometer-hours as km·h = (altitude in m/1000) × exposure time in h.22 However, participants may show different peripheral oxygen saturation (SpO2) reductions for the same FiO2, leading to lower oxygen availability in hypoxic conditions concomitantly with the increased oxygen demand resulting from exercise. Therefore, each participant's hypoxic dose was also estimated using 2 distinct methods: the product of the exposure time in hours (t) and the average SpO2 reduction in the training session, considering a resting SpO2 value of 98%, so that hypoxic dose = (98 − average SpO2) × t, adapted from Millet et al38; and the SpO2/FiO2 (SF) index proposed by Soo et al.54
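To make the dose definitions concrete, the sketch below computes the three quantities just described (the classical km·h dose, the saturation-based dose adapted from Millet et al, and the SF index of Soo et al) for a single hypothetical session; the input values are illustrative, not participant data.

```python
# A minimal sketch of the hypoxic-dose estimates described above.
def dose_km_h(altitude_m: float, hours: float) -> float:
    """Classical hypoxic dose in km*h: (altitude/1000) * exposure time."""
    return (altitude_m / 1000.0) * hours

def dose_saturation(spo2_avg: float, hours: float, spo2_rest: float = 98.0) -> float:
    """Saturation-based dose in %*h: (98 - session-average SpO2) * time."""
    return (spo2_rest - spo2_avg) * hours

def sf_index(spo2_avg: float, fio2: float) -> float:
    """SpO2/FiO2 (SF) index."""
    return spo2_avg / fio2

# Hypothetical hypoxia session: average SpO2 of 88%, FiO2 = 0.135, 0.7 h exposure
print(dose_saturation(88.0, 0.7))  # 7.0 %*h for the session
print(sf_index(88.0, 0.135))       # ~651.9, near the HG session mean reported below
```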
Training Program

The training sessions were performed 3 times a week, with a total duration of up to 50.5 min. The initial part (warm-up) and the final part (cool-down) lasted 5 and 3 min, respectively, and were performed at low intensity, corresponding to "easy" on the RPE scale. In the main part, the intensity was based on the L2 values obtained in the initial and partial evaluations (90-100% of L2 from the first to the fourth week, and 100-110% of L2 from the fifth to the eighth week). Each training set consisted of efforts lasting 5 min at the L2-based intensity, with a 2.5-min break between efforts. Depending on the training week, the number of sets varied from 3 to 6 (Figure 1). HR and RPE were used to control the intensity throughout the training period. Furthermore, blood oxygen saturation was monitored to evaluate the hypoxia and training responses using a pulse oximeter (D300C1, Dellamed). The measurements were recorded at 3 moments: at rest, at the end of each effort, and at the end of the break.

Statistical Analyses

Data normality was checked using the Shapiro-Wilk test. After confirmation of normality, data were expressed as means (SD). Data were then analyzed using a 2 × 3 (time [BSLN × 8W] × group [CG × HG × NG]) repeated-measures analysis of variance (ANOVA). Additionally, the effect size (η2) was calculated for the comparisons between groups and interpreted according to Cohen.13 Statistical significance was defined as P < 0.05, and the Bonferroni post hoc test was used when necessary. The statistical tests were performed with the software JASP (version 0.13.1.0). Responsiveness to the intervention was computed by comparing the typical error (TE) and the smallest worthwhile change (SWC).29 TE was calculated by dividing the SD of the trial-to-trial difference score by √2.29 The SWC was derived from the between-subject SD multiplied by 0.2,6,29 representing a typical small effect. The option to present these thresholds (SWC and 2 × TE) is due to their relevance in clinical application; that is, changes greater than the SWC, and especially greater than 2 × TE, may indicate clinical significance regardless of statistical significance. Participants were considered responsive when, for a given variable, their change exceeded 2 times the TE.6,29,57 These analyses were performed in Microsoft Office Excel spreadsheets.
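The responsiveness thresholds lend themselves to a short worked example. The sketch below computes TE, SWC, and the 2 × TE criterion exactly as defined above, on hypothetical baseline and week-8 values rather than study data; for illustration, the two time points are treated as the pair of trials, whereas a dedicated reliability study would normally supply the trial-to-trial differences.

```python
# A minimal sketch of the TE/SWC responsiveness analysis:
# TE = SD of trial-to-trial differences / sqrt(2); SWC = 0.2 * between-subject SD.
import numpy as np

baseline = np.array([22.0, 25.5, 28.1, 24.3, 26.7])  # hypothetical VO2peak, mL/kg/min
week8    = np.array([25.9, 26.0, 32.2, 28.8, 27.1])

diff = week8 - baseline
te  = np.std(diff, ddof=1) / np.sqrt(2)   # typical error
swc = 0.2 * np.std(baseline, ddof=1)      # smallest worthwhile change

responders = diff > 2 * te                # responsive if change exceeds 2*TE
print(f"TE = {te:.2f}, SWC = {swc:.2f}, 2xTE = {2 * te:.2f}")
print("Responders:", responders.tolist())
```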
Control Variables

No differences were observed between groups in food records between baseline and 8 weeks (Appendix Tables A1 and A2, available in the online version of this article). In addition, no alterations were observed in walking and sitting times. Still, an increase in physical activity levels (moderate and vigorous) related to participation in the intervention program was observed (Appendix Table A3, available online). The participants' body mass and BMI are listed in Table 1; no statistical alterations were observed, and the differences in body mass and BMI were smaller than the smallest worthwhile change.

Internal Training Load and Hypoxic Dose

Internal training loads, obtained from exercise time and average RPE, were similar between groups (NG = 3071 ± 742 a.u. and HG = 2954 ± 1032 a.u.). The hypoxic dose calculated during the training sessions from FiO2 was 9.97 km·h for the NG and 64.6 km·h for the HG, while the hypoxic dose calculated from peripheral oxygen saturation was 160.5 (93.2) %·h for the NG and 1471.3 (354.6) %·h for the HG. In each session, the mean SF index calculated from FiO2 and SpO2 was 464.0 (3.1) SpO2·FiO2⁻¹ for the NG and 657.2 (17.5) SpO2·FiO2⁻¹ for the HG.

Maximum Parameters of the CPET

Table 2 describes the results obtained at the maximum intensity reached by the participants in the CPET at baseline and 8 weeks. Statistical improvements were observed in VO2peak (L/min) and iVO2peak in both trained groups (NG and HG) compared with baseline; however, no difference was observed between them. No differences were found in the CG. Regarding VO2peak relative to body mass (mL/kg/min), HRpeak, and [La]peak, no changes were detected. Statistical changes in RPE were observed only in the HG after 8 weeks of training (P = 0.01).

Submaximum Parameters of the CPET

Tables 3 and 4 describe the values associated with the submaximum parameters of the incremental test, L1 and L2. For L1, VO2 (mL/min and mL/kg/min) and power were statistically higher after the 8 weeks of training in the NG; in addition, the HG improved the power associated with L1. For L2, similar improvements were observed in VO2 (mL/min and mL/kg/min) and power in the NG and HG after the intervention compared with baseline. Figure 2 shows, from the individual changes relative to the SWC and 2 × TE, that both the NG and HG improved the aerobic capacity and power variables compared with the CG.

Hematological Parameters

Significant differences were not observed for erythrocyte counts, hemoglobin concentration, or hematocrit percentage between 8 weeks of training and baseline (Table 5). Conversely, EPO levels and off-score were higher in all groups at 8 weeks than at baseline (P < 0.01), but no difference between protocols (CG versus NG or HG) was demonstrated. On the other hand, a decreased level of reticulocytes was observed for all groups at 8 weeks, which was also statistically different from baseline (P < 0.01), again with no difference between experimental groups (Table 5). In addition, the HG showed an improvement in EPO and off-score and a decrease in reticulocytes after the 8-week intervention. Figure 3 shows, from the individual changes above the SWC and 2 × TE, that both the NG and HG improved EPO and off-score and decreased reticulocytes compared with the CG, but not erythrocytes, hemoglobin, or hematocrit.

Discussion

The main findings of the present study were that 8 weeks of moderate-intensity training improved the CRF of people recovered from COVID-19, and that hypoxia promoted gains similar to those of training in normoxia, with greater hematological stimulation. Other studies have systematically reported relationships between maximal exercise capacity7 and CRF (ie, VO2peak)45 and survival rates in patients with various pathologies. In symptomatic COVID-19 individuals, as in other respiratory diseases, a reduction in VO2max has also been observed. Barbagelata et al,4 in a cross-sectional study, showed that patients with post-COVID-19 syndrome had significantly lower (~10%) VO2peak (25.8 ± 8.1 mL/kg/min) compared with asymptomatic individuals (28.8 ± 9.6 mL/kg/min). Debeaumont et al15 evaluated physical fitness and its relationship with functional dyspnea by performing CPET in COVID-19 survivors 6 months after hospital discharge: those admitted to the general ward had a relatively preserved VO2peak (87% of predicted), while those requiring the intensive care unit had a VO2peak moderately reduced to 77%. These authors concluded that persistent dyspnea was associated with reduced physical fitness at 6 months. In the present study, participants who underwent the experimental training program (NG and HG) showed significant improvements in VO2peak of 3.9 and 3.5 mL/kg/min, respectively; moreover, a clinical change was observed for these groups, because most individuals were responsive (changes greater than the SWC and 2 × TE) to the intervention, while the CG did not significantly modify its values.
These results suggest that the proposed training model effectively improved the aerobic power of people recovered from COVID-19. Compared with the NG, the HG showed similar improvements in all physiological responses (L1, L2, and VO2) and in their respective workloads. We understand that this is a significant result from a practical point of view, especially for the HG participants, who achieved aerobic and functional gains similar to those of the NG with a likely reduction in the external workload during exercise in hypoxia (861 ± 45 kpm/min). Sharma et al50 found reductions of 6% and 4% in the intensity of LT and VO2peak, respectively, in middle-distance runners at 2100 m of normobaric hypoxia. Furthermore, these authors concluded that, in general, altitude training at the same intensity seems to correspond to an increase in difficulty of approximately 30%. In another study, the same authors50 found 5.5% reductions in the velocity at VO2max (20.1 ± 1.3 vs 19.0 ± 1.0 km/h) in highly trained runners at 2100 m. The highest proportion of participants with relevant (greater than the SWC) changes in VO2peak was observed among those who trained in hypoxia, even though both groups showed similar substantial increases in the intensity corresponding to VO2peak. Since the NG and HG participants underwent the same training protocol, the intensity of the stimuli was prescribed based on internal training load variables (ie, HR and RPE) corresponding to L2; in fact, in the present investigation the internal training loads were similar. Corroborating our findings, Liu et al36 reported similar gains from training in hypoxia (FiO2 = 15.3%) compared with the same program performed in normoxia, with exercise intensities based on the HR corresponding to 80% of VO2peak; despite the similar internal load, the average external work of the HG was 25% lower than that of the NG. Collectively, these studies seem to demonstrate that during training performed at altitude, the absolute workload at LT and VO2peak is substantially reduced, and this presents a significant advantage, especially in the health area, because it is possible to achieve the same internal load with less mechanical stress.

A study pointed out that the first adaptations usually observed after sufficient exposure to hypoxia are hematologic, with an increase in the number of erythrocytes and in hematocrit leading to greater oxygen transport.49 However, training performed in hypoxia can also affect other genetic factors controlled by HIF-1α that are associated with performance and muscle adaptations, without necessarily increasing oxygen-carrying capacity.20,55 Additionally, our results showed that participants belonging to the HG increased their EPO levels compared with the CG and NG. Although EPO and blood parameters were measured only before and after the 8 weeks, it is still possible to observe positive clinical effects of hypoxia in the reduction of reticulocytes with a concomitant increase in the off-score. These last results may be clinically relevant because EPO stimulates the production of erythrocytes and, consequently, red blood cells, which facilitate oxygen transport to the target tissues.31 There is still no consensus in the literature regarding the minimum dose needed to stimulate EPO production. Wojan et al58 recently investigated the effects of hypoxia exposure itself on EPO production.
They found that eight 4-minute passive cycles of intermittent hypoxia, with a target SpO2 of 80%, represent the shortest protocol to increase serum EPO levels in healthy individuals. Therefore, despite being slightly below the dose recommended for passive exposure, the hypoxia doses used in the present investigation were sufficient to stimulate EPO increases, probably because of their combination with moderate-intensity training. Because clinical changes in hematological variables were observed for all groups, this phenomenon could reflect a natural process of EPO increase and reticulocyte decrease over the long period following COVID-19 recovery, rather than an effect of training.

The increase in EPO, observed predominantly in the HG, may be of particular importance for people recovered from COVID-19. Pramsohler et al43 recently investigated the relationship between COVID-19 and EPO levels in 59 COVID-19 patients hospitalized in the intensive care unit, divided according to disease severity into mild, severe, and critical. Reduced hemoglobin levels were found in the critically ill group and in the group of patients who died. In addition, the coefficient of variation of the red blood cell distribution width and the ferritin values were significantly higher in the intubated and deceased groups. Finally, it was found that the EPO levels of the patients who died were substantially lower than those of the control group and of the surviving patients. The other aspect refers to the greater hematological stimulation promoted by hypoxia, which can be seen in the 32.5% decrease in reticulocytes concomitant with the substantial 28% increase in the off-score for the HG. Faulhaber et al,19 using a single-blind model, compared the effects of exposure to hypoxia (continuous and cyclic) on key markers of hematologic adaptation, stress, and cardiac damage in elderly people. Both hypoxia protocols lasted approximately 70 minutes, and the SpO2 severity was 85%. Red cell content increased only on day 5 of exposure to hypoxia compared with baseline values (+7.7%, P < 0.01), whereas hematocrit and off-score increased only at the end of the experiment. The authors concluded that there are differences between the responses to continuous and cyclic hypoxia protocols when the objective is to stimulate hematological alterations. Despite discussions about the risks and benefits of using EPO,16,24,46 clinical randomized studies are still needed to demonstrate its effectiveness in people recovering from COVID-19. In the present study, the EPO increment resulted from the combination of exposure to hypoxia and moderate-intensity aerobic training, which is therefore a possible nonpharmacological strategy to improve EPO levels and hematological parameters in individuals recovered from COVID-19.

This study had significant limitations. The external load during the training sessions was not controlled, limiting the conclusions regarding the exercise dose performed by each experimental group. In addition, the number of participants allocated to each group was small, especially in the NG, owing to participant dropout during the project. Moreover, the severity of the disease and the level of impairment after COVID-19 infection were not the same for all participants. Also, the age and physical fitness distributions were heterogeneous because diverse populations were enrolled to provide a more generalizable clinical approach.
In conclusion, based on the findings reported in the present study, 8 weeks of moderate-intensity training in normoxia or hypoxia promoted similar benefits in the CRF of people recovered from COVID-19. Furthermore, hypoxia exposure provided an additional stimulus to training, increasing EPO levels and promoting hematological stimulation. Therefore, this type of intervention is suggested as an alternative nonpharmacological treatment for individuals recovering from COVID-19.
MEMe model in the Post-Newtonian limit

We examine the post-Newtonian limit of the MEMe model presented in [J. C. Feng, S. Carloni, Phys. Rev. D 101, 064002 (2020)] using an extension of the PPN formalism which is also suitable for other type-I Minimally Modified Gravity theories. The new PPN expansion is then used to calculate the monopole term of the Post-Newtonian gravitational potential and to perform an analysis of circular orbits within spherically symmetric matter distributions. The latter shows that the behavior does not differ significantly from that of GR for realistic values of the MEMe model parameter $q$. The former shows that one can use precision measurements of Newton's constant $G$ to improve the constraint on $q$ by up to $10$ orders of magnitude.

I. INTRODUCTION

A recent article [1] introduced a class of Generalized Coupling Theories (GCTs), the simplest of which was termed the Minimal Exponential Measure (MEMe) model. These are modified theories of gravity that do not introduce new dynamical degrees of freedom; rather, they modify the interaction between spacetime and matter in a manner that preserves the Einstein equivalence principle (all matter is minimally coupled to an effective spacetime geometry). According to the classification scheme of [2][3][4], GCTs and the MEMe model are Type I Minimally Modified Gravity (MMG) theories, since they have only two dynamical degrees of freedom and admit an Einstein frame (in the sense that the theories may be rewritten as General Relativity [GR] with a modified source). While it was shown in [1] that the dynamical behavior of the MEMe model differs significantly from GR under the conditions present in the early universe and within a matter distribution, the MEMe model reduces to GR in a vacuum; in this respect, the MEMe model is qualitatively similar to the Eddington-Inspired Born-Infeld (EiBI) theory [5]. However, the predictions of the MEMe model differ from those of GR within a matter distribution and in its coupling to matter. The purpose of the present article is to determine the degree to which these differences can be measured in the post-Newtonian limit.

Modified gravity theories, i.e. those that attempt to go beyond GR, have been extensively studied for at least three motivations: (i) to understand or solve mysteries in cosmology such as the origins of dark energy, dark matter, and inflation; (ii) to help develop the theory of quantum gravity; and (iii) to understand GR itself. Regarding (iii), even if GR is the genuine description of gravity in our universe for a certain range of scales, the only way to prove it experimentally/observationally is to constrain possible deviations from GR by experiments and observations. In this regard, it is useful to have a universal parameterization of possible deviations from GR. For solar system scales, the so-called parameterized post-Newtonian (PPN) formalism has proved to be particularly useful. The standard PPN formalism includes 10 parameters to parameterize deviations from GR and covers a wide range of gravitational theories beyond GR [6]. However, there is no guarantee that the standard PPN formalism can be applied to all modified gravity theories. For example, in gravitational theories without full diffeomorphism invariance, one cannot, in general, adopt the standard PPN gauge and thus may have to introduce additional PPN potentials/parameters (see e.g. [7]).
The MEMe model we consider here also requires an extension of the standard PPN formalism, for a different reason: the non-trivial matter coupling inevitably generates potentials that are not included in the standard PPN formalism. These potentials are not only relevant in themselves, but they are also necessary to compute the standard 10 PPN parameters. In this article, we shall construct an extension of the PPN formalism appropriate for a subclass of Type I MMGs and GCTs, based on additional PPN potentials. We will then focus on the MEMe model, finding that one must add to the PPN metric a single new potential, which we denote Ψ, and some additional counterterms. All of the counterterms are proportional to the pressure, mass density, and energy density for a fluid, so they vanish outside matter sources. This is, however, not the case for Ψ. Hence the PPN parameters for the MEMe model, in the case of a test particle in an external field, agree with those of GR except for the coefficient associated with Ψ.

In the context of the MEMe model, we find that the effects of the potential Ψ can be absorbed into the Newtonian potential outside a matter distribution. This result suggests that the modification to the matter couplings can, in the post-Newtonian limit, be reinterpreted as a density-dependent modification of the gravitational constant G. Comparing with [8], we argue that current laboratory methods can improve the constraint on the (single) parameter q in the MEMe model by 10 orders of magnitude over the speed-of-light constraint discussed in [1]. We also study circular orbits in the presence of spherically symmetric matter distributions and compare the predictions of the MEMe model with GR. Our findings suggest that in most astrophysical systems, the presence of a dilute matter distribution does not significantly affect the motion of matter in the MEMe model.

This paper is organized as follows. First, we review the MEMe model and GCTs in Sec. II. We then discuss the Newtonian limit in Sec. III and develop the PPN formalism for the MEMe model in Sec. IV. Afterward, in Sec. V we consider to post-Newtonian order the monopole term for the MEMe model and discuss how constraints on the variation of the effective gravitational constant may be used to constrain the parameter q in the MEMe model. Finally, in Sec. VI we compare the behavior of circular orbits within a spherically symmetric matter distribution in the MEMe model to that of GR. Sec. VII is then devoted to a summary of the paper and some discussions.

II. GENERALIZED COUPLING THEORIES AND THE MEME MODEL

Generalized coupling theories are defined by an action of the form [1]: where the Jordan-frame metric ḡ_µν (we write ḡ_µν for the Jordan-frame metric and g_µν for the Einstein-frame metric) is assumed to have the form: and the function F = F(A··) is chosen so that in a vacuum, A^µ_α = δ^µ_α is an extremum of the action. Upon varying the action with respect to the metric and remembering that A^µ_α is independent of g_µν, one obtains field equations of the form: where Ā^α_µ is the inverse of A^µ_α, and f^ν_α = f^ν_α(A··). The MEMe model, discussed at length in [1], is a simple example of a generalized coupling theory. The MEMe model is defined by the following action: where κ := 8πG and the Jordan-frame metric ḡ_µν is defined (with A := A^σ_σ) by: and Λ̄ = Λ − λ, with Λ being the observed value of the cosmological constant. Unless stated otherwise, indices are raised and lowered using the Einstein-frame metric g_µν and its inverse g^µν.
Defining the parameter q as follows: the equation of "motion" for A^µ_α takes the following form: where T_µν is the energy-momentum tensor defined by the functional derivative of ∫ L_m[φ, ḡ··] √−ḡ d⁴x, and T := ḡ^µν T_µν. Here, we assume qT ≠ 4, so that Eq. (8) can be solved for A^µ_α; the resulting gravitational field equations take the form of Eq. (9), where Ā^α_µ is the inverse of A^µ_α as already explained, and |A··| = det(A··). One may see from the form of Eq. (9) that the MEMe model admits an Einstein frame in the sense of [2], making this a Type I MMG. Here, the operating definition of an Einstein frame is a choice of variables in which a theory is recast as GR with a modified source, which may involve additional degrees of freedom. We define the Jordan frame as a choice of variables in which matter is minimally coupled to the metric tensor; in the MEMe model, it is the frame in which matter is coupled to the metric tensor ḡ_µν. We should stress, however, that despite some similarities, these frames are not related by the well-known conformal transformations in modified gravity. The choice of frame is important also because it specifies the worldlines of free-falling test particles: since matter is minimally coupled to the Jordan-frame metric, one expects small clumps of matter to follow the worldlines of test particles as defined by the Jordan-frame metric.1

1 One should keep in mind that since A^β_α = δ^β_α in a vacuum, the Einstein- and Jordan-frame metrics coincide in the absence of matter.

Equation (8) can be solved exactly for a single perfect fluid. The dual (lowered-index) fluid four-velocity u_µ is constructed from the gradients of the fluid potentials, so it is appropriate to regard u_µ as a metric-independent fluid variable. The energy-momentum tensor for the fluid takes the form: the Jordan-frame trace of which is T = 3p − ρ. Note that while ḡ^µν u_µ u_ν = −1, in general g^µν u_µ u_ν ≠ −1. It is useful also to define a four-velocity vector U^µ which is normalized with respect to the Einstein-frame metric g_µν. Defining ε := g^µν u_µ u_ν and u^µ := g^µν u_ν, one can obtain such a four-velocity: and it follows that u^µ u^ν = −ε U^µ U^ν.

Since the MEMe model admits two metric tensors g_µν and ḡ_µν, one should be careful when raising and lowering the indices of the four-velocity: while the (dual) vector u_µ is the lowered-index Jordan-frame four-velocity, the raised-index Jordan-frame four-velocity ū^µ := ḡ^µν u_ν is in general not equal to u^µ. One may obtain a simple relationship between the respective raised and lowered components of the Jordan-frame fluid four-velocity by first noting that A^µ_α u^α ∝ u^µ and Ā^α_µ u_α ∝ u_µ; it follows that ū^µ ∝ u^µ. One may then write ū^µ = a u^µ, where a is some factor. Now recall that u^µ u_µ = ε, and since ū^µ u_µ = ḡ^µν u_ν u_µ = −1, one can show that a = −1/ε and obtain the result ū^µ = −u^µ/ε. It follows that g_µν ū^µ ū^ν = 1/ε, and U^µ U^ν = −ε ū^µ ū^ν.
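Since the extraction above compresses several index manipulations, it may help to restate the algebra fixing the factor a in one place. The notation follows the reconstruction used here (ū^µ := ḡ^{µν} u_ν for the raised-index Jordan-frame four-velocity); index placements should be checked against the original article.

```latex
% Sketch of the algebra determining a in \bar{u}^\mu = a\,u^\mu.
\begin{align}
  \bar{u}^\mu u_\mu &= a\, u^\mu u_\mu = a\,\varepsilon ,\\
  \bar{u}^\mu u_\mu &= \bar{g}^{\mu\nu} u_\nu u_\mu = -1
  \quad\Longrightarrow\quad a = -\tfrac{1}{\varepsilon} ,\\
  g_{\mu\nu}\,\bar{u}^\mu \bar{u}^\nu
  &= \tfrac{1}{\varepsilon^2}\, g_{\mu\nu}\, u^\mu u^\nu
  = \tfrac{1}{\varepsilon} .
\end{align}
```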
Given the following ansatz for A^µ_α: one can easily solve Eq. (8), with the result: The inverse Ā^α_µ = [δ^α_µ + εZ(Y + εZ)⁻¹ U^α U_µ]/Y has a similar form. The gravitational equation (9) then takes the form: where T_µν is the effective energy-momentum tensor in the Einstein frame, defined by: with the following expression for the determinant: So far, the gravitational field equations (3), (9), and (17) have been written as dynamical equations for the metric tensor g_µν.

One can in principle attempt to re-express the field equations in terms of the metric ḡ_µν. This can be done by solving Eq. (6) for g_µν and inserting the resulting expression into the Einstein tensor, to obtain an expression for the gravitational field equations in the Jordan frame. In this case, the resulting field equation will contain derivatives up to second order of the tensor A^µ_α. We do not report the form of this equation here, as it is rather long. However, we wish to highlight this feature of the Jordan-frame field equations, as it is relevant for the following discussion of the distinction between the MEMe model and other modified gravity theories, and also for the extension of the PPN formalism that we will present in the next section.

The reader may note that the MEMe model superficially resembles other modified gravity theories that can be interpreted as a modification of the gravitational coupling, such as scalar-tensor theory or disformal theories [9][10][11][12][13]. Indeed, as pointed out in [1], the Jordan metric ḡ_µν may be viewed as a type of vector disformal transformation [11]. However, the difference here is that the MEMe model, being an MMG, does not introduce additional dynamical degrees of freedom; the tensor A^µ_α is an auxiliary field, the components of which can be expressed directly in terms of the fluid quantities ρ, p, u_µ and the metric. As discussed in [14], the addition of an auxiliary field in a gravitational theory will generically produce terms involving derivatives of the energy-momentum tensor in the field equations. While the MEMe model evades this problem in the Einstein frame, the derivatives of A^µ_α present in the Jordan-frame equations discussed in the preceding paragraph will, by way of Eq. (8), generate terms containing up to second-order derivatives of T_µν. The standard PPN formalism is not equipped to handle such terms, and in the following sections we propose and develop methods for dealing with this obstacle.

III. NEWTONIAN LIMIT OF THE MEME MODEL

It is helpful to first consider the Newtonian limit of the MEMe model. In doing so, we will assume that q is at most of order one; such a choice is motivated by the values found for the modulus of q in [1]. This assumption, combined with the smallness assumption on ρ that is made in the Newtonian and post-Newtonian analysis, implies that in our calculation we have at most:

Our primary aim in this section is to identify and study the Newtonian potential in the MEMe model. We begin by expressing Eq. (17) in the form: where T = g^αβ T_αβ. In an appropriately chosen coordinate system (see also Ch. 4 of [6,15] for further discussion), the (0,0) component becomes: where we have used the behavior of the metric in the Newtonian limit. From Eq. (19), and taking into account the fact that in our approximation det(A) ≈ 1, we obtain: so that, defining R_00 = ∆Φ_E (with ∆ being the Laplacian):

However, from an operational point of view, an accelerometer would measure the Newtonian limit of the Jordan-frame metric ḡ_µν. Such a potential, Φ_J, would be related to Φ_E by the relation: In the case of MEMe, the coefficient C is given by: In order to preserve the traditional notation, we will from this point on work in terms of a potential U satisfying an equation of the same form as Eq. (25). While it is convenient to work in terms of a potential satisfying Eq. (25), one should keep in mind that the physically relevant potential is Φ_J, which we will relate to U as we develop the extended PPN formalism in the next section.
The expression for Φ_J in Eq. (26) brings up a potential conceptual difficulty. If ρ has a sharp discontinuity, as one might expect at the boundary of a star, the gradient of Φ_J can be large; a similar difficulty has been identified in the qualitatively similar EiBI theory [16]. However, a large gradient in Φ_J implies a strong gravitational force, which would lead to a rearrangement of matter. One would expect this gravitational backreaction on the matter distribution to drive the system away from large gradients in Φ_J (similar arguments [17] have been made for the corresponding difficulty in EiBI; see also [18]).

IV. EXTENDED PPN FORMALISM

Naively, one might expect that the PPN formalism applied to generalized coupling theories in the Einstein frame yields a set of PPN parameters which are the same as those of General Relativity. In the MEMe model, for instance, the theory is identical to GR if the energy-momentum tensor T_µν as defined in Eq. (18) has the perfect fluid form. However, as established in [1], the Jordan-frame metric is the physically relevant metric, since it is the metric which couples directly to matter. Moreover, the microscopic description of matter is specified by the action of matter fields minimally coupled to the metric in the Jordan frame, and thus gives the equation of state of the matter fluid in the Jordan frame. It is therefore appropriate to introduce the PPN potentials and parameters in the Jordan frame. On the other hand, it is more convenient to perform most of the computations in the Einstein frame. Notice that the distinction between the two frames concerns only physical systems in which matter sources play important roles, and therefore does not affect the corrections to, e.g., celestial mechanics on solar system scales.

It may be helpful to provide a brief overview of our procedure, which we first develop for a more general class of modified gravity theories and generalized coupling theories, and then apply to the MEMe model. We first attempt to apply the PPN formalism to the Jordan-frame metric, but we find that, to avoid higher-order derivatives of the PPN potentials in the field equations, counterterms must be added to the Jordan-frame metric. We then express the Einstein-frame metric in terms of Jordan-frame variables, so that we can use the simpler field equation (17) in the PPN analysis.

A. Standard PPN formalism

We follow the conventions of [6], with the post-Newtonian bookkeeping (the total mass density being ρ(1 + Π)): so that the velocity components v^i are of order O(ϵ^{1/2}). It should be mentioned that v^i, the raised components of the three-velocity in the Jordan frame, do not correspond directly to the components of u^µ, but to the raised-index four-velocity ū^µ in the Jordan frame. Recall that the distinction between u^µ and ū^µ is necessary because there are two metric tensors in generalized coupling models. The components of ū^µ have the explicit form: where v is the coordinate 3-velocity of the fluid in the Jordan frame, with components v^i. In terms of Jordan-frame fluid quantities, one may use Eqs. (14) and (18) to write the source of the gravitational field equation (17) as follows:
Following [6] (and the coordinate conventions therein), we introduce the conserved rest-mass density ρ*, which is defined according to the following formula: Given ρ*, one may then define the following PPN potentials by differential relations:2 and the following potentials by integral relations: In the standard PPN formalism, the metric tensor is expanded as follows: The metric is inserted into the field equations and expanded to the required PPN order; one then matches terms proportional to each of the potentials in Eqs. (32)-(40) to obtain the PPN coefficients.

2 We point out to the reader that while the PPN formalism in [6] is equivalent to that of [15], the definitions of the PPN potentials have changed (though the PPN parameters are the same); where the PPN potentials in [15] are defined with respect to ρ, the PPN potentials in [6] are defined with respect to ρ*. This change results in a change in the coefficients in front of the potentials Φ_1 and Φ_2 in Eq. (46) for the metric component ḡ_00.

B. Extended PPN formalism

The procedure outlined in the preceding section does not suffice for certain classes of modified gravity theories. For instance, one might imagine in four dimensions a rather general theory of the form (the Cayley-Hamilton theorem has been employed on the RHS): where e^µ_ν contains additional geometric or gravitational terms, T^µ_ν is the energy-momentum tensor, and A_i(T··) and B(T··) are scalar functions that are polynomials in scalar invariants of T^µ_ν up to third order. Examples of such theories include EiBI [5] and the braneworld model of [19]. We also note that Eq. (44) is a subcase of the gravitational field equation given in [14]. We consider a class of Type-I MMG theories in which the source terms in the Einstein frame can be written exclusively in terms of the energy-momentum tensor, so that e^µ_ν = 0.

Expanding the RHS of Eq. (44) to post-Newtonian order, one has a term proportional to ρ²; however, the PPN expression for the Ricci tensor does not contain any term that can absorb it. One may remedy this by adding to the expansion of g_00 in Eq. (41) a term of the form 2νΨ, where Ψ is an O(ϵ²) potential defined by the following: We note here that, unlike the standard PPN potentials, this additional potential Ψ is dimensionful; since the metric components must be dimensionless, it follows that the associated parameter ν must also be dimensionful. We attribute this to the fact that the coefficient of the ρ² term which appears in the PPN expansion of (44) introduces an additional scale into the theory. Later, we shall see this explicitly when applying the extended PPN formalism to the MEMe model.

We now turn to the case of generalized coupling theories as described by Eqs. (1)-(2). For an appropriate choice of reference frame, the extended PPN metric for the Jordan frame would take the form (note the addition of the term 2νΨ in ḡ_00): However, one still encounters a difficulty when attempting to apply the standard PPN analysis to Eq. (9). As discussed earlier, the gravitational field equations in the Jordan frame contain up to second-order derivatives of T_µν. It follows that the direct application of the PPN form to the Jordan-frame metric will introduce terms involving second derivatives of the fluid potentials and four-velocity, but the standard PPN formalism and the extended formalism encapsulated in Eqs. (46)-(48) are incapable of absorbing these terms.
To see this, consider the following expression for the Einstein-frame metric g_µν: From Eq. (4), the tensor Ā^α_µ and the factor Ξ = Ξ(A··) depend on ρ*, Π and p, and we assume ḡ_αβ takes the usual PPN form given in Eqs. (46), (47), and (48). Upon inserting the metric (49) into Eq. (9) and expanding the Ricci tensor, one obtains terms containing derivatives of ρ*, Π and p, which cannot be absorbed by the remaining terms in Eq. (44) if e^µ_ν = 0.4 To eliminate these additional terms, we can add counterterms to the metric components ḡ_µν given in Eqs. (46), (47), and (48), and then choose the coefficients such that Eq. (49) does not contain the quantities ρ*, Π and p. In general, the counterterms take the following form: where we restrict to terms of order ḡ_00 ∼ O(ϵ²), ḡ_0j ∼ O(ϵ^{3/2}), and ḡ_ij ∼ O(ϵ). At this stage, one may collect the terms of order ϵ in ḡ_00, which yields the Newtonian potential in the Jordan frame: consistent with what was obtained in (26). We then choose the coefficients c_{0-4,Ψ,w}, d_{V,W} and e_0 so that the Einstein-frame metric takes the desired form: where again we restrict to terms of order g_00 ∼ O(ϵ²), g_0j ∼ O(ϵ^{3/2}), and g_ij ∼ O(ϵ).

4 One might suppose that e^µ_ν contains terms with derivatives of ρ*, Π and p, which could cancel the additional terms introduced by Ξ and Ā^α_µ. Derivatives of ρ*, Π and p correspond to higher-order (>2) derivatives of the potentials, which correspond to higher-order derivatives of the metric; one then has a higher-order theory of gravity, which (excluding frame-dependent theories like Hořava-Lifshitz gravity [20] and a certain class of type-II MMG theories [21][22][23][24]) generically suffers from Ostrogradskian instability [25,26].

It is worth mentioning at this point that, to post-Newtonian order, the metric g_µν retains the form expected for the PPN gauge, in the sense that the spatial components g_ij do not acquire cross terms. It should also be mentioned that we are in fact working in a PPN gauge, since ḡ_ij is diagonal and depends strictly on the potentials (32)-(40); from Ch. 4 of [6], we expect that a non-PPN gauge would introduce an additional potential. To clarify, one first chooses the gauge in which ḡ_µν has the form given in Eqs. (46)-(48); after the gauge is chosen, the set of counterterms in Eqs. (50)-(52) for g_µν is sufficient to characterize the PPN expansion.

The proposed modification to the PPN parameterization has been motivated by necessity; without these modifications, one cannot apply the PPN formalism to the class of Type-I MMGs and GCTs whose equations of motion can be written in the form of Eq. (44) (with e^µ_ν = 0), including the MEMe model. Though we have provided here a preliminary discussion of the theoretical interpretation of the new potential Ψ, it is perhaps appropriate to also understand the physical interpretation of Ψ and the counterterms in a phenomenological context. We will attempt to address this point in later sections by studying the net effect of these quantities on some post-Newtonian systems in the MEMe model.

C. MEMe model coefficients

We now apply the extended PPN formalism described above to the MEMe model. First, we note that in Eq. (49), Ξ = 1 for the MEMe model (compare Eqs. (2) and (6), and recall that A = 4 on shell). We then demand that the Einstein-frame metric g_µν has the form given in Eqs. (54)-(56).
The expression for the Einstein-frame metric g_µν in Eqs. (54)-(56) is then substituted into Eq. (17), and we find that all of the standard PPN parameters are exactly the same as those of general relativity (γ = β = 1, all others zero). However, the new parameter ν, which has the value ν = 0 in general relativity, takes the following value in the MEMe model: As anticipated by our remarks in the preceding section, the parameter ν corresponds to the scale q = 1/λ that appears in the MEMe model.

A. General analysis

We will now investigate the physical effects of the modification of the PPN monopole term associated with Ψ. We begin by assuming that the matter distribution is compact and static (so that v^i = 0), and consider what happens outside the matter distribution. One may then define an effective gravitational potential in the following manner: Outside a matter distribution, the counterterms vanish; recall that outside of a matter distribution, the Einstein- and Jordan-frame metrics coincide. For a theory with no preferred-location effects (ξ = 0), the effective gravitational potential takes the form (we set v^i = 0 so that Φ₁ = Φ₆ = 0): where (following the reasoning in Ch. 40 of [27]): Note that up to an overall factor of 2, Φ consists of all terms in g₀₀ such that ∆Φ can be written as an algebraic function of ρ, Π, p, and U up to fourth order in ε. We consider the case where the gravitational theory is fully conservative; with the parameter choices α₁ = α₂ = α₃ = ζ₁ = ζ₂ = ζ₃ = ζ₄ = 0 (in addition to ξ = 0), one has β₂ = (1 − 2β)/2, β₃ = 1, and β₄ = γ. We now consider the multipole expansion for the Newtonian potential: where ρ_e is an effective energy density given by: The monopole moment is given by: where: Given the definitions in Eqs. (64) and (66), it follows that for a spherically symmetric matter distribution, the additional PPN potential can be absorbed into the mass, as one might have expected. This suggests that outside of a spherically symmetric matter distribution, the effects of the additional potential Ψ cannot be disentangled from the other potentials. To distinguish the effects of the potential Ψ and the parameter ν, one should consider the internal structure of the source. In particular, if one has a detailed model for the source itself, it may be possible to disentangle the effects of the parameter ν from the total mass of a spherical source. To see how one might distinguish the effects of an additional potential Ψ, we consider a given matter distribution and split the mass M into two parts, one which depends on the original PPN parameters and one which depends on the new parameter ν. Defining the potential Φ̃ and defining ρ̃_e := ρ_e − (ν/G) ρ*ρ, one has the result where the mass defined with respect to the original PPN potentials takes the form: Now we consider the standard multipole expansion for the new PPN potential: Now ρ*ρ = ρ̃_e² + O(ε³). The monopole moment is given by: where: The relationship between M̃ and µ² is sensitive to the internal structure of the source. For instance, if one considers the following Gaussian profile for ρ̃_e: then one has for µ²: Note that µ² depends on the size σ of the source. Motivated by the Gaussian expression, one can use Eq. (77) as a parameterization for the internal structure of the source, with σ being a parameter which represents a characteristic length scale for the source. It follows that: where ρ̃_C := 3M̃/(4πσ³) is the compactness of the source. Given some matter distribution, the mass M̃ is the post-Newtonian mass in the GR limit ν → 0 for the MEMe parameter choice.
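Since the explicit expressions for µ² were lost in extraction, the scaling claimed here can at least be illustrated numerically. The sketch below assumes (as the surrounding text suggests) that µ² is built from the volume integral of ρ̃_e² for a Gaussian profile ρ̃_e(r) = M̃ exp(−r²/2σ²)/((2π)^{3/2}σ³); that integral evaluates to M̃²/(8π^{3/2}σ³), i.e. it is proportional to M̃ρ̃_C, which is the compactness dependence used in the constraint discussion below.

import numpy as np
from scipy.integrate import quad

# Hedged sketch: assumes mu^2 is proportional to the volume integral of
# rho_e_tilde^2 (the exact defining equations were lost in extraction).
M, sigma = 1.0, 2.0  # illustrative values, G = c = 1

def rho_tilde(r):
    """Gaussian profile normalized so that its volume integral equals M."""
    return M * np.exp(-r**2 / (2 * sigma**2)) / ((2 * np.pi)**1.5 * sigma**3)

# Volume integral of rho_tilde^2.
integral, _ = quad(lambda r: 4 * np.pi * r**2 * rho_tilde(r)**2, 0, 20 * sigma)

analytic = M**2 / (8 * np.pi**1.5 * sigma**3)   # closed form of the integral
rho_C = 3 * M / (4 * np.pi * sigma**3)          # compactness of the source

print(integral, analytic)        # agree to quadrature accuracy
print(integral / (M * rho_C))    # constant: the integral scales as M * rho_C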
B. MEMe model analysis

It should be mentioned that this dependence on the compactness is only apparent when a detailed description of matter is taken into account. Since MEMe coincides with GR outside matter sources, the inertial mass outside the source is equivalent to the gravitating mass M. It follows that one can only compute the difference between the GR value M̃ and the MEMe value M when computing the gravitating mass directly from the density. To understand this difference, consider lowering a particle with a small mass m into a matter distribution satisfying the profile in (76). We consider this process in the Einstein frame, since the gravitating mass M in the MEMe model is defined in this frame. The gravitational binding energy between the particle and the matter distribution is given by As discussed in [1], a stability argument suggests that q < 0, which in turn suggests ν < 0. Since Ψ(x) > 0, the gravitational binding energy of the particle within a matter distribution is decreased in the MEMe model compared to GR. This result indicates that in the MEMe model, the gravitating mass of an object outside matter sources is less than the sum of its parts due to a weakening of the gravitational binding energy. If the inertial mass and the gravitating mass of an object in a vacuum are the same, then one may place constraints on the parameter ν by measuring the mass of an object, disassembling it into its constituent parts, and measuring the mass of the individual components.

C. Constraints on the MEMe model

One can in principle place a constraint on the parameter ν without requiring the equivalence of inertial and gravitating masses. To see this, first note that one can interpret Eq. (78) as resulting from a dependence of the effective gravitational constant on the compactness ρ̃_C of the source. For a source mass M̃ and the Gaussian profile, one has the following expression for the effective gravitational constant: Recent experiments [8] with spherical stainless steel (SS 316) source masses, which have a density of ∼7.87 × 10³ kg/m³, constrain Newton's constant to a fractional uncertainty of about 3 × 10⁻⁵. While the experiment in [8] alone cannot place a constraint on ν, one might imagine a variation of the experiment in which the spherical source masses can be disassembled into thick spherical shells. If the same experiment is performed for each shell individually, and then again for the fully reassembled source mass, one can search for differences in the effective gravitational constant; such differences are evidence of a weakening or strengthening of the gravitational binding energy when masses are brought together. Assuming that fractional uncertainties similar to those of [8] can be achieved, one can in principle constrain ν up to a value on the order of ν ∼ 10⁻⁷ m³/kg, or 10⁻²⁴ m³/J in units of inverse energy density. This in turn can place a strong constraint on q: which is ten orders of magnitude stronger than the speed-of-light constraint (|q| < 2 × 10⁻¹⁴ m³/J) of [1], though still 12 orders of magnitude weaker than scales corresponding to the inverse of the highest energy densities (∼14 GeV/fm³ ≈ 2.2 × 10³⁶ J/m³) probed in accelerator experiments to date [28,29], and 26 orders of magnitude weaker than that from a TeV-scale breakdown.
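The order-of-magnitude comparisons quoted in this paragraph can be checked directly. A minimal sketch (the 10⁻⁷ m³/kg estimate itself is taken from the text; only the unit conversion and the ratios are computed here):

# Order-of-magnitude check of the constraint comparisons quoted above.
c = 3.0e8                      # speed of light, m/s

nu = 1.0e-7                    # m^3/kg, estimated attainable constraint on nu
nu_energy = nu / c**2          # convert m^3/kg -> m^3/J (divide by c^2)
print(f"nu ~ {nu_energy:.1e} m^3/J")                  # ~1e-24 m^3/J, as quoted

q_light = 2.0e-14              # m^3/J, speed-of-light constraint on |q| from [1]
print(f"improvement: {q_light / nu_energy:.0e}")      # ~1e10: ten orders stronger

rho_acc = 2.2e36               # J/m^3, highest energy density probed to date
print(f"gap to 1/rho_acc: {nu_energy * rho_acc:.0e}") # ~1e12: 12 orders weaker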
VI. LAPLACIAN COUNTERTERMS AND ORBITS

A. Circular orbits for conservative theories

We focus now on the effect of the Laplacian counterterms in the modified PPN metric (50)-(52) on circular geodesics in the post-Newtonian limit. For simplicity, we assume that matter sources are spherically symmetric and stationary, so that V_i = 0, W_i = 0, Φ₁ = 0, and Φ₆ = 0. We also consider a conservative theory, which corresponds to the choice α₁ = α₂ = α₃ = ζ₁ = ζ₂ = ζ₃ = ζ₄ = 0 in the original PPN analysis of [6]. The line element then has the form (dΩ² being the line element on the unit two-sphere): To simplify the analysis, we neglect internal energy density and internal pressure. The functions f and h take the following forms: For a spherically symmetric matter distribution, one can obtain solutions for the potentials by directly integrating a Poisson equation of the form ∆ψ = −4πG ρ_s, which in spherical symmetry may be written explicitly as (1/r²) d/dr(r² dψ/dr) = −4πG ρ_s(r), where ρ_s is a source function. This can be integrated to obtain the solution: ψ(r) = 4πG ∫_r^∞ (1/x²) ∫_0^x ρ_s(y) y² dy dx. (86) Given a Jordan-frame geodesic x^µ(τ) parameterized by proper time τ, one has the following conserved quantities: From the unit-norm condition for the four-velocity, one can show that the specific energy e must have the form: The effective potential may be obtained by considering the turning-point (dr/dτ = 0) expression for e²: We now consider circular orbits and assume spherical symmetry (f = f(r), h = h(r)); circular orbits lie at the minima of the effective potential and are given by the condition V′_eff(r) = 0. One can solve V′_eff(r) = 0 for the specific angular momentum l to obtain: and a comparison with Eq. (87) yields the proper tangential velocity: From the line element Eq. (82), one has dt/dτ = [−f(r) − h(r) v²]^{−1/2}, which yields the tangential coordinate velocity:

B. Circular orbits in the MEMe model

The MEMe model is a conservative theory in the sense of [6], as the standard PPN parameters are the same as those of GR. The extra parameters in the extended PPN formalism have the values given in Eqs. (57)-(59), which differ from those of GR, so one expects circular orbits in the MEMe model to differ from those of GR, given some profile for the matter distribution. We first consider a Gaussian profile, ρ*(r) = ρ₀ exp(−r²/2σ²), with ρ₀ being the central density and σ a characteristic scale. Equation (86) may be used to obtain the potentials: which may be used to compute the tangential velocity v(r) as given by Eq. (92). It turns out that a large modulus for q is required to obtain rotation curves that differ from q = 0 in a discernible way. For the Gaussian model, the tangential velocity of a circular orbit as a function of radius (the rotation curve) is plotted in Fig. 1 for the parameter choices ρ₀ = 10⁻⁶ and σ = 1 (with G = c = 1), with one curve corresponding to q = 0 and another corresponding to q = −10. The rotation curve for q = −10 is virtually identical to that of q = 0 at large radii (as illustrated in the plot of the difference ∆v := v_GR − v_MEMe), and has an increased value at relatively small values of r. One might expect this behavior; for instance, one may note that c₀∆U ∝ −qρ > 0 (for q < 0), and upon comparison, one finds that the slope of c₀∆U(r) ∝ ρ*(r) (as given by Eq. (93)) matches the slope of the potential U(r); it follows that the counterterms enhance the force in the radial direction, which in turn increases v(r). The convergence to the GR rotation curve at large r is expected, as one expects the MEMe model to converge to GR at low density. These general features persist in the other examples we consider.
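As a numerical illustration of this construction, the sketch below integrates Eq. (86) for the Gaussian profile and compares a Newtonian-level rotation curve with one including a counterterm of the assumed form c₀∆U ∝ −qρ*. The metric functions f and h and the exact coefficient c₀ were lost in extraction, so this is a sketch under stated assumptions (v² = −r U′_eff with U_eff = U + κρ*, κ ∼ −q an assumed normalization), not a reproduction of Fig. 1.

import numpy as np
from scipy.integrate import quad

# Hedged numerical sketch (not a reproduction of Fig. 1): Newtonian-level
# rotation curve for the Gaussian profile, with the Laplacian counterterm
# modeled as an addition to the potential, U_eff = U + kappa * rho, where
# kappa ~ -q > 0 encodes c0 * Lap(U) ~ -q * rho (normalization assumed).
G, rho0, sigma, q = 1.0, 1e-6, 1.0, -10.0    # Fig. 1 parameter choices

def rho(r):
    return rho0 * np.exp(-r**2 / (2.0 * sigma**2))  # assumed Gaussian profile

def dU_dr(r):
    # From Eq. (86): U'(r) = -(4*pi*G/r^2) * int_0^r rho(y) y^2 dy
    m_r, _ = quad(lambda y: rho(y) * y**2, 0.0, r)
    return -4.0 * np.pi * G * m_r / r**2

def drho_dr(r):
    return -(r / sigma**2) * rho(r)

kappa = -q  # assumed normalization of the counterterm coefficient

for r in (0.5, 1.0, 2.0, 5.0):
    v2_gr = -r * dU_dr(r)                           # v^2 = -r U'(r)
    v2_meme = -r * (dU_dr(r) + kappa * drho_dr(r))  # counterterm raises v at small r
    print(f"r={r:4.1f}  v_GR={np.sqrt(v2_gr):.3e}  v_MEMe={np.sqrt(v2_meme):.3e}")

Running this reproduces the qualitative features described above: the two curves agree at large r, while the q < 0 counterterm raises v(r) at small radii.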
Another relevant matter profile is the isothermal one: Such a profile is known to yield flat rotation curves in Newtonian gravity, and is of interest (upon regularization of the singularity at r = 0) for modeling dark matter halos. The curve v(r) is plotted in Fig. 2 for the parameter choices M_h = 10⁻² and a_h = 10³. Again, one sees behavior similar to that of the Gaussian case: the q = −10 curve only differs from the q = 0 case (and has a lower value) at small values of r, as expected. The divergence in the rotation curves at small r is expected, since ρ* diverges in the limit r → 0. In Fig. 3, we plot v(r) for the combined Gaussian and isothermal matter distributions with the same parameter values as those of Figs. 1 and 2. Again, we note that the velocities are increased at small r. In all cases, we find that while the Laplacian counterterms have some effect on the behavior of rotation curves, the value of q must be rather large in order to distinguish the MEMe model from GR, and even then, this occurs only at small values of r, as illustrated in the plots of ∆v. If one expects the MEMe model to break down at the TeV scale, then 1/|q| is expected to be 30 orders of magnitude larger than the average density of the Earth; for realistic astrophysical systems (galaxies), one might expect 1/|q| and the matter density to differ by an even greater amount. For the Gaussian example, the central density ρ₀ in Fig. 1 is six orders of magnitude below the density scale 1/|q| = 10⁻¹ at which the MEMe model breaks down. For the isothermal example, the average density 3M_h/(4πa_h³) is eleven orders of magnitude below the density scale 1/|q| = 10⁻¹. While these results suggest that signatures of the MEMe model are unlikely to appear in galactic rotation curves and dilute matter distributions, the MEMe model may still produce measurable differences in the interiors of neutron stars. The density of a neutron star is roughly an order of magnitude less than the highest energy-density (∼14 GeV/fm³ ≈ 2.2 × 10³⁶ J/m³) states of matter probed to date in accelerator experiments [28,29]. If the scale of the cutoff density is assumed to be an order of magnitude higher than that of the quark-gluon density, so that it is two orders of magnitude higher than the neutron-star density, then upon modeling a neutron star with a Gaussian matter distribution, the term c₀∆U can become comparable to −2Φ₂ deep within the distribution. In particular, one can choose ρ₀ = M/(2√2 π^{3/2} σ³), with the normalization M = G = 1 and σ = 6. In this case, the magnitude of the counterterm c₀∆U is roughly ∼0.75 of the post-Newtonian correction −2Φ₂ when r = σ/10, though at the same radius one finds −c₀∆U/(2U²) ∼ 1.2 × 10⁻³ and −c₀∆U/(2U) ∼ 1.7 × 10⁻⁴, so the corrections are still rather small. However, this rough calculation suggests that the corrections from the MEMe model may modify the properties of the neutron star in a measurable way.

VII. SUMMARY AND DISCUSSION

In this article, we have extended the PPN formalism to handle a subclass of Type-I MMGs and GCTs, and have applied the extended formalism to the MEMe model. Outside matter sources the Einstein frame and the Jordan frame coincide with each other and the field equations in either frame agree with those of GR. However, in the non-vacuum case, a PPN analysis for GCTs and the MEMe model should be performed with respect to the Jordan-frame metric g̃_µν.
In fact, matter is minimally coupled to the Jordan-frame metric g̃_µν, and it is in this sense that the Jordan-frame metric is the physical metric. In order to perform a PPN analysis for g̃_µν, it is necessary to introduce an additional (dimensionful) potential Ψ and counterterms (the latter vanish outside a matter distribution) constructed from the Laplacians of the PPN potentials. This can be understood by considering the form of the field equations in the Jordan frame, which contain the energy density and its derivatives up to second order. We have found that, with the exception of the counterterm parameters and the parameter ν associated with Ψ, the parameters in the extended PPN formalism are the same as those of GR. The new potential Ψ and its associated parameter ν are not dimensionless. One might ask whether it is possible to define a dimensionless potential from Ψ. This can be done by choosing an appropriate length scale; however, such a procedure is not necessarily model-independent. For example, to post-Newtonian order, a theory having the form of Eq. (44) would necessarily include a ρ*² term on the RHS, the coefficient of which would introduce an additional scale. Indeed, each of the additional coefficients appearing on the RHS will introduce additional scales, and any of these can provide a reference scale to make Ψ dimensionless. To avoid choosing one scale rather than another, we have here chosen to leave Ψ and ν dimensionful. Given some compact, spherical matter distribution, we have considered the monopole term in a standard multipole expansion and have found that to post-Newtonian order, the MEMe model is indistinguishable from GR in vacuum regions outside the matter distribution. This is not particularly surprising, as the Einstein- and Jordan-frame metrics coincide in vacuum, and for a single fluid in the Einstein frame one can absorb the differences from GR by a redefinition of the fluid density and pressure. However, the differences between MEMe and GR become apparent when the details of the matter distribution are taken into account. The monopole expansion indicates that in MEMe, the effective gravitational constant G depends on the internal structure of the source masses, and we argue that one can use this dependence to place strong constraints on the free parameter q of MEMe. In particular, we argue that (conceptual issues aside; see the next paragraph) a modification of the experiment described in [8] may improve the constraint on q over the speed-of-light constraint of [1] by 10 orders of magnitude. Specifically, we propose an experiment in which the spherical source masses are disassembled into concentric "thick" shells, and the active gravitational masses of the individual shells and the assembled spheres are compared. This proposal might bring up a conceptual issue regarding the gravitational binding energy between concentric thick shells of matter. In GR, this situation can be treated using the standard junction and thin-shell formalism of Israel [30]. Since the geometry outside the shells is essentially that of GR, one might ask whether the binding energy is modified at all. This question depends on the behavior of the theory at the boundaries of spatially compact matter distributions, which can be rather subtle in certain theories of modified gravity. In the case of EiBI gravity [5], which shares a structure similar to that of the MEMe model in the weak-field limit (it falls into the class of models described by Eq.
(44), and has a Newtonian potential resembling Eq. (26)), it was argued in [16] that discontinuities in matter distributions, such as those at the boundaries of stars, can generate unacceptable curvature singularities in EiBI gravity. However, we have argued that in the Newtonian limit of the MEMe model, such singularities correspond to strong gravitational forces acting on matter which lead to a rearrangement of matter distributions, so that the gravitational backreaction may resolve such singularities; similar arguments have been made for EiBI theory [17] (see also [18]). A detailed investigation of this issue beyond the Newtonian limit in the MEMe model will be left for future work. Finally, we compared the post-Newtonian predictions of the MEMe model and GR within a matter distribution to understand the effects of the counterterms that appear in the gravitational potential. In particular, we studied the behavior of circular geodesics in the presence of spherically symmetric Gaussian and isothermal matter distributions. Plots of the tangential-velocity rotation curves indicate that the predictions of the MEMe model differ significantly from those of GR only for high matter densities and large values of the parameter q. It follows from this result that the MEMe model alone cannot describe galactic rotation curves in the absence of dark matter (in fact, the MEMe model slightly increases orbital velocities at small radii), and the differences in the behavior of geodesics between the MEMe model and GR are minimal even within a distribution of dark matter. These results also indicate that in general, the counterterms do not have a strong effect on the geodesics unless the parameter q is increased to an unrealistically large value. On the other hand, a rough estimate suggests that, for a cutoff density 1/|q| an order of magnitude higher than the highest densities probed in accelerator experiments, the MEMe model may yield measurable corrections to the properties of neutron stars. In the present paper, we have considered the MEMe model as a type-I MMG theory and have focused on its gravitational aspects. Alternatively, in the Einstein frame, one can consider the MEMe model as a theory of a modified matter action minimally coupled to GR. Indeed, after integrating out the auxiliary tensor field A^µ_α, the matter action in the Einstein frame is modified in such a way that the fields in the standard model of particle physics acquire additional (renormalizable and nonrenormalizable) interactions among themselves. In future work, it will certainly be interesting to study the phenomenological consequences of those extra interactions (which remain even in the G → 0 limit) and their implications for collider physics, cosmic rays, early-universe cosmology, and so on. The extended PPN formalism developed in the present paper may be applied to some other type-I MMG theories. It is worthwhile to investigate the PPN constraints on theories in this class and also to extend the formalism so that it can be applied to other type-I MMG theories and some type-II MMG theories as well.
Prurigo Pigmentosa: A Case Report With Unusual Presentation

Prurigo pigmentosa (PP) is an idiopathic cutaneous inflammatory disorder. Here we report a 50-year-old healthy male of Arabic descent who presented with a six-month history of very itchy persistent skin lesions on his back. Skin examination revealed multiple brownish non-scaly excoriated papules and patches in the midline of his lower back. The differential diagnosis includes lichen planus (LP), confluent and reticulated papillomatosis (CARP), and PP. Skin biopsy revealed acanthosis, spongiosis, and dyskeratotic keratinocytes in the epidermis. The dermis showed mild perivascular lymphocytic infiltrate. Based on these clinicopathological findings, the patient was diagnosed with PP. He was prescribed doxycycline 100 mg once daily (OD) for two months. Two months after treatment, all lesions had disappeared completely. At the one-year follow-up, he presented with a recurrence of the same skin lesions at the same site. We restarted him on doxycycline treatment.

Introduction

Prurigo pigmentosa (PP) is an idiopathic cutaneous inflammatory disorder. It primarily affects adolescents and young adults. It is characterized clinically by a recurrent, sudden appearance of itchy, erythematous papules, macules, and/or papulovesicles on the back, neck, and chest that occur in crops. Healing of lesions occurs within weeks, leaving macular reticulate hyperpigmentation. Prurigo pigmentosa is most commonly seen in Japanese women; far fewer cases have been reported worldwide, without a predominant ethnicity [1]. The etiology of PP is not fully understood; however, some endogenous and exogenous factors have been implicated in the pathogenesis of the disease [2]. Here we present an unusual case of PP that presented as small lesions in the midline of the sacral area of the lower back.

Case Presentation

A 50-year-old male of Arabic descent presented with a six-month history of very itchy persistent skin lesions on his back. Past medical history, drug history, and review of systems were unremarkable. There was no similar case in the family. Skin examination revealed multiple brownish non-scaly excoriated papules and patches in the midline of his lower back (Figure 1). The differential diagnosis includes lichen planus (LP), confluent and reticulated papillomatosis (CARP), and PP. Skin biopsy revealed acanthosis, spongiosis, and dyskeratotic keratinocytes in the epidermis. The dermis showed mild perivascular lymphocytic infiltrate (Figure 2). Hair, nail, and mucosal examinations were all normal. Based on these clinicopathological findings, a diagnosis of PP was made. He was prescribed doxycycline 100 mg once daily (OD) for two months. Two months after treatment, all lesions had disappeared completely (Figure 3). At the one-year follow-up, he presented with a recurrence of the same skin lesions at the same site and was restarted on doxycycline treatment.

Discussion

Prurigo pigmentosa is an idiopathic cutaneous inflammatory disorder. It is characterized by a recurrent sudden onset of pruritic and erythematous papules on the back, neck, and chest that heal in a reticulated pattern [3]. As in our patient, PP occurs in multiple stages, with some lesions in the early stage presenting as excoriated papules and others in the late stage presenting as reticulated hyperpigmented patches. It most commonly occurs in females in the third decade of life, with a female-to-male ratio of 2-4:1 [4].
Our patient is a male in his fifth decade of life who presented with pruritic non-erythematous brownish non-scaly excoriated papules in the lumbosacral area, which is a rare location. Moreover, they were confined to the midline, which is an unusual feature. The main differential diagnosis in our patient includes LP, CARP, and PP; however, the clinical presentations of these entities are different. Confluent and reticulated papillomatosis is characterized by non-pruritic hyperpigmented papules and plaques that are confluent in the center and reticulated at the periphery. Although our patient had midline lesions, which are typical for CARP, the morphology of the lesions and the history of very itchy lesions are typical for PP. Histopathologically, papillomatosis is a typical feature of CARP, which was not present in our case. Although PP has been reported with ketoacidosis in poorly controlled diabetes as well as with ketosis following a restrictive-calorie or low-carbohydrate diet, our patient had none of these. Table 1 shows the differentiation between PP and CARP.

TABLE 1: Differentiation between confluent and reticulated papillomatosis (CARP) and prurigo pigmentosa (PP)

Etiology (PP): Unknown but associated with diabetes mellitus, nutritional deficiency, fasting, dieting, bariatric surgery, anorexia nervosa, adult-onset Still disease, pregnancy, friction with clothes, and exogenous factors like nickel, chrome, and para-amino compounds [6].

Clinical features (CARP): Multiple hyperpigmented scaly macules or papules, forming a confluent plaque centrally with reticulations at the periphery [5].

Clinical features (PP): Sudden appearance of pruritic and erythematous papules and macules on the back, neck, and chest that heal in a reticulated pattern [6].

The first-line treatment of PP is oral minocycline; however, our patient responded well to doxycycline. Prurigo pigmentosa does not respond to topical or systemic corticosteroids or antihistamines.

Conclusions

Prurigo pigmentosa is an idiopathic cutaneous inflammatory disorder. It is clinically characterized by recurrent sudden-onset pruritic and erythematous papules that occur in crops and heal in a reticulated pattern. Sometimes PP and CARP look similar, as in our case; however, these clinical entities can be distinguished by their typical clinical and histopathological features. Our patient presented with very itchy brownish excoriated papules with brownish patches that were confined to the midline of his lower back. The purpose of this case report is to raise awareness of this condition. We recommend additional research with a higher level of evidence to investigate and assess this condition.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Stronger Schr\"odinger-like Uncertainty Relations Uncertainty relation is one of the fundamental building blocks of quantum theory. Nevertheless, the traditional uncertainty relations do not fully capture the concept of incompatible observables. Here we present a stronger Schr\"odinger-like uncertainty relation, which is stronger than the relation recently derived by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113 (2014) 260401]. Furthermore, we give an additive uncertainty relation which holds for three incompatible observables, which is stronger than the relation newly obtained by S. Kechrimparis and S. Weigert [Phys. Rev. A 90 (2014) 062118] and the simple extension of the Schr\"odinger uncertainty relation. In a recent work [Phys. Rev. Lett. 113, 260401 (2014)], L. Maccone and A. K. Pati presented two stronger uncertainty relations and an amended Heisenberg-Robertson uncertainty relation for incompatible observables. In this work we derive a pair of Schrödinger-like uncertainty relations for the product and sum of two variances. We also obtain a uncertainty relation for three observables and investigate its property for spin-1 particle state, which indicates that the new uncertainty relation may provide a stronger lower bound than the trivial extension of Schrödinger uncertainty relation. One of the distinct features of quantum mechanics is quantum uncertainty. The initial spirit of quantum uncertainty was postulated by Heisenberg [1]. The Heisenberg's uncertainty relation was mathematically derived by Kennard [2] and Weyl [3]. Quantum uncertainty includes uncertainty principle and uncertainty relation. The former places a restriction upon the degree to which one can constrain the likelihoods of future measurements made on a quantum system [4]. The latter refers to the repulsive nature of incompatible observables which induces a spread in the measurement outcomes, and does not refer to the disturbance induced by the measurement or to joint measurements [5]. The best known modern formulation of uncertainty relation, the Heisenberg-Robertson uncertainty relation [22], bounds the product of the variances through the expectation value of the commutator for any observables A, B, and any state |ψ , where the variances of an observable O in state |ψ is defined as ∆O 2 = ψ|O 2 |ψ − ψ|O|ψ 2 and the commutator is defined as [A, B] = AB − BA. A stronger extension of the uncertainty relation (1) was made by Schrödinger [23], which is generally formulated as where the anticommutator is defined as {A, B} = AB + BA, and O is equal to ψ|O|ψ for any operator O. The above two uncertainty relations want to quantitatively express the impossibility of jointly sharp preparation of incompatible observables. However, in practice, they cannot achieve this and not capture the notion of incompatible observables [24]. Recently, Maccone and Pati derived two stronger uncertainty relations based on the sum of variances ∆A 2 + ∆B 2 [24], which to a large extent can avoid the triviality problem, i.e. to be null on both sides of the inequality, and provide more stringent bounds for observables being incompatible on the quantum state. The first one reads which is valid for arbitrary states |ψ ⊥ orthogonal to the state of the system |ψ , where the sign should be chosen so that ±i [A, B] (a real quantity) is positive. The second uncertainty relation is Here |ψ ⊥ A+B ∝ (A + B − A + B )|ψ is a state orthogonal to |ψ . 
Maccone and Pati also derived an amended Heisenberg-Robertson uncertainty relation which is stronger than the Heisenberg-Robertson uncertainty relation. In this work we derive an improved Schrödinger-like uncertainty relation and an even stronger uncertainty relation than Maccone and Pati's relations for the sum of variances. Using the same procedure, we also obtain an uncertainty relation for three observables and briefly discuss the implications of this new uncertainty relation for a spin-1 particle system.

II. AMENDED SCHRÖDINGER UNCERTAINTY RELATION

The amended Schrödinger uncertainty relation reads which is valid for arbitrary states |ψ⊥⟩ orthogonal to the state of the system |ψ⟩ and is stronger than the Schrödinger uncertainty relation (2). Proof: To prove uncertainty relation (6), we start by introducing a general inequality with Ā = A − ⟨ψ|A|ψ⟩, B̄ = B − ⟨ψ|B|ψ⟩, and m, n, k and τ being arbitrary real numbers. By expanding the square modulus, we have Here, ∆A² and ∆B² are the variances of A and B calculated on |ψ⟩, respectively. We choose the value of k that maximizes the right-hand side of (8), namely k = −mnβ/2λ, and then get We can further choose m = ∆B and n = ∆A in the above inequality; it then becomes Suppose |ψ′⟩ = cos θ|ψ⟩ + e^{iφ} sin θ|ψ⊥⟩, where |ψ⊥⟩ is orthogonal to |ψ⟩; by taking the limit θ → 0, the state |ψ′⟩ reduces to |ψ⟩, and then the above inequality can be re-expressed as There exists τ = −α such that e^{iτ}⟨ψ|ĀB̄|ψ⟩ is real and can be written as |⟨ĀB̄⟩|; then the second term of (11) becomes {Re[e^{iφ}⟨ψ|(−Ā/∆A + e^{iα}B̄/∆B)|ψ⊥⟩]}². Choosing a proper phase φ which makes the term in the square brackets real, this term can be expressed as |⟨ψ|(Ā/∆A − e^{iα}B̄/∆B)|ψ⊥⟩|². Hence, the inequality (11) turns out to be For the quantity |⟨ĀB̄⟩|, it is easy to see that Therefore, the inequality (12) can be represented as Hence, we obtain the improved Schrödinger-like uncertainty relation (6). Given m = 1 and n = 1 in (9), using the same procedure described above, we can obtain an even stronger version of the uncertainty relation than (3), i.e.

A. New uncertainty relation

One may generalize the Schrödinger uncertainty relation (2) to three observables trivially, that is which is simply the sum of the inequality (16). However, this inequality meets the triviality problem, as the lower bound can be null. Instead, we will prove that the following more stringent inequality holds: Proof: To prove this uncertainty relation, we start by introducing a general inequality where Ā = A − ⟨ψ|A|ψ⟩, B̄ = B − ⟨ψ|B|ψ⟩, C̄ = C − ⟨ψ|C|ψ⟩, and k is a real number. By expanding the square modulus, we have where we define Using the above equality, Eq. (23) can be rewritten as Assuming |ψ′⟩ = cos θ|ψ⟩ + e^{iφ} sin θ|ψ⊥⟩ and using the same techniques employed to derive (15), we obtain which is equivalent to relation (18). Recently, variance-based uncertainty equalities were introduced by Yao et al. [26] for all pairs of incompatible observables A and B. Similarly, the uncertainty equality for three observables can be written as follows where {|ψ⟩, |ψ⊥_n⟩, n = 1, …, d − 1} form an orthonormal and complete basis in the d-dimensional Hilbert space. If we retain only one term associated with |ψ⊥⟩ ∈ {|ψ⊥_n⟩} in the summation and discard the others, the uncertainty equality (27) reduces to the uncertainty inequality (18).

B. Application to spin-1 particle state

As an illustration of the new uncertainty relation (18), we consider the simple case of a spin-1 particle state.
Let A = J_x, B = J_y and C = J_z be the three components of the angular momentum. We may choose

|ψ⟩ = sin θ cos φ|1⟩ + sin θ sin φ|0⟩ + cos θ|−1⟩,   (28)

and

|ψ⊥⟩ = (cos θ cos φ cos β e^{iγ} − sin φ sin β)|1⟩ + (cos θ sin φ cos β e^{iγ} + cos φ sin β)|0⟩ − (sin θ cos β e^{iγ})|−1⟩,

with |±1⟩ and |0⟩ the eigenstates of J_z corresponding to the eigenvalues ±1 and 0. There are then infinitely many orthogonal states of the system, depending on the values of β and γ. For such a choice of |ψ⟩, |ψ⊥⟩, A, B and C, we compare numerically the new uncertainty relation (18) with the trivially generalized Schrödinger uncertainty relation (17) for three observables in Fig. 1. For φ = 0, (17) can be represented as Removing the last term in (18), it then reads because the state |ψ⊥⟩ in (18) is an arbitrary state orthogonal to the state of the system |ψ⟩. The blue points in Fig. 1 illustrate the bound of (18) for 15 randomly chosen states |ψ⊥⟩ for each of the 200 values of the phase θ depicted. We find that the uncertainty relation (18) is non-trivial for all θ and stronger than the trivially generalized Schrödinger uncertainty relation (17).

IV. CONCLUSIONS

In this work, we derived two new Schrödinger-like uncertainty relations for the product and sum of variances of a pair of incompatible observables, which are in general stronger than previous ones. Using the same procedure, we also derived an uncertainty relation for three observables and discussed its implications for a spin-1 particle system, showing that the new uncertainty relation is stronger than the trivially generalized Schrödinger uncertainty relation. Note added: as this work was being finished, a comment [25] appeared mentioning that Eq. (3) of Ref. [24] may still experience the triviality problem in the special case where the state |ψ⊥⟩ = (A − ⟨A⟩)|ψ⟩/∆A or |ψ⊥⟩ = (B − ⟨B⟩)|ψ⟩/∆B, which means the uncertainty relations in this work also have this drawback. These uncertainty relations are state-dependent, but some uncertainty relations [27,28] are quantum-state independent and hence immune to the triviality problem.
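A minimal numerical sketch of the spin-1 setup discussed above (our own illustration; since the explicit right-hand sides of (17) and (18) were lost in extraction, only the readily reconstructible quantities, the variance sum for the state (28) and random orthogonal states |ψ⊥⟩, are computed):

import numpy as np

rng = np.random.default_rng(1)
s2 = 1 / np.sqrt(2)
# Spin-1 angular momentum matrices in the Jz eigenbasis {|1>, |0>, |-1>}.
Jx = np.array([[0, s2, 0], [s2, 0, s2], [0, s2, 0]])
Jy = np.array([[0, -1j * s2, 0], [1j * s2, 0, -1j * s2], [0, 1j * s2, 0]])
Jz = np.diag([1.0, 0.0, -1.0])

def variance(O, psi):
    ev = lambda M: np.real(psi.conj() @ M @ psi)
    return ev(O @ O) - ev(O)**2

def random_orthogonal(psi):
    """Random state orthogonal to psi (Gram-Schmidt on a random vector)."""
    v = rng.normal(size=3) + 1j * rng.normal(size=3)
    v -= (psi.conj() @ v) * psi
    return v / np.linalg.norm(v)

phi = 0.0
for theta in np.linspace(0.1, np.pi / 2, 5):
    psi = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], dtype=complex)      # the state of Eq. (28)
    lhs = sum(variance(J, psi) for J in (Jx, Jy, Jz))   # sum of three variances
    perp = random_orthogonal(psi)
    print(f"theta={theta:.2f}: sum of variances = {lhs:.3f}, "
          f"|<psi_perp|psi>| = {abs(perp.conj() @ psi):.1e}")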
Evaluation of models for dermal exposure assessment in farming systems in developing countries

Dermal exposure assessment is a crucial aspect of the risk assessment of pesticide use, as it may lead to the development and improvement of measures to reduce the health risk to pesticide users. Even though tools for dermal exposure assessment are available, their implementation in developing countries is problematic, as they are designed for working conditions in industrialized countries and most of them are not specifically focused on processes like pesticide management. This paper evaluates dermal exposure models to find the most appropriate ones for assessing dermal exposure from pesticide use in farming systems in developing countries. Seven models (i.e., COSHH, DERM, DREAM, EASE, PHED, RISKOFDERM and STOFFENMANAGER) were evaluated according to a multi-criteria analysis, and four models (i.e., DERM, DREAM, PHED and RISKOFDERM) were selected for the assessment of dermal exposure in the case study of potato farming systems in Vereda La Hoya in the highlands of Colombia. The model estimations were compared with dermal exposure measurements made in the study area. The results show that the four models provide different dermal exposure estimations which are not comparable. Because of the simplicity of the algorithms and the specificity of the determinants, the models DERM and DREAM were found to be the most appropriate ones. In addition, it was found that model outcomes would be more accurate if determinants like climate conditions, cleaning of the equipment, task duration, personal protective equipment and hygiene habits were included in the models. When comparing the final model assessments of dermal exposure in the study area, DREAM was found to be the model that assesses dermal exposure most appropriately because of its qualitative assessment and the type of determinants included in the model.

Introduction

The agricultural sector is under pressure to increase crop productivity in order to maintain food security for an increasingly growing population [10]. FAO has reported that 868 million people continue to suffer from undernourishment, and the negative health consequences of micronutrient deficiencies continue to affect around 2 billion people [10]. Pests affect agricultural productivity by causing losses in agricultural output, storage and the distribution of products. Worldwide, approximately 9,000 species of insects and mites, 50,000 species of plant pathogens, and 8,000 species of weeds damage crops [40]. In different crops worldwide, pests can cause losses of 14% by insects, 13% by plant pathogens and 13% by weeds [26], but there is an overall decline in crop losses when pesticides are used. However, even though pesticides play an important role in plant protection, in many cases overuse or inappropriate use compromises the health of pesticide users, agricultural workers, and bystanders [9]. The occupational hygiene field has turned its attention to investigating exposure in the agricultural workplace in order to improve pesticide management and to reduce the health risk [11]. In developing countries this is of special interest because pesticide management activities face weak safety standards [2,12,13,17]. Studies in potato farming systems in Vereda La Hoya, Colombia [12,13,15,18,21,30];
Mojanda, Ecuador [31]; and El Angel, Ecuador [27] have shown that pesticide management in these countries has no particular knowledge foundation and is performed by trial and error, finding out what works in practice. Furthermore, farmers do not wear adequate personal protective equipment, apply pesticides which are banned in industrialized countries and modify the standard discharge of nozzles to reduce the application time [21]. Because these issues increase the health risk, a risk assessment of pesticide use in these areas is required in order to determine the risk level faced by people. Human exposure to pesticides occurs via three main pathways: inhalation, ingestion and dermal contact [28,29]. Of these three, dermal exposure is the most complex one and there is still no consensus about the most appropriate way to evaluate it [28,29]. There are different models available that might be applied to assess dermal exposure to pesticide use in developing countries, like EASE [5], EUROPOEM [36], PHED [8], RISKOFDERM [37], COSHH [16], STOFFENMANAGER [23], DREAM [35], DERM [3] and the approaches proposed by the U.S. EPA [34]; however, there are still uncertainties about the adequacy of these models when they are applied in developing countries, as most of them have been developed in industrialized countries, for occupational situations in industrialized processes in Europe and the USA, and do not consider agricultural processes like pesticide management. In the case of the model DERM, even though it has been developed under conditions relevant for developing countries, its methodology has been criticized and the model itself has not been validated. The goal of this paper is to evaluate the available models for dermal exposure assessment in order to find the most adequate one to estimate dermal exposure in farming systems in developing countries. To reach this goal the following research questions will be addressed: a. Which of the existing models for dermal exposure are feasible to be applied in case studies in farming systems in developing countries? b. What are the most relevant parameters to be taken into account to increase the confidence and accuracy level of the estimations? c. When comparing the model outcomes with the dermal exposure measurements in the study area, which models assess dermal exposure more accurately?

Methodology

After a literature review, seven available models were considered for the analysis: COSHH [16], DERM [3], DREAM [35], EASE [5], PHED [8], RISKOFDERM [37] and STOFFENMANAGER [23]. These models were selected because of their availability, the clear description of their algorithms, and their potential applicability in the assessment of pesticide use. They were analyzed according to the following group of criteria (Table 1). COSHH (Control of Substances Hazardous to Health): The exposure assessment model COSHH was developed in the United Kingdom (UK) by the Health and Safety Executive (HSE) and has been used since 2002. Originally, the model targeted large companies and safety professionals who have the equipment and the knowledge to apply the model and interpret the law [16]. Later on, a new version of the model was developed, namely the model COSHH Essentials (COSHH-E). This is an improved version that provides assistance to small and medium-sized enterprises (SMEs) that have limited available resources. The goal of this model is to provide easy-to-understand and easy-to-use assistance to SMEs, and to give advice on how to control chemical risks [16].
DERM (Dermal Exposure Ranking Method): It was developed in a project called "Assessment of dermal pesticide exposure and pesticide-related skin lesions: implications for intervention". The fieldwork of the study was conducted at the Universidad Nacional Autónoma de Nicaragua (UNAN-León) and first published in 2008 [3]. The goal of DERM is to develop a low-cost, easy-to-use method to assess dermal exposure to pesticides in developing countries. The model concentrates on assessing dermal exposure in terms of potential and actual exposure. The outcome can answer questions like which determinants cause the highest exposure among subsistence farmers, and/or which farmers are the most exposed while working in the field [3]. DREAM (Dermal Exposure Assessment Method): The model DREAM was developed in the Netherlands in 2003 [35]. The goal of the model was to create a method that can assess and evaluate occupational dermal exposure to chemical agents in a generic way. The model can be used in occupational hygiene and epidemiology for any given situation. It can be used for an initial assessment of dermal exposure levels of liquids and solids, as a framework for measurement strategies (i.e., who, what and where to measure), or as a basis for control measures. It gives insight into the distribution of dermal exposure over the body and indicates through which routes the exposure takes place. The outcome is a numerical estimate indicating the amount of dermal exposure that workers encounter while performing a certain task. The estimate is divided into seven intervals ranging from 0 to 1,000 (no exposure to extremely high exposure) [35]. EASE (Estimation and Assessment of Substance Exposure): This model was developed in the early 1990s by the UK's Health and Safety Executive [5,6]. The model can assess inhalation and dermal exposure. For inhalation exposures, the model predicts a range of expected exposure levels for a given set of circumstances. For dermal exposures, the model predicts the potential exposure for hands and forearms (no other body parts are considered), expressed as a mass per unit area of exposed skin per day (mg/cm²/day). The exposure ranges can take five different values, from very low up to 5-15 mg/cm²/day. The model EASE was one of the first models to assess dermal exposure. Originally, this model was used as a screening tool for regulatory risk assessment of new chemicals. Nowadays, EASE is more of a risk assessment tool to estimate exposure to new or existing substances in a simplified way [5,6]. PHED (Pesticide Handlers Exposure Database): The first version of this model was published in 1992 [8,34]. The database of the model was developed by a task force consisting of representatives from the Health Canada Pest Management Regulatory Agency (PMRA), the United States Environmental Protection Agency (EPA), and the American Crop Protection Association (ACPA), and the software by an environmental consulting firm in Springfield, Virginia. The model was used by all major regulatory agencies in the USA and worldwide by many other regulatory groups. Also, it was used by the pesticide industry to evaluate product safety issues [8,19]. Self-reported exposure information on pesticides from questionnaires, as well as pesticide monitoring data from the literature, was used to estimate the levels of exposure to pesticides.
The database consists of information collected from about 100 studies submitted primarily by companies that wished to register a specific pesticide, and it contains data for over 1,700 monitored exposure events [8]. RISKOFDERM (Risk Assessment of Occupational Dermal Exposure to Chemicals): RISKOFDERM was developed through the cooperation of 15 different institutes from 10 different European countries in 2003 [1,37]. The aim of the project was to develop a conceptual model for dermal risk assessment and management for regulatory purposes. It was created to be a simple-to-use toolkit for SMEs. The model can be used for comparison of the skin-related hazardous properties of chemical products, general recommendations for risk control, or assessment of the health risk from skin exposure for a specific working task in the field [25]. STOFFENMANAGER: This model was developed in the Netherlands and has been used since 2003 [32]. Its goal is to assist SMEs in risk assessment and to prioritize and control the risks of handling chemical products in their workplace. It was created to combine previously published work and requirements that are mandatory in the Netherlands for SMEs [23]. The model uses information from the COSHH model for its hazard banding and the publications by Cherrie (1996) [4] and [29] for the algorithm of the model. In addition, it uses information from the RISKOFDERM toolkit for the dermal exposure method and incorporates information from companies in the Netherlands gathered by several surveys. Sectors and companies were selected and the surveys were conducted by occupational hygienists. Also, information was used from research projects carried out by the Dutch government [32,33].

Selection of models for the evaluation in the study area

The multi-criteria analysis was defined based on criteria of model characteristics such as availability, guidance, knowledge required, reliability, type of outcome, type of substance, target group, dermal exposure descriptor and dermal exposure pathway. After this evaluation, four models (i.e., DERM, DREAM, PHED, and RISKOFDERM) were selected to be applied in the case study of Vereda La Hoya in the highlands of Colombia. COSHH, EASE and STOFFENMANAGER were not selected because they did not fulfill most of the criteria, as the results will show in Section 3.1 and Figure 1. The data used as input to evaluate the models DERM, DREAM, PHED and RISKOFDERM were taken from a previous survey made in the study area with 197 smallholder potato growers in four communities [13] and from previous studies about dermal exposure in the same study area [15,21]. The calculations and outcomes of each model are provided in the supplementary information.

Sensitivity analysis of the models

The influence of each determinant on the model score for Vereda La Hoya was evaluated by a sensitivity analysis. Each determinant was evaluated for the models DERM, DREAM, PHED and RISKOFDERM according to the One-at-a-Time sensitivity analysis methodology [7,24], as sketched in the code below. A series of scenarios was established for each model, changing the input value of the score for one specific determinant at a time.
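For readers unfamiliar with the One-at-a-Time approach, the sketch below illustrates the idea on a generic additive scoring model. The determinant names, baseline scores, ranges and aggregation rule are hypothetical placeholders, not the actual determinants or scoring systems of DERM, DREAM, PHED or RISKOFDERM:

# One-at-a-Time (OAT) sensitivity sketch for a generic additive scoring model.
# Determinant names, baseline scores and ranges below are hypothetical
# placeholders, not the actual scores used by any of the four models.
baseline = {"nozzle_height": 2, "wind_direction": 3, "gloves": 1, "cleaning": 2}
ranges = {"nozzle_height": (0, 4), "wind_direction": (0, 4),
          "gloves": (0, 4), "cleaning": (0, 4)}

def total_score(scores):
    return sum(scores.values())   # placeholder aggregation rule

base = total_score(baseline)
for det, (lo, hi) in ranges.items():
    effects = []
    for value in (lo, hi):               # vary ONE determinant at a time
        scenario = dict(baseline)
        scenario[det] = value
        effects.append(total_score(scenario) - base)
    print(f"{det}: score change over its range = {min(effects)} .. {max(effects)}")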
Description of the Study Area

The study area selected was Vereda La Hoya, a rural region that belongs to the city of Tunja in the highlands of Colombia. This region is devoted mainly to the cultivation of potato in production units of around 3 hectares. Potato crops in this region are vulnerable to three major pests: the soil-dwelling larvae of the Andean weevil (Premnotrypes vorax), the late blight fungus (Phytophthora infestans) and the Guatemalan potato moth (Tecia solanivora) [22]. The pesticide management to control these pests is performed along three main activities: the preparation of the pesticide, the application itself, and the cleaning of the spraying equipment [18,21]. During the whole pesticide management, farmers wear work clothing consisting of trousers, short-sleeve shirts and plastic boots. The three main activities are:

a. Pesticide preparation, which consists of opening the bottle containing the pure pesticide substance, mixing the solution of (different) pesticides and water, and loading the tank of the knapsack sprayer. Farmers in Vereda La Hoya prepare the pesticides in a container of 100-L capacity. The pesticide and the water (normally 80 L, to obtain four applications of 20 L each) are mixed in this container with the aid of a wooden stick. During the mixing and the filling of the tank there are usually spills out of the container, reaching different parts of the body including hands, arms, chest and legs.

b. Pesticide application, in which the knapsack sprayer is carried on the back and the pesticide application starts with the spraying process on the field. During this activity the farmer's body is exposed to the droplets emitted by the nozzles. In the study area, the spraying is performed with hand-pressure sprayers with a 20-L capacity. Farmers use two types of nozzles for pesticide application, which differ in the amount of pesticide discharged: a high-discharge (HD) nozzle (1.88 L/min) used during the first crop phases (sowing and emergence) and a low-discharge (LD) nozzle (1.26 L/min) used during the rest of the crop phases (growth, flowering and preharvest).

c. Cleaning, in which, once the application is finished, farmers clean the sprayer and the container by pouring clean water on all the accessories in a procedure repeated three times. This procedure is included in the booklet "Good Agricultural Practices" [14], which farmers use as a reference for pesticide management. During this activity, there are numerous spills from the equipment.
PHED is focused on farming systems in industrialized countries, its determinants evaluate the exposure during pesticide applications made by tractor and with motorized equipment, there is no distinction of the pesticide transport processes such as emission, transfer and deposition. RISKOFDERM is focused in SME's in industrialized countries but it does not differentiate the pesticide transportation processes like emission and transfer which are very important in farming systems with manual pesticide applications. COSHH was excluded from the evaluation as it does not consider important criteria relevant for case studies in developing countries such as target group, as it is focused on SME´s; guidance, as it is only available in a website with a user's manual for only some specific industries; outcome, as its assessment is qualitative; evaluated substances, as it does not evaluate pesticides in farming systems; dermal exposure descriptor, as it only assesses the potential exposure; and evaluated body parts, as it does make a distinction between any body part. EASE was also excluded from the evaluation as it does not consider criteria such as target group, as it is focused on industrialized processes; guidance, as there is no a user's manual with the model description; outcome, as it is qualitative; dermal exposure descriptor, as it evaluates only the potential exposure; and evaluated body parts, as it takes only arms and forearms. STOFENMANAGER was also excluded from the evaluation as it does not comply with criteria such as target group, as it is focused on industrial processes; guidance, as the website does not show the algorithms or model calculations; outcome, as the assessment is qualitative; and evaluated body parts, as there is no information available. Model outcomes for the case study of vereda la hoya (Table 3) shows the actual dermal exposure assessment outcomes for the case study performed by the selected models DERM, DREAM, PHED and RISKOFDERM and (Figure 2) shows the results of the sensitivity analysis of these models. The qualitative outcomes of actual dermal exposure for the four models differ significantly from each other. DERM assessed the actual dermal exposure as "moderate"; DREAM assessed the actual dermal exposure as "very high"; meanwhile both PHED and RISKOFDERM assessed the actual dermal exposure as "high". These assessments differ between each other because of the different structure of determinants within the models and the different scoring system for each determinant. According to the sensitivity analysis each model highlights different determinants which influence greatly the model outcomes. These determinants are spraying against the wind, height of the nozzle during the application, nozzle positioning in front and possible leaking of the sprayer for the model DERM; pesticide concentrations, emission, deposition and transfer for the model DREAM; washing the equipment, wearing gloves, replacement frequency of gloves and clothes, and personal hygiene for the model PHED; and the exposed body are and protection clothing for the model RISKOFDERM. In addition, the outcomes from DERM, DREAM, and PHED are semi-quantitative and the outcome from RISKOFDERM is quantitative. These issues show that the model outcomes are not comparable and an accurate risk assessment is only possible by measuring the dermal exposure directly in the study areas. 
Evaluation of models

Previous studies in Vereda La Hoya found that dermal exposure to pesticides is very high [15,21] because of inadequate work clothing, the modification of nozzles to increase the discharge, inappropriate cleaning of the application equipment, pesticide application against the wind direction, and the use of pesticides with a high level of toxicity. Even though the evaluated dermal exposure models give an insight into the level of exposure, their outcomes are not comparable (Table 2). Furthermore, none of them covered all the relevant determinants identified in previous studies. However, the model DREAM assesses the dermal exposure in the study area as "very high", and taking into account that its determinants cover many characteristics of these farming systems and the level of exposure risk in the study area, this model might give the most accurate dermal exposure estimation. Even though its validity and accuracy have only been partially proven [38,39], the results of this paper might contribute to the further validation of the model. Based on the sensitivity analysis and the results, several issues might be taken into account within the structure of the models, which could improve the accuracy of the estimations. These issues are discussed separately for each model.

DERM (Dermal exposure ranking method)

This is a low-cost and easy-to-use method for the assessment of exposure to pesticides in developing countries, and it helps to identify the determinants that most influence the exposure. However, the validation of this model is incomplete, and important determinants like washing the equipment, task duration, wearing gloves, frequency of replacement of gloves, work clothing, personal hygiene, and climate conditions like wind speed and humidity should be included to improve the assessment.

DREAM (Dermal exposure assessment method)

This model has a structure in which the determinants cover most of the characteristics present in the case study. However, there are still some important determinants that can improve the accuracy. One is the differentiation of the level of protection for the body parts. Previous studies have found that the level of protection given by the work clothing differs between body parts [21], while the model only differentiates the protection for the body and the hands. On the other hand, the inclusion of climate conditions like wind speed and humidity, which influence the dermal exposure, might improve the model accuracy as well. Despite this issue, and comparing the model outcome with the exposure assessment previously made in the study area, the qualitative assessment of this model is the most realistic of the four evaluated models.

PHED (Pesticide handlers exposure database)

This method is easy to use and includes determinants not found in other models, such as washing the equipment, wearing gloves, replacement frequency of gloves and clothes, and personal hygiene, which, according to the sensitivity analysis, strongly influence the scoring. However, other determinants in the model, like using an enclosed mixing system, a tractor with an enclosed cab, and application with motorized sprayers, are not relevant for the working situations of farming systems in developing countries.
Additionally, this model does not assess processes like emission and transfer; therefore, it is useful for a quick assessment of dermal exposure in agricultural systems in industrialized countries but is not appropriate for study areas in developing countries.

RISKOFDERM (Risk assessment of occupational dermal exposure to chemicals)

This model easily assesses the dermal exposure, giving estimations in terms of mg/cm²/h, which facilitates comparison with direct dermal exposure measurements or reference values to assess the risk. However, this assessment does not take into account emission and transfer processes, and it also includes determinants relevant only for agricultural systems in industrialized countries, such as automation. Therefore, this model is not appropriate for the case study of farming systems in developing countries.

DERM, DREAM, PHED and RISKOFDERM were applied in the case study of Vereda La Hoya, in which pesticide management is performed with hand-pressurized sprayers. From the comparison of the models, DERM and DREAM were found to be the most appropriate. These results are valid for potato farming systems and many other crop systems with similar characteristics in different regions of Latin America, and might also be valid for other regions worldwide with similar pesticide applications in Africa or Asia. However, the results are not valid for more sophisticated pesticide applications in crops in developing countries such as flowers, banana, coffee, sugar cane, rice, etc. For these crops, the comparison of model outcomes might give a different conclusion. For instance, DREAM and PHED are models whose assessments can be targeted at pesticide applications with sophisticated techniques, and they might be useful for the exposure assessment in these farming systems. Improving the structure of the determinants of DREAM and DERM might not only improve the accuracy of exposure estimations but might also result in a brand new model for human exposure with high specificity for farming systems in developing countries.

Conclusions

This research evaluated in depth the available models for human exposure assessment, so that assessors can decide which model is the most appropriate according to the characteristics of the study area in which the model is going to be applied; furthermore, this research suggested improvements to the models in order to increase the estimation accuracy. From a comparison of the models after a multi-criteria analysis, DERM, DREAM, PHED and RISKOFDERM were selected as the most appropriate models, as they fulfill the required criteria for case studies in developing countries. Afterwards, these four models were applied to assess the dermal exposure in the case study of Vereda La Hoya, and their determinants were compared with the characteristics of the study area. DREAM and DERM were found to be the most appropriate models. However, because some relevant determinants are still absent (i.e., differentiation of the protection factor according to the different body parts and climate conditions in the case of DREAM; and washing the equipment, task duration, wearing gloves, frequency of replacement of gloves, work clothing, personal hygiene and climate conditions in the case of DERM), the accuracy of these models could be improved if these determinants were included.
When comparing the final model assessments of dermal exposure in the study area, DREAM was found to be the model that most accurately assesses the dermal exposure in this study area.
Bis(trimethylammonium) tetrachloridodiphenylstannate(IV)

The title compound, [(CH3)3NH]2[Sn(C6H5)2Cl4], consists of [(CH3)3NH]+ cations and [SnPh2Cl4]2− anions in which the Sn atom, located on a centre of inversion, is bonded to four Cl atoms and two phenyl rings, giving an octahedral geometry with the phenyl rings in trans positions. In the crystal, the cations and the anions are connected by N—H⋯Cl hydrogen bonds and C—H⋯Cl interactions.

Comment

Our interest in organotin(IV) compounds is related to the various applications found for this family of compounds (Evans & Karpel, 1985; Kapoor et al., 2005; Zhang et al., 2006). Many compounds containing the [SnPh2Cl4]2− ion in the cis or trans conformation have been reported (Ouyang et al., 1998; Hazell et al., 1998; Fernandez et al., 2002; Venkatraman et al., 2004; Garcia-Seijo et al., 2001; Teoh et al., 1992). In our search for new organotin(IV) compounds, we have initiated here the study of the interactions between (CH3)3N·HCl and SnPh2Cl2, which has yielded the title compound. In the [Ph2SnCl4]2− anion, the tin atom is located on a centre of inversion and is bonded to four Cl atoms and two phenyl groups, giving an octahedral geometry with the phenyl groups in trans positions (Fig. 1). Consequently, the angle between the two trans groups is exactly 180°, while the phenyl rings are almost perpendicular to the equatorial SnCl4 plane [C1—Sn1—Cl1 = 89.39 (6)°, C1—Sn1—Cl2 = 90.86 (7)°]. In the crystal, the anion and the cations are linked by N—H⋯Cl hydrogen bonds (Fig. 1) and C—H⋯Cl intermolecular interactions (Table 1).

Experimental

The title compound was obtained as a white crystalline solid by reacting trimethylammonium chloride with diphenyltin dichloride in chloroform (2:1 ratio; m.p. 443 K). After slow evaporation of the solvent, colourless crystals suitable for X-ray diffraction analysis were obtained.

Refinement

The NH H atom was located in a difference Fourier map and was freely refined. The C-bound H atoms were included in calculated positions and treated as riding: C—H = 0.95 and 0.98 Å for CH and CH3 H atoms, respectively, with Uiso(H) = k × Ueq(C), where k = 1.2 for CH H atoms and k = 1.5 for CH3 H atoms.

Fig. 1. The molecular structure of the title compound with the atom-numbering scheme. Displacement ellipsoids are drawn at the 50% probability level [the C-bound H atoms have been omitted for clarity; symmetry code: (') = −x + 1, −y + 1, −z + 1].

Special details

Experimental. Multi-scan absorption correction from symmetry-related measurements, Sortav (Blessing, 1995).

Geometry. Bond distances, angles, etc. have been calculated using the rounded fractional coordinates. All s.u.'s are estimated from the variances of the (full) variance-covariance matrix. The cell e.s.d.'s are taken into account in the estimation of distances, angles and torsion angles.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F².
The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
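For reference, the weighted R-factor and the goodness of fit mentioned above are conventionally defined in F²-based refinements as follows; these are the standard definitions, not formulas restated in the paper itself:

```latex
wR_2 = \left[\frac{\sum w\,(F_o^2 - F_c^2)^2}{\sum w\,(F_o^2)^2}\right]^{1/2},
\qquad
S = \left[\frac{\sum w\,(F_o^2 - F_c^2)^2}{n - p}\right]^{1/2}
```

Here w is the weight assigned to each reflection, Fo and Fc are the observed and calculated structure factors, n is the number of reflections, and p is the number of refined parameters.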
Study of circulation ratio for natural circulation in water-tube boilers at different operating conditions

The basic objective of this paper is to identify the desired circulation ratio for natural circulation in water-tube boilers under different operating conditions. This requires a basic study of heat flux and the modes of boiling heat transfer, and of phenomena like departure from nucleate boiling and tube overheating. The parameters that need to be studied are heat flux, pressure, dryness fraction, void fraction and liquid velocity, together with their impact on the required circulation ratio. For a natural circulation boiler, the circulation ratio is one of the most important design parameters, as other design parameters like the critical heat flux and the skin temperature are mainly derived from it. The required circulation ratio can vary with the boiler pressure, liquid velocity and maximum heat flux. This study is intended to provide input for the safe and optimum design of a natural circulation boiler.

Introduction

The continuous motion of the water and steam mixture ensures continuous and efficient removal of heat from the heating surface of the boiler. This motion is usually referred to as circulation. Boilers with natural circulation have a wide range of applications, such as power cycles and industrial heating processes. The motive force driving the steam/water mixture through the tubes (water-tube boilers) or over the tubes (fire-tube boilers) in natural-circulation systems is the difference in density between the cooler water in the downcomer circuits and the steam/water mixture in the riser tubes. This flow must be adequate to cool the tubes and prevent overheating. As this flow involves two-phase flow, the flow pattern is a very important parameter in the design of the steam generator; it strongly affects the flow stability and heat transfer characteristics of steam-generating equipment. This study explains the role of the circulation ratio in the efficient and reliable operation of a boiler.

Boiler Circulation

Boilers are designed with an economizer, evaporator and superheater, depending on the design parameters. Economizers add sensible heat to water; the economizer water outlet temperature will be close to the saturation temperature. The water is forced through the economizer by the boiler feed pumps. Evaporators may be multi-tubular shells, water-wall tubes, boiler bank tubes or bed coils, as in fluidized bed combustion (FBC) boilers. In evaporators, latent heat is added; the addition of heat takes place at the boiling temperature. Superheaters add heat to steam, that is, heat is added to the steam leaving the boiler. Boiler circulation is the movement of water, a mixture of steam and water, or steam through the boiler. Saturated water is the densest fluid in a boiler circuit and may contain some steam bubbles. Figure 1 shows the typical arrangement of a water-tube, natural-circulation boiler with an external steam drum and external downcomer and riser pipes. Feedwater enters the drum from an economizer and mixes with the steam/water mixture inside the drum. Downcomers carry the resultant cool water to the bottom of the evaporator tubes, while external risers carry the water/steam mixture to the steam drum. The heat transfer tubes also act as risers for generating steam.
Figure 1. Circulation circuit of a boiler.

Tube failures occur due to a condition known as departure from nucleate boiling (DNB), when the actual heat flux in the boiling circuit exceeds a critical value known as the critical heat flux, itself a function of several variables. When this occurs, the rate of bubble formation is so high compared to the rate at which the bubbles are carried away by the mixture that the tube is not cooled properly, resulting in overheating and failure. If there is not enough flow through a riser tube, the tube metal will overheat and may burst.

Circulation Ratio

The ratio of the actual mass flow through the circuit to the steam generated is called the circulation ratio. It is simply the reciprocal of the outlet dryness fraction. The circulation ratio is a function of the tube diameter, the number of tubes, the tube orientation, the rate of heat transfer and the available height. The flow of water through a circuit should exceed the steam generated in order to protect the tubes from overheating. The boiler tubes, their feeding downcomer pipes, and the relief tubes/pipes are arranged in such a way that the desired flow is obtained to safeguard the tubes. The circulation ratio (CR) by itself does not give a complete picture of the circulation system. CR must be used in conjunction with heat flux, steam pressure, tube size, orientation, the roughness of the tubes, water quality, etc., to understand the boiling process and its reliability. A low circulation ratio can result in tube deformation, tube leakage failures, and tube-to-fin weld failures; the failure mode varies depending upon the flow, heat input, tube size, boiler configuration, and water quality.

Reliability criteria

The reliability of the circulation circuit should be checked for various flow abnormalities such as flow stagnation, flow reversal, flow stratification, departure from nucleate boiling, and heat transfer in the dry-out region.

1. Flow stratification

This phenomenon mostly occurs in tubes with a low upward inclination and low mass velocity. In this case, steam flows in the upper part of the tube and water in the lower part; this is also referred to as separated flow. Consequently, the upper part of the tube can overheat or burn out due to the poor steam-side heat transfer coefficient. The difference in temperature between the upper and lower sides of the tube can also cause tube bending. Flow stratification is primarily a function of flow rate, tube inclination and heat flux. The flow rate in a horizontal tube, or a tube with a low upward inclination angle, should be large enough to avoid phase stratification. If the heat flux is sufficiently low, the small bubbles generated can flow along the upper wall without coalescence and do not cause phase stratification. On the other hand, at high heat flux, the high velocity of the two-phase mixture and the disturbances induced by the high generation rate of vapour prevent phase stratification but induce intermittent flow. Phase stratification mainly occurs at moderate heat flux and low velocity. This phenomenon is most pronounced in horizontal tubes and decreases with increasing upward angle; the problem is aggravated in tubes with a downward inclination. The criterion for flow stratification is mainly based on the Froude number, which is the ratio of the horizontal component of momentum to the gravity force. One of the most common correlations for the calculation of the critical velocity can be expressed as follows.
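A common Froude-number form for such a stratification criterion, given here as an assumption rather than the paper's exact equation, is:

```latex
\mathrm{Fr} = \frac{j_l}{\sqrt{g\,D}}, \qquad
\text{stratification is avoided when } \mathrm{Fr} \geq \mathrm{Fr}_{\mathrm{crit}}
```

where j_l is the volumetric flux of the liquid, g is the gravitational acceleration, D is the tube diameter, and the critical value Fr_crit, which depends on inclination and heat flux, must be taken from the original correlation.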
Here the variables are the volumetric flux of the liquid, the gravitational acceleration, and the tube diameter (the equivalent diameter of the channel cross-section). Another correlation, Styrikovich's formula, gives the critical mass flux (kg/m²s) and involves the void fraction. As the objective of this work is to deduce the circulation ratio, which is the reciprocal of the outlet dryness fraction, one needs to study the impact of velocity and pressure on the critical dryness fraction. The results of this study are plotted in Figure 3. The observations can be summarized as follows: the acceptable dryness fraction decreases with an increase in velocity, and the rate of change of the dryness fraction with respect to velocity decreases at higher velocity; the acceptable dryness fraction also decreases with an increase in pressure.

2. Departure from nucleate boiling

The nucleate boiling regime is characterized by a high heat transfer coefficient, which is a very important boiler design parameter, as it determines the ability of the flow regime to remove heat from the heat transfer surface. As the heat flux reaches a critical limit, a sharp reduction in the heat transfer coefficient is observed, causing higher metal temperature and possible tube "burnout". This phenomenon is called departure from nucleate boiling, and the critical limit of the heat flux is called the "critical heat flux". The flow pattern and the mechanism of heat transfer are closely associated with the boiling process. The various flow patterns and the associated modes of heat transfer are as follows. Subcooled boiling: the core liquid is not saturated, but bubbles are generated on the wall owing to the high wall temperature and collapse in the core liquid. Saturated nucleate boiling: the fluid is in the saturated state, and the bubbles generated on the wall are distributed over the whole tube cross-section. Saturated forced convective boiling: there is a thin liquid layer with an annular flow pattern.

Heat transfer in nucleate boiling

Heat transfer performance is very high in the saturated nucleate boiling region and the saturated forced convective region because a liquid layer exists on the tube wall. The correlation of Schrock and Grossman is used to calculate the heat transfer coefficient for both nucleate and forced convection boiling. In this correlation, the first term shows the effect of nucleate boiling and the second term that of convective heat transfer. In saturated nucleate boiling, characterized by the low steam quality region, the first term dominates and the heat transfer increases with heat flux. At higher steam quality, the flow pattern shifts to forced convection boiling and the heat transfer increases with an increase in liquid velocity.

2.2 Heat transfer in the post-dry-out region

The popular correlations available for the calculation of the exit dryness fraction are empirical in nature. According to these correlations, the exit dryness fraction is a strong function of heat flux. In ordinary steam-generating tubes in boilers, in which the maximum heat flux is usually lower than 4 × 10⁵ W/m², the value of the dryness fraction is not practically affected by the heat flux. For such equipment, the exit dryness fraction decreases with an increase in mass flux. This phenomenon can be mainly attributed to the increased shear force at the annular interface of liquid and steam, causing an increased rate of droplet generation and the disappearance of the liquid layer at higher mass flux.
The correlation of Doroshchul and Nigmatulin can be used for the calculation of the exit dryness fraction; its variables are the surface tension, the kinematic viscosity, the liquid density and the vapour density. This correlation can be used to analyze the effect of mass flux and pressure on the exit dryness fraction; it demonstrates that the exit dryness fraction decreases with an increase in mass velocity and pressure. The effect of mass velocity on the exit dryness fraction has been explained earlier. As the pressure increases, the kinematic viscosity and the difference between the liquid and gas densities decrease, causing the disappearance of the liquid layer from the interface area. Due to this effect, the exit dryness fraction decreases with an increase in pressure. As dry-out is a phenomenon related to the flow configuration, the dry-out quality is little affected by the heat flux, though the heat flux does affect the distance of the dry-out point from the tube inlet. The wall temperature rises in the high-quality region, a condition called "burnout". However, even if dry-out occurs at low heat flux conditions, physical burnout of the tube does not necessarily occur, because the rise in wall temperature is small.

3. Void fraction

In two-phase flow, the void fraction is defined as the ratio of the gas flow area to the total flow area. It is a function of the volumetric fluxes of the gas and the liquid, the liquid and vapour densities, the surface tension, the geometrical dimensions of the channel, and the mass fluxes of steam and water. In steam–water two-phase flow, the properties of steam and water are functions of pressure; consequently, the void fraction reduces to a function of the mass fluxes, the steam quality and the channel dimensions. According to Smith's (1969) empirical formula, the void fraction is a function of the steam quality and the density ratio, which in turn is a function of pressure, but it is independent of mass flow rate and channel dimension. This correlation can be used to analyze the effect of the dryness fraction on the void fraction, which is an important parameter for the reliability considerations of steam-generating equipment. In natural circulation boilers, the void fraction is used as a design criterion: as per design norms, the void fraction at the exit of the tube should be less than 0.7. As the void fraction can be expressed in terms of the exit dryness fraction, the critical exit dryness fraction can be plotted as a function of pressure.

Conclusion

This study can be used to deduce the design criteria for the natural circulation circuit of a boiler. The most important design aspect of the natural circulation circuit is to ensure a sufficient mass flux of circulating water to avoid burnout of the evaporator tubes. The following are practical design considerations for a natural circulation boiler. Circulation velocity: circulation velocity is a very important design criterion for a horizontal tube, or a tube with an upward inclination of less than 10°, to avoid phase stratification. This becomes more critical in circuits having a lower circulation ratio, or a higher dryness fraction. The critical dryness fraction as a function of velocity and pressure has been plotted in the earlier section. Void fraction: the void fraction is another important design criterion for natural circulation circuit design. As the void fraction is primarily a function of the dryness fraction and pressure, the acceptable dryness fraction can be deduced from the void fraction criterion, as illustrated in the sketch below.
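The 0.7 exit void fraction norm can be checked directly once a void fraction correlation is chosen. The sketch below uses the generic separated-flow slip-ratio relation rather than Smith's exact correlation, with an assumed slip ratio and illustrative saturation densities, purely to show the shape of the check.

```python
def void_fraction(x: float, rho_g: float, rho_l: float, slip: float = 1.0) -> float:
    """Generic slip-ratio relation between steam quality x and void
    fraction alpha: alpha = 1 / (1 + S * (rho_g/rho_l) * (1-x)/x).
    slip = 1.0 reproduces the homogeneous-flow model; Smith-type
    correlations effectively supply a quality-dependent slip."""
    if x <= 0.0:
        return 0.0
    return 1.0 / (1.0 + slip * (rho_g / rho_l) * (1.0 - x) / x)

def exit_quality_ok(x_exit, rho_g, rho_l, slip=1.0, alpha_limit=0.7):
    """Design check: exit void fraction must stay below the quoted
    norm of 0.7 for natural-circulation tubes."""
    return void_fraction(x_exit, rho_g, rho_l, slip) < alpha_limit

# Illustrative saturation densities near 100 bar (approximate values):
rho_l, rho_g = 688.0, 55.5   # kg/m^3, liquid / vapour
for x in (0.05, 0.10, 0.20, 0.30):
    alpha = void_fraction(x, rho_g, rho_l, slip=1.5)
    print(f"x_exit = {x:.2f} -> alpha = {alpha:.2f}, "
          f"{'OK' if alpha < 0.7 else 'exceeds 0.7 limit'}")
```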
The acceptable value of the dryness fraction has been plotted in the previous section. An acceptable dryness fraction can also be deduced from the departure from nucleate boiling criterion; however, as this dryness fraction is higher than the acceptable dryness fraction from the void fraction criterion, the acceptable dryness fraction from the void fraction criterion is taken as the design norm.
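As a closing illustration of the bookkeeping implied by these criteria, the sketch below converts a chosen acceptable exit dryness fraction into a circulation ratio and the corresponding circuit flows, using the relation CR = 1/x_exit stated in the text; the numbers are illustrative, not values from the study.

```python
def circulation_ratio(exit_dryness_fraction: float) -> float:
    """CR = (total mass flow through circuit) / (steam generated)
    = 1 / x_exit, as stated in the text."""
    if not 0.0 < exit_dryness_fraction <= 1.0:
        raise ValueError("exit dryness fraction must be in (0, 1]")
    return 1.0 / exit_dryness_fraction

def circuit_flows(steam_output_kg_s: float, cr: float):
    """Total circulating flow and recirculated water flow for a
    given steam output and circulation ratio."""
    total_flow = cr * steam_output_kg_s
    recirculated_water = total_flow - steam_output_kg_s
    return total_flow, recirculated_water

# Example (illustrative numbers): a drum boiler generating 10 kg/s of
# steam with an acceptable exit dryness fraction of 0.10 needs CR = 10,
# i.e. 100 kg/s circulating through the evaporator circuit.
cr = circulation_ratio(0.10)
total, recirc = circuit_flows(10.0, cr)
print(f"CR = {cr:.1f}, total flow = {total:.0f} kg/s, "
      f"recirculated water = {recirc:.0f} kg/s")
```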
Adjusting the Structure of a Peptide Nucleic Acid (PNA) Molecular Beacon and Promoting Its DNA Detection by a Hybrid with Quencher-Modified DNA

In this study, we performed an elaborate adjustment of the structure of peptide nucleic acid (PNA) molecular beacons as probes for detecting nucleic acids. We synthesized PNA beacons with various numbers of Glu, Lys, and dabcyl (Dab) quenchers, and we investigated their fluorescence changes (F1/1/F0) with and without full-match DNA. As the number of Glu/Lys or Dab increased, the F1/1/F0 tended to decrease. Among the different beacons, the PNA beacon with one Glu and one Lys (P1Q1) showed the largest F1/1/F0. On the other hand, a relatively large F1/1/F0 was obtained when the number of Glu/Lys and the number of Dab were the same, and the balance between the numbers of Glu/Lys and Dab seemed to affect the F1/1/F0. We also investigated DNA detection by the prehybrid of P1Q1 bearing the T790M base sequence [P1Q1(T790M)] with quencher-modified DNA (Q-DNA). We examined the detection of DNA with a single-base mismatch by P1Q1(T790M), and we clarified that the sequence was difficult to detect with P1Q1 alone but was successfully detected by the prehybrid of P1Q1 with the Q-DNA.

Introduction

Peptide nucleic acid (PNA) [1] is a nucleic acid surrogate that forms a stable, base-sequence-specific hybrid with DNA and RNA [2]. Since PNA consists of an uncharged amide backbone, PNA/DNA and PNA/RNA hybrids do not exhibit electrostatic repulsion, in contrast to DNA/DNA and DNA/RNA hybrids, which results in greater stability. Therefore, PNA is expected to be usable as a probe for detecting nucleic acids, and colorimetric detection using dyes [3-10] as well as mass-based [11,12], visual [13], electrochemical [11,14-20], and optical [18,21] detection, among others, has been reported. Furthermore, fluorescence-based detection of nucleic acids by conjugating PNA with fluorescent dyes is a plausible, highly sensitive detection method, and numerous studies that use these conjugates as probes to detect DNA and RNA have been reported [22]. Among these reported approaches, PNA molecular beacons (PNA beacons), in which the PNA is modified by a fluorescent dye and a quencher, are useful as illuminating probes for detecting DNA and RNA. A typical PNA beacon contains one fluorescent dye (Fam) and one Dab quencher at the two termini and, adjacent to these groups, one negatively (Glu) and one positively (Lys) charged amino acid residue [23-32]. Before the PNA beacon hybridizes with DNA or RNA, the Glu and the Lys are close to each other because of the intramolecular electrostatic interaction; as a result, the fluorescent dye and the quencher are also close to each other, and the PNA beacon is quenched. On the other hand, as the PNA beacon forms a hybrid with the target DNA or RNA, thereby relieving the interaction between the Glu and the Lys, the fluorescent dye and the quencher move apart, and the PNA beacon emits fluorescence. In this work, we consider that adjusting the intramolecular electrostatic interaction by varying the numbers of Glu and Lys in the PNA beacon is important in order to obtain better beacon performance.
However, to the best of our knowledge, the numbers of Glu/Lys have not yet been discussed in this context. Therefore, we prepared PNA beacons containing various numbers of Glu/Lys, while also varying the number of Dab quenchers, in order to obtain a PNA beacon that fluoresces more effectively when hybridized with a nucleic acid, and we assessed their fluorescence characteristics. In addition, we recently reported a system in which a hybrid of Fam-modified PNA (Fl-PNA) and Dab-modified DNA (Q-DNA) detected a target nucleic acid through light emission from the Fl-PNA via strand exchange with the target [33-37]. Here, we describe how the use of the PNA beacon, instead of the Fl-PNA, further improves nucleic acid detection.

Peptide Synthesis

All PNA beacons [PnQms, P1Q1(T790M), P1Q1(L858R), and P1Q1(exon19del); Figure 1] were synthesized on Fmoc-NH-SAL-PEG resin, containing 7.2 µmol Fmoc on its surface, using standard Fmoc protection chemistry. The deprotection and coupling processes were carried out at room temperature. The Fmoc group was removed using 20% piperidine in DMF for 7 min. Each coupling used 4 parts of the Fmoc-derivatized building block (Fmoc-derivatized amino acids, Fmoc-derivatized PNA monomers, an Fmoc-derivatized ethylene glycol linker, or the Fam or Dab quencher), 3.6 parts HBTU, and 11.5 parts NMM, dissolved in DMF/NMP and added to the resin; the reaction mixture was then shaken for 45 min. Peptides on the resin were cleaved by treatment with 95:2.5:2.5 (v/v) TFA/TIPS/water for 90 min at room temperature, and they were dissolved in 50:50 (v/v) 0.1% aqueous TFA in water/acetonitrile. Crude peptides were precipitated in diethyl ether, washed twice with diethyl ether until a neutral pH was reached, and then dried. Peptides were purified using reverse-phase high-pressure liquid chromatography (HPLC) on a C18 preparative column (Cadenza 5CD-C18; Imtakt, Kyoto, Japan). The final product was identified using matrix-assisted laser desorption/ionization time-of-flight (MALDI-ToF) mass spectrometry (Shimadzu AXIMA Confidence) (Supplementary Material, Figures S1-S17) and HPLC on a C18 analytical column (Cadenza CD-C18; Imtakt) (Supplementary Material, Figures S18-S34).

Melting Curves of Mixtures of PNA Beacon and DNA

Molar concentrations of PNA beacons and DNAs were estimated from the absorbance at 260 nm, measured using a UV-vis spectrometer (JASCO V-560) with the molar extinction coefficients of the nucleobases. The UV melting curves of equimolar mixtures of P1Q1 or P4Q1, with and without full-match DNA (fmDNA: 5′-TCTGCTGGGT-3′) or scrambled DNA (scrDNA: 5′-CGTGGTTCTG-3′), in aqueous buffer (100 mM PBS; pH 7.0), were measured using a Shimadzu TMSPC-8 Tm Analysis System equipped with a UV-visible spectrometer (UV-1700) and an eight-position Peltier temperature controller. The concentration of each PNA beacon was 5.0 µM, and the quartz cell path length was 1 cm. The melting curves were recorded while cooling the solution by 0.5 °C/0.5 min, from 80 to 10 °C, and measuring the absorbance at 260 nm. The observed absorbance was normalized to that at 80 °C. The melting temperature at which 50% of the strands remain hybridized (Tm) was obtained using the TMSPC-8 Tm analysis software.
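The Tm values below were produced by the vendor's analysis software; a minimal, generic way to estimate a Tm from a baseline-corrected melting curve (an assumption standing in for the TMSPC-8 algorithm, which is not described here) is interpolation at the 50% hybridized point:

```python
import numpy as np

def melting_temperature(temps_C, hybridized_fraction):
    """Tm = temperature at which the hybridized fraction crosses 0.5.
    Assumes a monotonic, baseline-corrected melting curve; this is a
    generic estimate, not the TMSPC-8 algorithm."""
    t = np.asarray(temps_C, dtype=float)
    f = np.asarray(hybridized_fraction, dtype=float)
    order = np.argsort(f)                 # np.interp needs ascending x
    return float(np.interp(0.5, f[order], t[order]))

# Synthetic two-state melting curve with an assumed Tm of 65 C:
temps = np.linspace(10, 80, 141)
frac = 1.0 / (1.0 + np.exp((temps - 65.0) / 2.5))
print(f"Estimated Tm = {melting_temperature(temps, frac):.1f} C")
```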
Fluorescence Titration Curves of PNA Beacon and DNA

The fluorescence spectra of 500 nM PnQm and DNA (fmDNA and scrDNA) mixtures, in aqueous buffer (100 mM PBS; pH 7.0) at 25 °C, were measured at an excitation wavelength of 495 nm and emission wavelengths from 500 to 700 nm, using a JASCO FP-8200 fluorescence spectrometer and a 1 cm quartz cell. For the titration curves, the concentration of PnQm was maintained at 500 nM, while the concentrations of DNA were varied to create DNA/PnQm ratios of 0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, or 2.0. The fluorescence spectra were measured after the PnQm was incubated with DNA for 15 min. The fluorescence titration curves were recorded on the basis of the fluorescence intensities at 525 nm. DNA detection by PnQm was assessed by measuring the ratio (F/F0) of the fluorescence intensity (at 525 nm) of the PNA beacon with DNA (F) to that of the PNA beacon without DNA (F0). For the fluorescence spectra of P1Q1(T790M), P1Q1(L858R), and P1Q1(exon19del) with DNA, we used T790M: 5′-CATCATGCAG-3′, L858R: 5′-GGGCGGGCCA-3′, and exon19del: 5′-ATCAAAACAT-3′ as the DNAs.

Hybrid Formation of DNA with PNA Beacon and Fluorescence Detection of DNA by PNA Beacon

To confirm that the synthesized PNA beacons (PnQms) formed hybrids with the DNA, we measured the UV melting curves of a 1:1 mixture of P1Q1 and the DNAs (left panel in Figure 2). A clear sigmoid curve was observed for the mixture of P1Q1 and full-match DNA (fmDNA), indicating that they formed a hybrid (red-filled circles; Tm = 65 °C). On the other hand, for P1Q1 alone, and for P1Q1 with scrambled DNA (scrDNA), the absorption changed linearly under the applied conditions, indicating that a hybrid did not form (blue- and black-filled circles, respectively). The UV melting curve of a 1:1 mixture of P4Q1 with and without the DNAs showed similar results (right panel in Figure 2; Tm = 66 °C for P4Q1/fmDNA). From these results, we confirmed that the PNA beacons and DNA form a sequence-specific hybrid under conditions of 25 °C and 37 °C, and we subsequently performed the fluorescence assessment. To confirm that PnQms detected the target DNA, we measured the fluorescence spectra of P1Q1 (as a typical PNA beacon) and, as a control, P1Q0 (not modified by a Dab quencher), with and without an equimolar amount of fmDNA. As shown in Figure 3, P1Q0 showed fluorescence derived from the Fam group at 520 nm (black solid line); however, the fluorescence intensity was less than that of Fam [5(6)-carboxyfluorescein] (green solid line). This indicates that the peptide chain affects the fluorescence properties of the Fam group. The intensity of P1Q0 with fmDNA (black broken line) was lower than that of P1Q0 alone. On the other hand, P1Q1 showed almost no fluorescence under these conditions (red solid line), whereas, for P1Q1 with fmDNA, fluorescence was observed at 530 nm (red broken line). This fluorescence indicates that, when P1Q1 forms a hybrid with fmDNA, the Fam group moves away from the Dab quencher in the PNA beacon, and the fluorescence derived from the Fam group is reactivated. In other words, P1Q1 is well folded before forming the hybrid with fmDNA, and it is suggested that the Dab quenches the fluorescence of the Fam group.
The intensity of P1Q1 with fmDNA was not restored to that of P1Q0, indicating that the chain length of the PNA beacon (10-mer) does not keep the Fam group far enough away from the Dab quencher. To examine the fluorescence characteristics of P1Q1 with the DNAs, we measured fluorescence spectra at various concentrations of the DNAs and created titration curves (Figure 4). As shown in the left panel of Figure 4A, the fluorescence spectra of P1Q1 show that the fluorescence intensity at 525 nm increased as fmDNA was gradually added. On the other hand, the addition of scrDNA, as shown in the right panel of Figure 4A, produced almost no change at 525 nm. This indicates that the PNA beacon forms a sequence-specific hybrid with the DNAs. Next, we plotted the fluorescence intensity (F/F0) at 525 nm for each DNA/P1Q1 ratio on the basis of the fluorescence spectra in Figure 4A,B. As fmDNA was added from 0 to 1 equivalent, the F/F0 increased linearly, and above 1 equivalent the F/F0 did not change (blue-filled circles). On the other hand, when scrDNA was added, P1Q1 hardly emitted fluorescence under these conditions (red-filled circles). These results show that P1Q1 forms a 1:1 hybrid with full-match DNA.

Elaborate Adjustment of PnQms for Fluorescence Detection of DNA

The structure of the PNA beacon reported by Kuhn et al. (P1Q1 in this study) is used as a typical PNA beacon [23]. We were interested in whether this PNA beacon has the optimal structure for detecting nucleic acids. Therefore, we used PnQms, in which the numbers of Glu/Lys and Dab quenchers in the PNA beacon were varied, and we assessed their DNA detection from the fluorescence titration curves, in the same manner as in Figure 4. PnQ0, which does not contain the Dab quencher, showed a slight decrease in fluorescence upon the addition of fmDNA, regardless of whether the number of Glu/Lys was varied from 1 to 3, indicating that it is difficult for PnQ0 to detect DNA by fluorescence (Figure 5A and Supplementary Material, Figures S35-S37). On the other hand, for PnQ1, a clear 1:1 correspondence of the increase in fluorescence upon the addition of fmDNA was observed, and the response was further influenced by the number of Glu/Lys (Figure 5B and Supplementary Material, Figures S38-S41). The F1/1/F0 value, which is the ratio of the fluorescence of the 1:1 mixture of the PNA beacon with fmDNA to that of the PNA beacon alone, decreased in the order P1Q1 > P2Q1 = P3Q1 > P4Q1. It was confirmed that the F1/1/F0 values of PnQ2 and PnQ3 also depend on the number of Glu/Lys: P2Q2 ≈ P3Q2 > P1Q2 ≈ P4Q2 (Figure 5C and Supplementary Material, Figures S42-S45) and P3Q3 > P1Q3 = P2Q3 (Figure 5D and Supplementary Material, Figures S46-S48). As a summary of Figure 5A-D, each F1/1/F0 of the PnQms is shown in Figure 5E. This indicates that the numbers of Glu/Lys (n) and Dab quenchers (m) give a relatively large F1/1/F0 when n = 1 and m = 1 (F1/1/F0 = 6.1). When m is 3, the F1/1/F0 decreased, indicating that the quenching effect of Dab against the Fam group was too strong. Regarding n, the F1/1/F0 tended to decrease as the number increased. We assume that, as the number of Glu/Lys increases, the electrostatic interaction between Glu and Lys may inhibit the hybrid formation of PNA and DNA.
On the other hand, a relatively high F1/1/F0 was observed when the numbers n and m were the same (F1/1/F0 = 6.1, 5.1, and 3.4 for n:m = 1:1, 2:2, and 3:3, respectively). This shows that not only the values of n and m, but also the balance between them, affect the F1/1/F0 value. Overall, we determined that, among these PNA beacons, the one with the highest relative fluorescence is P1Q1; this is consistent with the PNA beacon reported by Kuhn et al.

Comparison of Fluorescence Detection of Target DNA by the PNA Beacon and the PNA Beacon/Quencher-Modified DNA Hybrid

In previous studies, we reported that a probe prehybridizing Fam-modified PNA (Fl-PNA) with Dab quencher-modified DNA (Q-DNA) successfully detected target DNA [35,36], and that a probe prehybridizing the PNA beacon with Q-DNA detected target miRNA [37]. To investigate whether the prehybridized Q-DNA and PNA beacon in this study detect target DNA more effectively than the PNA beacon alone, we assessed the F1/1/F0 value of the hybrid of P1Q1(T790M) with Q-DNA for the target DNA (Figure 7 and Supplementary Material, Figure S52). The values for P1Q1(T790M) mixed with one-base-mismatched T790M DNA and for P1Q1(T790M) alone were almost 1.0, whereas the value for P1Q1(T790M) mixed with full-match T790M DNA was 3.0. This was almost the same as the value shown by P1Q1(T790M) toward T790M DNA in Figure 6 (F1/1/F0 = 2.5). In other words, P1Q1(T790M) detected the target DNA, but the recognition of the DNA by the PNA beacon was not sufficient. On the other hand, the value for the P1Q1(T790M)/Q-DNA hybrid mixed with the one-base-mismatched DNA, and the value for the hybrid alone, were almost 1.0, whereas the value for the hybrid mixed with the complementary DNA was 8.0. In a previous study, when the complementary T790M DNA was detected with the Fl-PNA/Q-DNA hybrid, the F1/1/F0 value was 1.5-2.0 [35]. These results indicate that the PNA beacon/Q-DNA hybrid detects the target DNA more effectively than the Fl-PNA/Q-DNA hybrid or the PNA beacon alone. From these results, we demonstrated that a synergistic or additive effect of the Dab quenchers contained in the PNA beacon and in the Q-DNA was effectively exerted on the fluorescence detection of DNA.

Conclusions

In this study, we optimized the numbers of Glu/Lys and Dab quenchers in the PNA beacons (PnQms). As a result, we found that the structure of the conventional PNA beacon, P1Q1, is the best for the fluorescence detection of target nucleic acids; in other words, it was not necessary to increase the numbers of Glu/Lys and Dab quenchers to improve the DNA detection of PNA beacons. However, we also obtained new findings suggesting that the balance between the number of Glu/Lys and the number of Dab quenchers affects the DNA detection of PNA beacons. This result is expected to be valuable for optimizing the structures of other versions of PNA beacons. P1Q1 variants achieved detection depending on the sequence of the target DNAs, but for PNA sequences with a relatively low F1/1/F0 value, the hybrid of P1Q1 with quencher-modified DNA conferred more effective detection. This indicates that the combination of this PNA beacon and the PNA beacon/quencher-modified DNA hybrid probe is effective when detecting multiple DNA sequences.
In the future, we will investigate PNAs longer than the 10-mer used in this study in order to develop PNA beacons that target RNA, and we plan to use the hybrid probe to detect nucleic acids in cells.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.
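As a computational footnote to the titration analysis used throughout, the linear rise of F/F0 up to one equivalent of fmDNA followed by a plateau is what an exact 1:1 binding model predicts when Kd is far below the 500 nM probe concentration. The sketch below simulates such a curve; the Kd and the plateau F1/1/F0 are assumed illustrative values, not fitted parameters from this paper.

```python
import numpy as np

def bound_fraction(ratio, probe_nM=500.0, kd_nM=1.0):
    """Fraction of probe in the 1:1 probe-DNA complex, from the exact
    quadratic solution of P + D <-> PD at total DNA = ratio * probe."""
    P, D = probe_nM, ratio * probe_nM
    s = P + D + kd_nM
    return (s - np.sqrt(s**2 - 4.0 * P * D)) / (2.0 * P)

ratios = np.arange(0.0, 2.01, 0.2)   # DNA/PnQm ratios used in the paper
f_max_over_f0 = 6.1                  # assumed plateau, P1Q1-like value
f_over_f0 = 1.0 + (f_max_over_f0 - 1.0) * bound_fraction(ratios)
for r, f in zip(ratios, f_over_f0):
    print(f"DNA/PNA = {r:.1f} -> F/F0 = {f:.2f}")
```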
An efficient induction protocol for deriving mature oligodendrocytes from human dental stem cells

INTRODUCTION: This study examined the process of deriving mature oligodendrocytes from human dental pulp stem cells (hDPSCs) via addition of cerebrospinal fluid (CSF). METHODS: The hDPSCs were cultured in the presence of retinoic acid and CSF. The oligodendrocytes were confirmed using immunocytochemistry for specific glial markers, namely the Olig2 and MBP markers. RESULTS: The differentiated oligodendrocytes were immunopositive for the Olig2 and MBP markers at the end of the induction phase. CONCLUSION: This study indicated the glial differentiation of hDPSCs in the presence of CSF and appropriate inducers, which is a usable therapeutic technique in neuroregenerative medicine (Fig. 3, Ref. 24).

Introduction

Oligodendrocytes form the myelin sheath which surrounds the axons. The impairment of these cells disrupts the myelin structure and leads to severe neurological symptoms (Grade et al, 2013; Czepiel et al, 2015). Cell therapy is considered an efficient method to improve remyelination in demyelination diseases such as multiple sclerosis (MS) (Chang et al, 2014; Bojnordi et al, 2017). Among various types of stem cells, human dental pulp stem cells are recognized as a novel noninvasive source with the property of neuroglial differentiation under in vitro culture conditions (Sloan et al, 2007; Gronthos et al, 2002). However, the in vitro derivation of mature oligodendrocytes is accompanied by some limitations. Cerebrospinal fluid (CSF) can promote the neuroglial differentiation of mesenchymal stem cells (Johanson et al, 2008; Miyan et al, 2003). Nevertheless, the effect of CSF on the in vitro glial differentiation of hDPSCs has not been investigated so far. Therefore, the aim of our study was to evaluate the differentiation potential of hDPSCs into oligodendrocytes in the presence of glial inducers and CSF. Based on the fact that the enriched plexus of CSF contains various growth elements and essential factors, we used CSF and designed an efficient culture system by adding CSF to a cocktail of inducers. Our culture system is usable as an alternative procedure leading to significant enhancement of the process of generating oligodendrocytes from BMSCs.

CSF Collection

Rat embryos were used for collecting CSF in each experiment. Cerebrospinal fluid was isolated from the cisterna magna (Lee et al, 2012).

Morphological characterization of cultured hDPSCs

The hDPSCs were collected from the human dental pulp of molar teeth at Mazandaran University of Medical Sciences. After mechanical and enzymatic digestion, the cells were cultured, and the cell proliferation and morphological changes of the hDPSCs were monitored daily via phase contrast microscopy. The morphological changes of the differentiated glial cells were evaluated during the differentiation period.

Flow cytometry of mesenchymal surface markers

At the fourth passage, the hDPSCs were immunostained with antibodies against the mesenchymal surface markers CD90, CD44 and CD73.

Immunocytochemical evaluation with fluorescence microscopy

The immunocytochemistry technique was performed to confirm the differentiated oligodendrocytes. The specific oligodendrocyte markers Olig2 and MBP were investigated. The stained cells were evaluated via fluorescence microscope.
Statistical analysis

Data were analyzed using a one-way analysis of variance (ANOVA) test in SPSS 13.0 software; p < 0.05 was considered significant.

Characterization of hDPSCs

The hDPSCs appeared to gain the spindle-shaped fibroblastic morphology typical of mesenchymal stem cells (Fig. 1). The flow cytometry data proved the hDPSCs to be immunopositive for the mesenchymal markers CD44, CD90, and CD105 (Moayeri et al, 2017).

Differentiation of hDPSCs into mature oligodendrocytes

The differentiation of hDPSCs into oligodendrocytes was confirmed by the glial structure. The differentiated cells exposed to CSF showed specific morphological changes, as shown by phase contrast microscopy (Fig. 2). The glial differentiation into oligodendrocytes was also confirmed by immunostaining for the specific glial markers Olig2 and MBP (Fig. 2).

Discussion

The hDPSCs have a therapeutic potential in neural tissue engineering and neuroregenerative medicine based on their ability to undergo neuroglial differentiation (Gronthos et al, 2000; Huang et al, 2008). The hDPSCs are multipotent stem cells that arise from the neural crest and have the ability to differentiate into neurons and oligodendrocytes. These characteristics predispose them to become an applicable cell source in cell therapy for neurodegenerative diseases (Chun et al, 2016). In vitro differentiation of oligodendrocytes from hDPSCs depends on various differentiation procedures with different inducers. In our study, we designed an efficient culture condition for oligodendrocyte differentiation by adding CSF. CSF contains a variety of growth factors and natural nutrients, which provides a niche similar to the extracellular matrix of neural networks. These properties improve the maturation and differentiation processes of stem cells or progenitor cells. Our results showed that hDPSCs gained fibroblastic morphology and high adherent potential, which is in agreement with previous research (Martens et al). We observed that hDPSCs can be differentiated into mature oligodendrocytes in the presence of CSF, a finding which can be used to promote remyelination within therapeutic strategies applied in neuroregenerative medicine.
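For reference, the one-way ANOVA named in the statistical analysis can be reproduced with scipy; the group values below are invented placeholders to show the call, not data from this study.

```python
from scipy import stats

# Hypothetical marker-intensity measurements for three culture groups
# (placeholder numbers, not the study's data).
control       = [12.1, 11.8, 13.0, 12.4]
inducers_only = [15.2, 16.0, 14.8, 15.5]
inducers_csf  = [19.3, 20.1, 18.7, 19.8]

# scipy.stats.f_oneway performs a one-way ANOVA across the groups.
f_stat, p_value = stats.f_oneway(control, inducers_only, inducers_csf)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:   # the significance threshold used in the paper
    print("At least one group mean differs significantly.")
```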
Barriers and Enablers to Pulmonary Rehabilitation in Low- and Middle-Income Countries: A Qualitative Study of Healthcare Professionals

Introduction: Low- and middle-income countries bear a disproportionately high burden of global morbidity and mortality caused by chronic respiratory diseases. Pulmonary rehabilitation is recommended as a core intervention in the management of people with chronic respiratory diseases. However, the intervention remains poorly accessed/utilised globally, especially in low- and middle-income countries.

Aim: This qualitative study explored barriers and enablers to pulmonary rehabilitation in low- and middle-income countries from the perspective of healthcare professionals with pulmonary rehabilitation experience in these settings.

Methods: Online-based semi-structured in-depth interviews with healthcare professionals were undertaken to data saturation, exploring lived barriers and enablers to pulmonary rehabilitation in their low- or middle-income country. Anonymised interviews were audio-recorded, transcribed verbatim, and analysed using thematic analysis.

Results: A total of seven healthcare professionals from seven low- and middle-income countries representing Africa, Asia, and South America were interviewed. They included five physiotherapists (four females), one family physician (male), and one pulmonologist (female). Themes for barriers to pulmonary rehabilitation included limited resources, low awareness, coronavirus disease 2019, and patient access-related costs. Themes for enablers included local adaptation, motivated patients, coronavirus disease 2019 (which spanned both enablers and barriers), better awareness/recognition, provision of PR training, and resource support.

Conclusion: Barriers to pulmonary rehabilitation in low- and middle-income countries include limited resources, low awareness, coronavirus disease 2019, and patient access-related costs. Enablers include local adaptation, motivated patients, coronavirus disease 2019 (which spanned both enablers and barriers), better awareness/recognition, provision of PR training, and resource support. Successful implementation of these enablers will require engagement with multiple stakeholders. The findings of this study are a necessary step towards developing strategies that can overcome the existing pulmonary rehabilitation evidence-practice gap in low- and middle-income countries and alleviating the burden of chronic respiratory diseases in these countries.

Introduction

Pulmonary rehabilitation (PR) is a core component in the management of people with chronic respiratory diseases (CRDs). It is defined as a comprehensive intervention based on a thorough patient assessment followed by patient-tailored therapies, which include, but are not limited to, exercise training, education, and behaviour change, designed to improve the physical and psychological condition of people with chronic respiratory disease and to promote the long-term adherence of health-enhancing behaviours.1 PR leads to significant reductions in symptoms such as dyspnoea, fatigue, anxiety and depression, and significant improvements in exercise tolerance and overall health-related quality of life.2 Data from high-income countries suggest that it also significantly reduces the direct costs of chronic obstructive pulmonary disease (COPD) by decreasing unnecessary use of the healthcare system, particularly unplanned hospital admissions.
3 While the bulk of this evidence is based on those with COPD,2 there is also evidence supporting the effectiveness of PR in people with other CRDs, including asthma,4 post-tuberculosis lung disease5 and bronchiectasis.6 In addition, PR is cost-effective, as it may be delivered using minimal, low-cost equipment, making its implementation feasible even in low- and middle-income countries (LMICs) where access to specialist exercise equipment may be limited.7,8 Although PR is recommended in various national and international guidelines for the management of people with CRDs, notably COPD and bronchiectasis,3,9 it remains poorly accessed or underutilised around the world.3,10,11 Specifically, referral and patient uptake are poor.12 In addition, although it is LMICs that are disproportionately burdened by CRDs,13 current PR evidence is mainly based on studies from high-income countries.14 Of the eight papers exploring barriers and enablers to PR, none were from LMICs.10,12,15-20 LMICs face different challenges to high-income countries in terms of access to resources, meaning that the current literature cannot be generalised. Moreover, it has been reported that clinical PR services are not widely available in LMICs21 due to certain barriers. This study aimed to explore those barriers (and enablers) to PR in LMICs from the perspective of health professionals with PR work experience in these countries. This would be a necessary step towards developing strategies that can overcome the existing PR evidence-practice gap.10

Study Design

This was a qualitative, interview-based study. This method permitted an in-depth exploration of participants' lived experiences in implementing or delivering PR in an LMIC setting.22 One-to-one semi-structured interviews with participants elicited individual participant insights into their experiences regarding barriers and enablers to PR in their respective LMIC.23,24

Participant Recruitment

Participants were purposively recruited, with the inclusion criteria being healthcare professionals with experience in implementing or delivering PR in LMICs. Participants were identified from papers included in two recent systematic reviews on PR in low-resource settings,14,25 and from the Global RECHARGE network.26 Recruitment emails were sent to the corresponding authors of the papers, along with a consent form, a participant information sheet and a request to disseminate the invitation to their colleagues. Further to this, an open invitation was posted on Twitter.

Data Collection

Interviews used a topic guide (Supplementary Material 1) informed by published literature and piloted a priori, encompassing open questions around barriers and enablers to PR in an LMIC. The topic guide underwent several stages of development as follows: (a) it was first drafted by the lead researcher/author (FMB), followed by senior peer review and suggestions for improvement by his supervisor (HS); (b) FMB revised the topic guide accordingly, producing the second draft; (c) the second draft was then piloted by administering it to FMB's classmate on the UCL MSc physiotherapy programme in an hour-long practice in-depth interview on Zoom, in the presence of HS; (d) after the pilot session, HS gave FMB feedback to improve the conduct of the interview. Information regarding each participant's profession and experience in implementing or delivering PR in their LMIC was also collected.
Interviews were conducted at a time, and via an online platform (either Microsoft Teams or Zoom), of each participant's preference or convenience. It had been pragmatically planned that at least six participants would be recruited, as evidence suggests that a sample of six interviews may be sufficient to enable the development of meaningful themes and useful interpretations.27 However, our final sample size was determined by code and thematic saturation, that is, a point at which no new codes or themes, respectively, are observed in the interview data.28 The lead researcher (FMB) is a qualified physiotherapist with qualitative research experience.29

Data Management and Analysis

Interviews were audio- and video-recorded and later transcribed verbatim. The transcripts were anonymised and compared with the audio recordings for completeness and accuracy. Subsequently, transcripts were imported into QSR International's NVivo 12 qualitative data analysis software for iterative line-by-line coding and inductive thematic analysis30 across all interview data. This involved the lead researcher familiarising himself with the entire interview data set by reading and rereading the transcripts while making reflective notes on the literal content, looking closely at the words used by participants, interpreting what the data meant by assigning initial codes to segments of text, and exploring relationships between these codes before reducing them to core general themes. The senior author (HS, the lead researcher's supervisor) then checked both the codes and themes against the transcripts to confirm that they were accurate and representative of the participants' views. The identified key themes were refined for referential adequacy by returning to the raw data. Participants' quotations from the transcripts were extracted to provide supporting data for each final theme when narrating findings.

Results

Results from the participant recruitment process are shown in Figure 1. In summary, a total of seven healthcare professionals from seven LMICs, representing South America, Africa, and Asia, were interviewed. Countries from Asia were Kyrgyzstan and India, from South America was Argentina, and from Africa were Kenya, Malawi, South Africa, and Zimbabwe (Figure 2). Of the seven interviews, four were conducted on Zoom while three were conducted on Microsoft Teams. On average, each interview lasted 40 minutes (ranging from 20 to 60 minutes). Five of the participants were physiotherapists (four females), one was a family physician (male), and one was a pulmonologist (female). Of the five physiotherapists, two were respiratory physiotherapists, one was a sports physiotherapist, one was a public health specialist researching the implementation of PR, and one was a professor of PR. All physiotherapists had PR experience in either a clinical or research context within their respective LMIC. Four physiotherapists had experience in implementing and/or delivering a structured PR programme, while one had practised some aspects of PR as part of their broader cardiorespiratory physiotherapy practice. The pulmonologist was leading an ongoing randomised controlled trial of a PR programme, with their roles including leading the exercise component. The family physician had no prior experience with PR but had previously offered research support as a co-supervisor on a student-led PR research project.
Key themes relating to barriers to PR in LMICs were limited resources, low awareness or recognition, coronavirus disease 2019 (COVID-19), and patient-unique access barriers (Table 1). Key themes for enablers were local adaptation, motivated patients, COVID-19, better awareness or recognition, available PR training, and available PR resource support (Table 1).

Barriers

Theme 1: Limited Resources

Participants expressed the limited availability of various resources needed to implement and deliver PR as a barrier. The first resource barrier was the shortage of rehabilitation professionals, specifically physiotherapists, who would implement and deliver PR in their setting:

…there are few physiotherapists doing this practice. It's difficult to find a skilled physiotherapist to do the work. (Argentina participant)

Participants also mentioned a lack of PR knowledge or expertise among healthcare professionals as another barrier:

…there was nobody who was like you… there was a physio, but they had never heard of pulmonary rehab. So, they did lots of outpatient stuff and parks and all of that, but their respiratory knowledge was next to zero…. (Kenya participant)

Participants attributed this lack of knowledge or expertise to the absence of PR training in their national undergraduate physiotherapy curriculum:

I don't think they are aware that there's a whole field of rehabilitation, no. I think part of that is because it's not part of the curriculum… I think there's an important gap in the training that's being offered. (South Africa participant)

Another resource barrier was limited equipment. Either the equipment was not available or, where available, it was minimal, of low quality, or could not be utilised:

…you can't be saying I'm going to use a treadmill because you may not have one. Or in Zimbabwe's case… you can have that…, but there might be no electricity. (Zimbabwe participant)

This equipment barrier was attributed to the lack of financial support needed to purchase it. For example, in Argentina, patients had to donate funds to purchase the equipment.

Theme 3: COVID-19

The restrictions associated with the ongoing COVID-19 pandemic have resulted in low recruitment of eligible patients for PR in LMICs:

Initially, we were supposed to do a face-to-face intervention. But after COVID set in, we are still not allowed to have a lot of patients in one room because they are at risk of contracting the infection. (India participant)

The pandemic has also challenged the expertise of delivering conventional face-to-face PR. Therefore, some face-to-face PR programmes have been moved to online delivery. However, participants expressed barriers for this delivery model too, including lack of digital access and low digital literacy.

Enablers

Theme 1: Local Adaptation

Local adaptation also meant the use of locally available staff and equipment:

Community health workers could be trained to guide people in this and be involved with community rehabilitation or pulmonary rehabilitation… it doesn't have to be a physiotherapist in my mind… If you can have services in the community, people will be more likely to access it. (South Africa participant)

…looking at the equipment for exercise therapy, we manufactured them locally using the locally available resources… for strength training, we actually had to hire a tailor from the village, and he brought his own machine and we just had to go to kaunjika (market) and buy zitenje (cloth) and cut them off into pieces that can accommodate 1kg, 1/2 kg, 2kg, 3kg up to 8kg using sand.
(Malawi participant)

Theme 2: Motivated Patients

Participants also described the increased motivation of their patients to participate in PR as a facilitator. Several factors contributed to patient motivation, including a good therapist-patient relationship:

…it's patients' mood … patients were really, really enthusiastic and really interested to take part in this study…. It was really easy to communicate with patients…. before the trial, we ask for consent from patients and also ask them what time is more convenient for them when they can participate in our PR trial… So, it was discussed with patients, and they feel themselves really comfortable for this and they find it convenient. (Kyrgyzstan participant)

Some specific components of the PR programme and its associated benefits also contributed to patient motivation to participate:

I think the facilitators for patients is that they feel well, they notice that they feel well. And that made that they want to continue to the program with the program… group effect of the sessions…, the dynamic of the session. We try to do a diversity… We don't do unique training method; we try to use other methods in the programs that the patient feel that they do not do a routine training … (Argentina participant)

Theme 3: COVID-19

Although COVID-19 was perceived as a barrier to PR, it was also perceived as a facilitator. Specifically, long COVID-19 represents an extra CRD burden, thereby increasing the demand for PR, especially tele-PR.

Discussion

This study identified a shortage of healthcare professionals, particularly physiotherapists, as a barrier to PR in LMICs. An important aim of PR is to increase exercise tolerance and functional ability for people limited by their CRD.31 As such, exercise training is a core component of PR which should be prioritised,32 and physiotherapists are responsible for supervising this component as they are trained in exercise testing, prescription, and training.32,33 The shortage of physiotherapists in LMICs found in this study corroborates previous evidence; the WHO reported that, although there is no universally agreed or recommended minimal number of physiotherapists, the critical shortage of these professionals in LMICs is evident, with fewer than 10 physiotherapists per million inhabitants in many countries south of the Sahara and in the South-East Asia Region.34 This study also found a lack of PR knowledge or expertise among the available LMIC physiotherapists as another barrier to PR in LMICs. Participants attributed this barrier to a lack of PR training in the undergraduate physiotherapy curriculum. Notably, this training gap is reported to exist in most countries worldwide.35 However, it is more evident in LMICs, especially in the African, Eastern Mediterranean, and South-East Asia regions, where the current workforce of "skilled" rehabilitation professionals does not necessarily meet the needs of the population.34 Therefore, participants in this study recommended the inclusion of formal PR training in the undergraduate training of physiotherapists and other healthcare workers as part of interprofessional education. This is in line with the American Thoracic Society/European Respiratory Society policy statement for enhancing the implementation, use and delivery of PR, which recommends formal training in PR for any healthcare professionals involved in the care of people with COPD.35
Another barrier to PR in LMICs reported by participants in this study was low awareness or recognition of PR by the public, including people with CRDs, healthcare professionals and governments. The public is largely unaware of physiotherapy services in their country, including PR. This results in low uptake, attendance, and adherence to PR by people with CRDs, as they are not aware of its benefits, a finding that has also been reported elsewhere.36 Participants also said this lack of awareness exists among healthcare professionals, for example some doctors, which results in fewer referrals of patients with CRDs to a PR programme. This finding is consistent with a Lebanese study which attributed the lack of patient referral to PR by chest physicians to an absence of awareness and education about PR.37 Consequently, the lack of awareness of PR benefits or value translates into underfunding or under-allocation of resources for the service. This is not surprising because LMIC governments have restricted budgets and must prioritise investment in healthcare interventions that are perceived as "high-value".38 Government authorities cannot perceive PR as a high-value intervention if they are not aware or knowledgeable about its benefits in the first place. In addition, healthcare services in LMICs compete for resources and, as with other health interventions that require long-term investment, rehabilitation services such as PR appear less attractive than interventions that produce immediate results.39 The primary healthcare needs of populations in LMICs may be so great that attending to people with CRDs feels like a luxury.40 This, coupled with the absence of high-quality impact evaluation studies of PR in LMICs, can make the case for directing resources towards PR more difficult.41 Some participants in this study felt there was a tilt in favour of medicine in their LMIC, evident in the low numbers of rehabilitation professionals, such as physiotherapists, trained and employed by the government compared with the number of doctors. This doctor dominance over other healthcare professionals in many LMICs may lead them to deprioritise rehabilitation (including PR), which is traditionally delivered by therapists.40 Therefore, participants in this study suggested the need for physiotherapy advocacy or public awareness campaigns to improve awareness and recognition of physiotherapy services including PR. This suggestion resonates with one of the key messages in the American Thoracic Society/European Respiratory Society policy statement on increasing the implementation and delivery of PR: that campaigns are needed to foster public awareness of PR.35 The ongoing COVID-19 pandemic was also mentioned by participants in this study as an access barrier to PR in LMICs. Due to physical distancing requirements to prevent COVID-19 transmission, the pandemic has resulted in low recruitment of people with CRDs for conventional face-to-face PR. In some LMICs, face-to-face PR programmes have been suspended, thereby imposing an unprecedented barrier to PR access by people with CRDs in LMICs, further to other existing access barriers. Similar COVID-19 impacts on PR access have been reported in high-income countries.42 In response, some LMICs have tried to move to tele-PR, but participants described barriers to this delivery model too, including lack of digital access and literacy.
These barriers have also been reported elsewhere.43,44 On the other hand, participants saw COVID-19 as a facilitator for PR in LMICs in that it serves as a stimulus for stakeholders in LMICs to develop rehabilitative interventions such as PR. The disease primarily affects the respiratory system, and survivors weaned from mechanical ventilation are at a higher risk of developing post-intensive care syndrome (PICS) or long COVID, requiring treatment with PR.45 However, there is currently a limited evidence base for PR in post-acute COVID-19. Such programmes will potentially be wider in scope than current PR programmes1 to meet the needs of these individuals and to address the additional burden placed upon survivors by this unique disease, such as social isolation strategies and the associated emotional burden.46 In addition, survivors may be of a different age group to the "usual" PR population, and supporting a successful return to work will be important.46 Finally, participants in this study described several direct and indirect costs incurred by patients as access barriers to PR in their LMIC, most notably transportation costs associated with long-distance travel to a healthcare facility to access PR. For instance, in Malawi and Uganda, low-income countries in Africa, 84% of each country's population lives in rural areas/villages, located far from the urban areas where higher-level health services are concentrated.47,48 Fifty percent of the Malawi population live within five kilometres of their health centre, a walkable distance for a healthy person, though not necessarily for someone seeking health care.49 As a result, people with CRDs living in rural areas would incur significant transport costs to access a centre-based PR service in an urban area. This potentially encourages people living in rural areas to normalise living with their CRD, discouraging them from seeking remote hospital services including PR. As some participants in this study said, for these people, the benefit of sacrificing their hard-earned income on transport would weigh less than that of sacrificing it on food. Travel and transport have been frequently cited as patient-related barriers to the uptake of centre-based PR programmes elsewhere.50-58 To address this barrier, all participants from Africa in this study suggested the need for community-based PR. This would reduce the need for people with CRDs in remote rural areas to travel significant distances to access PR. This suggestion is not new, as it fits the already existing wider community-based rehabilitation model established in 1978 by the WHO as an approach for social inclusion in resource-constrained settings, focused on working with people with disabilities within their communities.40 The WHO promotes community-based rehabilitation as a service delivery approach in settings with a scarcity of resources.59 Recently, a community-based PR programme conducted in a non-healthcare facility with patients with several CRDs demonstrated positive effects on patients' exercise capacity and health-related quality of life, and a reduction in respiratory-related hospital admissions in the 12 months following the programme.60 Participants in the current study suggested that such community-based PR would be possible with the use of locally available equipment within the communities and would be delivered by locally available non-rehabilitation professionals such as community healthcare workers (ie, through task shifting61).
However, unlike rehabilitation professionals such as physiotherapists, community healthcare workers are, by the nature of their job, not trained in PR. Therefore, PR delivered by community healthcare workers may compromise the professional standards of PR and the safety of patients. Capacity-building in PR for community healthcare workers by physiotherapists would therefore be needed.62

Study Strengths and Limitations

This is the first original multi-country and multi-continental study to explore barriers and enablers to PR in LMICs. The use of in-depth interviews permitted an ever-widening exploration of participants' lived experiences.22 By purposively recruiting healthcare professionals with PR experience in LMICs, the study collected first-hand information on lived barriers and enablers to PR in LMICs. However, only 7 out of 142 LMICs63 were represented in this study, which limits the generalisability of the findings to all LMICs, although data saturation was reached. Furthermore, most participants in this study were physiotherapists (71%), which limits the generalisability of the findings to healthcare professionals of other professions who are also members of the multidisciplinary PR team, including nurses64 and occupational therapists.65 These might have different experiences or perspectives regarding barriers and enablers to PR in LMICs.

Conclusions and Recommendations

From the perspective of the healthcare professionals who participated in this study, barriers to PR in LMICs include limited resources (including a shortage of PR expertise), low awareness or recognition of physiotherapy services including PR, COVID-19, and access costs incurred by patients, including transport costs associated with long-distance travel to a healthcare facility to access PR. Enablers include provision of PR training and resources (ie, funding), local adaptation (including community-based PR), tele-PR in the face of COVID-19, and public awareness campaigns. Successful implementation of these enablers will require engagement with multiple stakeholders, including people with CRDs and their families or caregivers, rehabilitation professionals including physiotherapists, other members of the multidisciplinary team such as doctors, and government authorities including ministries of health. Future studies should evaluate the extent to which implementation of these enablers can improve access to PR and subsequently reduce the CRD burden in LMICs.

Data Sharing Statement

The data analysed in this study are available from the corresponding author upon reasonable request.

Ethical Approval

Ethical approval was granted by the UCL Research Ethics Committee (Project ID: 20465/001). The study was deemed "low risk" and was approved via chair's action. Each participant was given an information sheet and was asked to provide written informed consent, which they reiterated verbally at the start of their interview. Each participant's informed consent included publication of their anonymised responses.

Funding

This work was supported by the National Institute for Health Research (NIHR) (using the UK's Official Development Assistance (ODA) Funding) and Wellcome [221465/Z/20/Z] under the NIHR-Wellcome Partnership for Global Health Research. The views expressed are those of the authors and not necessarily those of Wellcome, the NIHR or the Department of Health and Social Care.

Disclosure

The authors declare no competing interests.
Holocene paleoclimate inferred from stable isotope (δ18O and δ13C) values in Sphagnum cellulose, Mohos peat bog, Romania

We measured stable isotopes (δ18O and δ13C) in Sphagnum cellulose that was extracted from a long peat core drilled in the ombrotrophic Mohos peat bog, Ciomadul Mountain, Romania. The 10-m-long peat profile spans the period from 11,800 cal yr BP to present. The δ18O and δ13C data indicate there were several cooling events and warm periods in the area of the Mohos peat bog during the Holocene. The 8.2-ka cold event, however, was not detected using δ18O and δ13C values. Response of the peat bog to changing environmental conditions was inferred using data on organic matter accumulation, independent of the stable isotope results. All cool periods during the Holocene, whether of short or long duration, were identified as times of reduced organic matter accumulation rate. Similarly, dry periods were also correlated with reduced accumulation rates of organic matter.

Introduction

The stable carbon and oxygen isotope ratios (δ13C and δ18O) of plant organic matter provide valuable information that is frequently used in paleoenvironmental and paleoclimate research. Several studies of the oxygen isotope composition of plant cellulose have established that plant isotope ratios are well correlated with multiple climate factors, in particular temperature, precipitation, moisture source and humidity (Aucour et al. 1996; Ménot-Combes et al. 2002; Wolfe et al. 2007; Moschen et al. 2009; Tillman et al. 2010; Bilali et al. 2013). Paleoclimate information can be determined from the oxygen isotope composition of the water that is used in plant cellulose synthesis, if the plant remains are well preserved. Continuously accumulating peat deposits can be useful archives of past climate information, especially in ombrotrophic peat bogs (Barber and Charman 2003), where peat accumulation may be undisturbed for thousands of years and the source water for the plants is derived entirely from precipitation. Peat bogs play a large role in the global fixation and sequestration of carbon. In peat bogs, carbon accumulates when primary production exceeds organic matter decomposition. This makes peat bogs important ecosystems in the context of climate change, as CO2 is one of the main greenhouse gases, along with methane, and all wetlands, including peatlands, are natural sources of methane (Nisbet et al. 2016; Harenda et al. 2018; Günther et al. 2020). Carbon dioxide, however, has a longer residence time in the atmosphere (20-150 years) than does methane (10 years). Climate variables like temperature and humidity influence both the production and decomposition of Sphagnum (Breeuwer et al. 2008). The oxygen isotope values in Sphagnum tissues reflect the oxygen isotope composition of the water taken up by the plant, as well as isotopic enrichment during evapotranspiration from the Sphagnum surface and isotopic fractionation involved in the biochemical synthesis of cellulose from environmental water (Brenninkmeijer et al. 1982; Moschen et al. 2009; Granath et al. 2018). Therefore, the oxygen isotope composition of cellulose reflects environmental conditions that influence source waters in ombrotrophic bogs, mainly precipitation and evaporation (Moschen et al. 2009; Tillman et al. 2010; Daley et al. 2010).
The δ18O value of precipitation is influenced by temperature, relative humidity, precipitation events, air mass history, precipitation amount, atmospheric circulation, altitude and latitude effects, the form of precipitation (snow, rainfall) and the moisture source region (Dansgaard 1964; Gat and Gonfiantini 1981; Rozanski et al. 1992; Tan 2014). Peat bogs are usually dominated by Sphagnum (Booth and Jackson 2003; Bilali et al. 2013), which is the most abundant peat-forming genus in the middle to high latitudes. Bulk peat and extracted cellulose have different oxygen isotope compositions, so separation of Sphagnum cellulose from the bulk peat is required. Sphagnum cellulose preserves the stable isotope composition acquired during plant growth, and therefore provides important data for paleoclimate studies. Enrichment between source water and Sphagnum cellulose, which occurs during cellulose synthesis, has been described in many studies, and amounts to an excess of 27 ± 3‰ for the heavier isotope (18O) (DeNiro and Epstein 1981; Zanazzi and Mora 2005). The stable carbon isotope ratio of Sphagnum cellulose has the potential to record changes in bog wetness, which can also be related to bog hydrology, and thus to climate variability (Ménot-Combes et al. 2004; Loader et al. 2007; Lamentowicz et al. 2008). Sphagnum species do not have stomata or vascular tissues, and are therefore unable to control water uptake and loss. Hence, water uptake and loss are controlled only by environmental conditions, and the mosses gain and lose water more rapidly than vascular plants. In addition, the stable carbon isotope composition of the cellulose extracted from Sphagnum is highly dependent on water availability. The chloroplasts are surrounded by so-called 'hyaline cells' that function as water reservoirs. The concentration and isotopic composition of CO2 in the chloroplast depend on isotopic discrimination during biochemical fixation of CO2 (Ménot and Burns 2001; Loader et al. 2007). In wet environments, the hyaline cells are filled with water and CO2 diffusion is relatively low, in which case the proportion of fixed 13C increases because diffusion of CO2 from the atmosphere is slower and the pool of CO2 therefore takes longer to be replenished. In dry environments, CO2 diffusion is relatively high and the proportion of fixed 13C decreases (Ménot and Burns 2001; Moschen et al. 2009; Granath et al. 2018). As a consequence, the δ13C of the cellulose is more positive in wet environments and less positive in drier conditions. There are differences in oxygen isotope composition among different plant genera in peat bogs (Moschen et al. 2009), but some studies have shown that there are no significant differences in δ18O among Sphagnum species (Daley et al. 2010; Bilali et al. 2013). In contrast, other studies indicate it is important to separate different Sphagnum species because they fractionate 13C/12C and 18O/16O differently (Tillman et al. 2010; Granath et al. 2018). It has also been shown that there is a stable isotope offset between branches and stems (Tillman et al. 2010). The recommended approach for using the Sphagnum archive to infer paleoclimate therefore involves separating the Sphagnum samples from bulk material, isolating stems and branches, and extracting the cellulose from the separated Sphagnum branch material. Our study focused on a 10-m-long peat core from the Mohos peat bog, Ciomadul Mountain, Romania, as a paleoclimate archive.
We used a single peat core for cellulose extraction and isotope measurements, designed to support reliable conclusions about past climate. Organic matter accumulation in the Mohos peat bog over the past ~12,000 years was examined in conjunction with δ18O- and δ13C-inferred environmental variables, to better understand the response of the peat bog to changing environmental conditions. Descriptions of local and regional patterns of Late Pleistocene-Holocene climate oscillations in the Carpathian-Pannonian region have been presented in numerous papers (Schnitchen et al. 2006; Constantin et al. 2007; Magyari et al. 2009, 2013; Buczkó et al. 2013; Geanta et al. 2014; Haliuc et al. 2016; Longman et al. 2017; Hubay et al. 2018a). There are some disagreements among those studies with respect to paleoclimate interpretations in space and time, and about wet versus dry and cold versus warm periods in the past. These recent studies, which include several detailed local and regional reconstructions of Holocene climate oscillations, provided the motivation to undertake a comprehensive study of paleoclimate in the region and, moreover, to compare findings with environmental conditions on a global scale. We therefore undertook a high-resolution study of paleoenvironmental changes in the region of the Mohos peat bog, using multiple variables in peat deposits that had accumulated continuously over the past ~12,000 years, with a focus on the δ18O and δ13C of cellulose extracted from the accumulated Sphagnum samples.

Study site

The Mohos peat bog (46°08′3.60″N, 25°54′19.43″E, 1050 m altitude) is located in the Eastern Carpathians, in the Ciomadul Massif. The Ciomadul is a single dacitic volcano with two craters, the younger occupied by Lake Saint Ana (Lacul Sfânta Ana) and the older by the Mohos peat bog (Fig. 1). The Lake Saint Ana crater is the result of the last volcanic eruption, when ejected material settled in the area and filled the preexisting crater that contains the Mohos peat bog. The last eruption was dated to between 35 and 27.5 kyr BP (Harangi et al. 2015; Szakács et al. 2015; Karátson et al. 2017). The Mohos bog covers about 65 ha in a crater of approximately 285 ha. There is no water inflow into the bog, so the only water source is precipitation and runoff from the crater. In this sense, the Sphagnum-dominated wetland is classified as an ombrotrophic bog. It has one outflow, Veres spring. The climate is temperate, with 800-1000 mm annual precipitation (Karátson et al. 2013) and a current mean annual temperature of 15°C.

Core collection and sampling

An undisturbed 10-m-long peat core was taken in 2012 using a novel modified coring technique (Hubay et al. 2018b). Each core section was stored in a refrigerator and cut into 2-cm increments before laboratory analysis, except between 7.5 and 6.5 m, where it was sectioned at 1-cm increments to achieve higher temporal resolution in the interval of expected paleoclimate events.

Chronology

Sphagnum samples were taken every 30 cm along the 10-m-long peat core and were chemically prepared for 14C dating. Dry Sphagnum samples for AMS dating were prepared using the modified BABAB (base-acid-base-acid-bleaching) method (Nemec et al. 2010), with the amounts of reagents doubled for peat cellulose preparation. The clean cellulose samples were combusted in sealed tubes and converted first to CO2 and then to graphite, after which they were measured on an EnvironMICADAS accelerator mass spectrometer (Molnár et al. 2013).
Details of the sampling, radiocarbon dating and development of the core chronology are found in Hubay et al. (2018b). All ages were used in the BACON model, and no dates were excluded as outliers. The upper 930 cm of the Mohos peat bog core spans the age range from 11,770 cal yr BP to AD 2012. The age-depth model of the sequence is shown in Fig. 2, and the 95% confidence intervals range from 6 yr at 1 cm depth to 950 yr at 566 cm depth.

Sphagnum cellulose sample preparation and stable isotope analysis

To facilitate complete extraction for stable isotope analysis, bulk material must first be homogenized and sieved. To obtain a Sphagnum sample for stable isotope analysis, it is necessary to separate the Sphagnum component from the mixture of plant remains and isolate Sphagnum branches from the bulk peat. A stacked sieve system was used for isolation, in which 1000-, 560-, 200- and 40-μm mesh sizes were employed, and samples were wet-sieved using deionized water. Sieved material was observed under a microscope. Larger fragments of Sphagnum and other plant fragments were retained by the 1000-μm sieve. Smaller branches, free leaves and stem sections were retained by the 560-μm sieve. The branch samples remaining on the 200-μm sieve were used for stable isotope analysis. Some mineral particles remained on the 40-μm sieve. The peat core shows a continuous record up to the present, consisting of peat with different degrees of humification; all analysed slices were examined manually under the microscope, where individual Sphagnum leaves and stems could be easily identified. The peat core appeared to have been only slightly affected by biological degradation. C/N ratios and δ15N analysis of pure peat samples revealed more about the degree of humification. Two hundred mg of sieved Sphagnum material was used for cellulose extraction, with two replicates made from each cellulose preparation. Every fourth sample was prepared for cellulose, except between 7.5 m and 5.5 m (where the 8.2-ka event was assumed to be) and between 9.3 and 9.0 m, where all samples were prepared for cellulose. Samples that returned outlier stable isotope ratios were re-run, with both the sample preparation and stable isotope measurement steps repeated. In the case of isotope-inferred climate events, measurements were carried out at higher temporal resolution. The extraction of resin was performed prior to the purification of cellulose from organic material, using a 2:1 mixture of chloroform and ethanol for approximately 6 h (Ménot-Combes et al. 2002). The next step was oxidation of lignin using an acidified sodium chlorite solution (Cullen and Grierson 2006; Daley et al. 2010) in an ultrasonic bath at 70°C, followed by removal of hemicellulose in sodium hydroxide in an ultrasonic bath at 80°C (Tillman et al. 2010). As a final step, the samples were rinsed in deionized water (Daley et al. 2010) and subsequently freeze-dried. The term cellulose is used hereafter because α-cellulose is not a defined chemical entity; it is simply a convenient measure of the 'true' cellulose, operationally defined as the fraction insoluble in sodium hydroxide in an ultrasonic bath at 80°C. Oxygen and carbon stable isotope ratio measurements were carried out on the extracted cellulose samples.
Cellulose (0.30 ± 0.02 mg) was weighed into silver capsules, and samples were prepared for oxygen isotope analysis using a Thermo Finnigan TC/EA (temperature conversion elemental analyzer) equipped with a zero-blank autosampler (Kéri et al. 2015). Samples for carbon stable isotope measurements were prepared with a Fisons Instruments NA 1500 NCS elemental analyzer, with 0.65-0.70 mg of cellulose weighed into aluminum capsules and dropped into the autosampler of the elemental analyzer (Major et al. 2018). Both instruments were attached to a Thermo Finnigan Delta PLUS XP continuous-flow isotope ratio mass spectrometer. Results are expressed in conventional delta notation, where the δ18O and δ13C values are relative to the VSMOW and VPDB standards, respectively. We used cellulose reference materials for δ18O from IAEA-C3 and Merck, +32.14 and +28.67‰ relative to VSMOW, respectively (Saurer et al. 1998; Kéri et al. 2015). For δ13C, we used IAEA-C3 cellulose (−24.91‰, VPDB) and an in-house sulfanilamide standard (−26.69‰, VPDB). The δ value is expressed as follows: δ (‰) = (R_sample / R_reference − 1) × 1000, where R is the 18O/16O or 13C/12C ratio of the sample or reference standard. Every cellulose sample was measured at least twice for each stable isotope, and the standard deviations of individual δ13C and δ18O measurements were ±0.2 and ±0.3‰, respectively.

Organic matter concentrations and accumulation rates

To determine the accumulation rate of the organic material, loss on ignition (LOI) was performed on all samples from the 10-m peat core. In each case, 1 cm3 of bulk peat was dried for 24 h at 105°C, weighed, then combusted at 550°C for 4 h and weighed again (Heiri et al. 2001). The weight loss on ignition represents the organic matter content, which can be expressed as a percent of dry mass or in terms of mg/cm3 wet peat. Using the age-depth chronology, the LOI was converted to organic matter accumulation rate.

Stable isotope analysis of recent precipitation and modern Sphagnum samples

A rainwater collector was installed on the saddle between the two Ciomadul craters (Santa Ana and Mohos) in September 2016 to collect monthly precipitation. Paraffin oil was added to the collecting bottles to prevent evaporation. Surface bog water from the Mohos peat bog was collected in May and September 2016, in July 2017 and in May 2018, and was used to compare isotope values in the water with those in recent Sphagnum samples. The collected precipitation and bog water samples were measured for δ18O and δ2H using a laser absorption spectrometer (Los Gatos Research [LGR]). All water samples were measured in triplicate, and measurement precision was better than ±0.15‰ for δ18O and ±1.5‰ for δ2H. Recent Sphagnum samples were taken in May and September 2016, and in July 2017, from the exact place where the bog waters were taken. We used meteorological data from Meteoblue for the area of Băile Tuşnad (Lat: 46.15° N, Long: 25.85° E, Alt: 845 m), the closest meteorological station to the Mohos peat bog (distance: 5 km). Mean annual values for the time span 1985 to 2018 were used, with monthly temperature averages (°C) and monthly precipitation amounts (mm) applied as recent climatological variables. We also used meteorological data from the WMO station Miercurea Ciuc (station identifier: 961; Lat: 46°22′N, Long: 25°44′E, Alt: 661 m), which is 26 km from the Mohos peat bog. Mean annual values were calculated from records spanning 1960 to 2015.
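To make the LOI-to-accumulation-rate conversion concrete, the sketch below shows the bookkeeping in Python. The slice values and the age-depth control points are hypothetical placeholders, not data from the core; a real analysis would interpolate ages from the full BACON output rather than the toy values used here.

```python
import numpy as np

# Hypothetical inputs (illustrative values only, not data from the core):
# top depth of each 2-cm slice (cm) and its OM density from LOI (mg/cm^3).
slice_tops = np.array([0.0, 2.0, 4.0, 6.0])
om_density = np.array([55.0, 60.0, 48.0, 52.0])  # mg OM per cm^3 wet peat
thickness = 2.0  # cm, slice thickness used for most of the core

# Hypothetical age-depth control points (depth cm -> cal yr BP), standing in
# for the BACON model; np.interp gives a piecewise-linear age for any depth.
model_depths = np.array([0.0, 10.0, 100.0])
model_ages = np.array([-62.0, 150.0, 2000.0])  # AD 2012 = -62 cal yr BP

def slice_age(depth_cm):
    """Interpolate calibrated age (cal yr BP) at a given depth."""
    return np.interp(depth_cm, model_depths, model_ages)

# Accumulation rate of a slice (mg/cm^2/yr):
# OM density * thickness gives mg OM per cm^2 of bog surface in the slice;
# dividing by the years the slice spans gives the rate.
ages_top = slice_age(slice_tops)
ages_bottom = slice_age(slice_tops + thickness)
acc_rate = om_density * thickness / (ages_bottom - ages_top)

for d, r in zip(slice_tops, acc_rate):
    print(f"{d:4.0f}-{d + thickness:.0f} cm: {r:5.2f} mg/cm^2/yr")
```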
Recent climate variables at Ciomadul Crater

Oxygen isotope values for recent precipitation collected from the Ciomadul Volcano, and for cellulose extracted from recent Sphagnum samples from the Mohos peat bog, were compared with average temperatures and precipitation amounts. Based on the meteorological data from Băile Tuşnad and Miercurea Ciuc, annual precipitation was about 650 mm. Monthly variation in precipitation throughout the year showed a maximum in June (Table 1). Eleven monthly precipitation samples were collected between October 2016 and August 2017 and measured for δ18O (Table 2). The δ18O values ranged from −19.1 to −2.1‰ (Fig. 3), with the highest δ18O values observed in summer and the lowest values in winter months. The oxygen isotope ratios of recent precipitation were compared with monthly average temperatures for Băile Tuşnad and Miercurea Ciuc (Fig. 4). Lighter isotope values in precipitation can be attributed to colder climatic conditions, and heavier values occur during warmer conditions (Rozanski et al. 1992). There is a strong correlation between monthly temperatures at the Băile Tuşnad and Miercurea Ciuc stations, and those temperatures, in turn, show a strong correlation with the δ18O of collected precipitation (r = 0.82 and r = 0.84, respectively). This strong relation can be used to develop a local calibration of the oxygen isotope thermometer for the Mohos peat bog, i.e. δ18O = 0.566 × T − 14.87. A very similar connection is seen between the combined monthly temperature averages of the two meteorological stations and the δ18O results from the collected precipitation (r = 0.81), with results showing a δ18O increase of 0.55‰ per °C. The peat bog water stable isotope results (Fig. 5) fall on the Local Meteoric Water Line (LMWL) calculated from the precipitation results (Ciomadul), which confirms the precipitation origin of the peat water; there is apparently no isotopic enrichment from evaporation. The bog water isotope values (Table 3) represent the source water that the Sphagnum takes up for cellulose synthesis. There is a direct correlation between the oxygen isotope composition of cellulose and the source water, where the enrichment factor, ε, is given as 27 ± 3‰ (DeNiro and Epstein 1981; Sternberg et al. 1986). The enrichment factor is an indication of the degree of isotopic fractionation between the parent and intermediate compound during a specific synthesis reaction, and it is derived from the fractionation factor α through the relationship ε = 1000(α − 1), where α reflects the ratio of the rate constants for the heavy and light isotopes. In other words, the different rates at which these species react reflect the extent of change expected in the isotopic composition of a particular compound during its reaction. In this case, this means: δ18O_source water = (δ18O_cellulose − 27.4) × 1.274. Thus, the oxygen isotope composition of the cellulose apparently reflects that of the source water, and we can calculate the isotopic composition of the source water using the known enrichment factor. The calculated source-water values generally matched the measured bog water values, confirming that the oxygen isotope composition of cellulose is suitable for inferring environmental variables. In one case, in May 2016, when δ18O_bog water and δ18O_calculated did not match, the isotope composition of the cellulose appears to reflect the value of earlier precipitation, perhaps mixed with older bog water.
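The relations above can be encoded directly. The following Python sketch uses the coefficients exactly as printed in the text (the 0.566‰/°C calibration and the cellulose-to-source-water conversion); it is illustrative only, and the ±3‰ uncertainty on the biosynthetic enrichment propagates into any temperature inferred this way.

```python
def d18o_precip_from_temp(t_celsius: float) -> float:
    """Local Mohos calibration: precipitation d18O (per mil, VSMOW)
    from mean monthly air temperature (deg C)."""
    return 0.566 * t_celsius - 14.87

def temp_from_d18o_precip(d18o: float) -> float:
    """Inverse of the calibration: infer temperature from precipitation d18O."""
    return (d18o + 14.87) / 0.566

def d18o_source_from_cellulose(d18o_cellulose: float) -> float:
    """Back-calculate source-water d18O from Sphagnum cellulose d18O using
    the relation as printed in the text (27.4 per mil enrichment term)."""
    return (d18o_cellulose - 27.4) * 1.274

# Example: a cellulose value of +20 per mil implies a source water of about
# -9.4 per mil, corresponding to roughly +9.6 deg C under this calibration
# (illustrative numbers only).
src = d18o_source_from_cellulose(20.0)
print(src, temp_from_d18o_precip(src))
```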
The δ18O of cellulose sampled in May 2016 did not reflect the isotopic composition of the bog water sampled at the same time, which could indicate that the plant had used earlier precipitation, perhaps mixed with older bog water, as its source.

Table 1 Mean monthly precipitation (mm) and temperature (°C) for Miercurea Ciuc (1961-2015)

Organic matter accumulation during the Holocene

The 12,000-year record of organic matter accumulation rate in the Mohos peat bog can be examined in parallel with the δ18O- and δ13C-inferred environmental variables to determine the response of the peat bog to changing environmental conditions (Fig. 6; Table 4). The accumulation rate of organic matter was highest (48 mg/cm2/yr) at the end of the Younger Dryas, between 11,800 and 11,600 cal yr BP, which represents a transitional phase from a lake to a peat bog (Fig. 6). After 11,600 cal yr BP, the accumulation rate of organic matter decreased continuously to a low of 0.5 mg/cm2/yr ca. 11,100 cal yr BP, after which a short period of higher (8-12 mg/cm2/yr) accumulation rate occurred until 9800 cal yr BP, at which time the bog reached a steady-state phase with a fairly constant organic matter accumulation rate (7-10 mg/cm2/yr) until ca. 7200 cal yr BP. The cellulose stable isotope (δ18O, δ13C) values display a pattern that tracks the organic matter accumulation rate: the isotope values increased markedly after 11,800 cal yr BP, fluctuated somewhat until about 9600 cal yr BP, then also reached a rather steady phase until about 7200 cal yr BP. After 7200 cal yr BP the organic matter accumulation rate decreased (0.8 mg/cm2/yr), but then, between 6800 and 5500 cal yr BP, increased again (8-10 mg/cm2/yr). After 5500 cal yr BP, a minor decrease (6 mg/cm2/yr) occurred until 4700 cal yr BP, and again the δ13C values followed this trend. Between 4750 and 3700 cal yr BP, a continuous decrease of organic matter accumulation rate (to 3 mg/cm2/yr) was observed, a time during which there were fluctuating, but decreasing, stable isotope (δ18O, δ13C) values (4600-3700 cal yr BP), which were associated with drier and somewhat colder climate in the area. From 3700 to 3200 cal yr BP, a continuous increase in peat accumulation rate occurred (22 mg/cm2/yr). A brief peak (40 mg/cm2/yr) in organic matter accumulation rate occurred between 1850 and 1450 cal yr BP (AD 100-550), overlapping in part with more negative δ18O and δ13C values between 1650 and 1200 cal yr BP, which can also be associated with a short cooler and drier period. There was a brief period of high organic matter accumulation rate (47 mg/cm2/yr) about 600 cal yr BP (AD 1350), after which values decreased continuously to about 8 mg/cm2/yr by ca. 150 cal yr BP (AD 1800).

Fig. 6 (caption fragment): GRIP δ18O record (Grootes and Stuiver 1997); grey bands represent colder periods (cold-warm fluctuations) inferred from the Mohos peat core stable oxygen isotope results.

The δ13C values show no significant cyclicity longer than 450 years. A 50-450-year cycle was established by about 2000 cal yr BP, which could be related to dry-humid climate fluctuations, as lower solar activity characterized that time (Lüdecke et al. 2015). The apparent fluctuations may, however, also reflect background noise that cannot be totally excluded. Between 6000 and 3000 cal yr BP, organic matter (OM) accumulation was characterized by strong power spectral density, which indicates a 1000-year periodicity (Fig. 7c).

Fig. 7 Wavelet analysis of: (a) δ18O_cellulose; (b) δ13C_cellulose; (c) organic matter accumulation rate. Areas outlined in black are significant at the 95% confidence level; the shaded areas marked with the dashed line show the cone of influence, outside of which results may not be significant.
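Power spectra of the kind shown in Fig. 7 can be produced with a continuous Morlet wavelet transform once a record has been resampled onto a uniform time step. A minimal sketch using the PyWavelets package follows; the 20-yr step, the scale range and the synthetic series are assumptions for illustration, and the significance test and cone of influence of the published analysis are not reproduced here.

```python
import numpy as np
import pywt

# Hypothetical proxy record resampled to a uniform 20-yr step (illustrative):
dt = 20.0                                  # yr per sample
age = np.arange(0.0, 12000.0, dt)          # cal yr BP
signal = np.sin(2 * np.pi * age / 1000.0)  # synthetic 1000-yr cycle
signal += 0.5 * np.random.default_rng(0).normal(size=age.size)

# Continuous Morlet wavelet transform; pywt reports the pseudo-frequency of
# each scale, which we invert to a period in years.
scales = np.arange(2, 200)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=dt)
power = np.abs(coeffs) ** 2
periods = 1.0 / freqs  # yr

# Identify the period with the highest time-averaged power.
mean_power = power.mean(axis=1)
print(f"dominant period ~ {periods[np.argmax(mean_power)]:.0f} yr")
```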
The last 3000 years of OM accumulation were also characterized by a significant millennial cycle (1000 years). A similar millennial cycle was detected with respect to dust deposits in the Mohos peat bog (Longman et al. 2017), which the authors connected to fluctuations in local-regional dust sources related to the North Atlantic Oscillation (NAO), associated with the end of the African Humid Period. This millennial cycle might also be present in the stable isotope (δ18O, δ13C) records, but such cycles are not significant because of the low number of samples. The millennial cycle in δ13C values between 6000 and 0 cal yr BP is similar to the cyclicity observed in the accumulation rate, indicating a strong connection between the δ13C record and organic matter accumulation rate. Therefore, we infer that peat growth and accumulation were strongly connected to humidity in the Mohos peat bog. This millennial cycle is also suggested by an area of power spectral density in the δ18O values over the last ~3500 years of the Holocene, although it is not significant. Changing environmental conditions, e.g. temperature or humidity, appear to have influenced the production or decomposition of Sphagnum (Breeuwer et al. 2008). In the case of the Mohos peat bog, fluctuations observed in the stable isotope values can be connected to cold/warm or dry/wet climate periods of the past, and could reflect the effects of solar dynamics, particularly as they affected North Atlantic circulation.

Discussion

Local and regional Holocene climate history inferred from peat cellulose isotope records

The continuous record derived from the 10-m core recovered from the Mohos peat bog can be compared to other evidence for climate variability during the last 12 millennia. The deepest part of the core is older than 11,800 cal yr BP, when accumulation of clastic sediments occurred, reflecting a transition between a lake and a bog at the end of the Pleistocene Epoch. This period encompasses the last sharp decline in temperature and marks the end of the Pleistocene (Dansgaard et al. 1993; Alley 2000; Rasmussen et al. 2006). We compared our carbon and oxygen stable isotope data from cellulose in the Mohos peat bog to δ18O values from the Greenland Ice Core Project (GRIP). We found weak positive correlations between δ18O and δ13C in the peat record and δ18O in the Greenland ice, r = 0.43 and r = 0.44, respectively. The correlations are much stronger for specified intervals, e.g. when the period between 11,700 and 8500 cal yr BP is considered (r = 0.64 and r = 0.81, respectively), and even stronger for the shorter interval from 11,700 to 9000 cal yr BP (r = 0.69 and r = 0.89, respectively). Overall, comparison of the δ18O and δ13C records from Mohos bog cellulose and the GRIP δ18O records (Fig. 6) suggests that the climate records from the two sites can be correlated since the end of the Younger Dryas (Dansgaard et al. 1993; Grootes and Stuiver 1997; Alley 2000; Rasmussen et al. 2006). The greatest depth in the peat core where Sphagnum was found dated to 11,800 cal yr BP, which correlates with the onset of the Holocene. Increased temperature and precipitation ca. 11,700 cal yr BP were documented by stable isotope analysis of Romanian speleothems (Tǎmaş et al. 2005; Constantin et al. 2007).
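The interval correlations with GRIP reported above require placing two irregularly sampled series on a common timeline before computing Pearson r. One way to do this is sketched below; the series are synthetic placeholders, not the actual Mohos or GRIP data, and the 100-yr grid step is an assumption.

```python
import numpy as np

def interval_correlation(age_a, val_a, age_b, val_b, t_min, t_max, step=100.0):
    """Pearson r between two proxy series over [t_min, t_max] cal yr BP,
    after linear interpolation onto a common regular grid."""
    grid = np.arange(t_min, t_max, step)
    a = np.interp(grid, age_a, val_a)
    b = np.interp(grid, age_b, val_b)
    return np.corrcoef(a, b)[0, 1]

# Placeholder series (monotonically increasing ages, synthetic values):
age_peat = np.linspace(0, 11800, 300)
d18o_peat = np.sin(age_peat / 800.0) + np.random.default_rng(1).normal(0, 0.3, 300)
age_grip = np.linspace(0, 11800, 1200)
d18o_grip = np.sin(age_grip / 800.0) + np.random.default_rng(2).normal(0, 0.3, 1200)

# Whole-Holocene r versus an early-Holocene window, mirroring the comparison
# in the text (values here are synthetic and only illustrate the mechanics).
print(interval_correlation(age_peat, d18o_peat, age_grip, d18o_grip, 0, 11800))
print(interval_correlation(age_peat, d18o_peat, age_grip, d18o_grip, 9000, 11700))
```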
Coeval changes in other climate proxy records (pollen, diatoms, chironomids, and geochemical composition) were detected in lake sediments from the Southern Carpathians (Pál et al. 2018). Although it has been shown that a much warmer period began after the Younger Dryas, there were also climate shifts during the Holocene, but of smaller amplitude (Mayewski et al. 2004; Wanner et al. 2011). Shortly after the Late Glacial-Holocene transition, evidence of intense, short-term climate fluctuations is preserved in lake sediments and speleothems. In general, these short periods span 100-300 years, but their durations and characterization are affected by the accuracy of different age-depth models. Therefore, it is difficult to identify local and regional differences in climate from this period. Hence, it is important to identify paleoclimate events from many different areas using as many climate proxies as possible. Changes detected in stable isotope records are of similar scale, and changes are also observed in many other proxy climate records, e.g. in pollen, chironomid, diatom and geochemical stratigraphies (Braun et al. 2013; Tóth et al. 2018). Based on the stable isotope results from the Mohos peat bog, there were three cool periods (11,250-11,000, 10,950-10,450 and 10,200-9750 cal yr BP) during the early Holocene, when continuous, but weakening, dry-wet climate fluctuations occurred. The first colder period is linked to the Preboreal Oscillation (PBO, 11,300-11,150 yr BP; Bjorck et al. 1996), with maximum cooling at 11,250 yr BP (Fisher et al. 2002; Magny 2007). The PBO has been identified in many Central European lake records, which display evidence for higher lake levels between 11,250 and 11,050 yr BP (Magny 2004, 2007). Numerous studies, however, indicate that these cold events did not occur simultaneously in different areas. A significant decline was detected in the Greenland ice core δ18O records around 11,400 yr BP (Rasmussen et al. 2006, 2007, 2014). According to the Poleva Cave (Romania) oxygen isotope record, cooling occurred earlier, between 11,400 and 11,200 yr BP (Constantin et al. 2007). Also, in the Southern Carpathian Mountains (Retezat Mts.), vegetation changes began slightly earlier, between 11,500 and 11,300 yr BP on the southern slope, and from 11,400 to 11,100 yr BP on the northern slope of this mountain range (Pál et al. 2018). Lower diatom productivity indicates the PBO occurred between 11,500 and 11,300 yr BP in the Retezat Mts. In addition, a chironomid-based mean July temperature reconstruction from the Retezat Mts. showed that mean temperature declined between 11,480 and 11,390 cal yr BP, at 10,200 cal yr BP, and again between 9800 and 9700 cal yr BP (Tóth et al. 2015). At Lake Suchar Wielki (Poland), the first cooling occurred ca. 11,300-11,150 cal yr BP based on palynological records, and inferred water levels suggest colder and wetter conditions at the time in this area (Fiłoc et al. 2018). We inferred a second cooling event from our stable isotope records, between 10,950 and 10,450 cal yr BP. Slightly later ecosystem changes were described from the Southern Carpathians, ca. 10,500 and 10,300 cal yr BP (Pál et al. 2018), and δ18O in speleothems from northwest Romania recorded similar cooling periods from 11,000 to 10,700 and from 10,500 to 10,200 cal yr BP (Tǎmaş et al. 2005). A cooling event at 10,300 cal yr BP was also recorded in Poleva Cave (Constantin et al. 2007).
After ca. 9000 cal yr BP, stable isotope records indicate rather stable warm conditions, which coincide with the Holocene Climatic Optimum, roughly 9000-5500 yr BP (Andersen et al. 2004). A dramatic cooling event described from many areas at 8200 cal yr BP is the most widely detected climate event of the Holocene Epoch (Alley et al. 1997; Thomas et al. 2007). The event appears as a significant decrease in δ18O values in the Greenland ice cores (Rasmussen et al. 2014). The 8.2-ka cold event (Alley et al. 1997; Thomas et al. 2007), however, is not observed in the Mohos peat core δ18O and δ13C results. The strongest evidence for this Early Holocene event comes from the North Atlantic region. Whereas the disruption in climate appears clearly in the Greenland ice cores, as well as in sediment and other records from the temperate and tropical North Atlantic, it is less evident in this region. Few climate modeling experiments have addressed the 8.2-ka event, and most studies of proxy records across this event lack the time resolution to fully characterize the anomalies (Constantin et al. 2007; Magyari et al. 2009; Dragusin et al. 2014). The lack of evidence in the Mohos peat bog for this abrupt decrease in global temperature at 8.2 ka suggests that this event had little impact at this particular location. Longman et al. (2017) described an ~8200-BP event in the Eastern Carpathians based on testate amoebae and a dust record that indicates Saharan desertification. At nearby Lake Saint Ana, there is evidence for a decrease in water depth ca. 9000 cal yr BP and poor diatom preservation beginning ca. 8600 cal yr BP (Magyari et al. 2009). Surprisingly, this apparently global climate event is not detectable in many speleothem δ18O records from the Carpathians (Onac et al. 2002; Tǎmaş et al. 2005; Constantin et al. 2007; Dragusin et al. 2014), despite the fact that pollen (Feurdean et al. 2007) and other climate proxies suggest climate changes during this period. After 7200 cal yr BP, the peat accumulation rate in the Mohos bog declined, but increased again between 6800 and 5500 cal yr BP, coincident with the Holocene Climatic Optimum (Andersen et al. 2004). The stable isotope (δ18O, δ13C) values indicate a somewhat colder and drier period between 7200 and 6700 cal yr BP; however, the dry conditions persisted longer, until 5500 cal yr BP, i.e. the end of the Holocene Climatic Optimum (Kalis et al. 2003). This dry period coincided with reduced peat accumulation in the Mohos peat bog between 7300 and 6000 cal yr BP, as discussed by Hubay et al. (2018a, b). From the same peat bog, Longman et al. (2017) reported a very dry period between 7400 and 6600 cal yr BP, based on testate amoebae. Furthermore, a dry phase was detected in mire surface conditions between 7550 and 4500 cal yr BP (Cristea et al. 2014). From 5500 to 4500 cal yr BP, climate displayed little variation, and there were no significant environmental changes in the area. An increase in water level at nearby Lake Saint Ana, ca. 5350 cal yr BP, however, was inferred from pollen, but both of these observations are in contrast with NW Romanian records (Feurdean et al. 2008), which indicated warmer and possibly drier conditions between 5500 and 3200 cal yr BP. Fluctuating, but decreasing, temperature was inferred from less positive δ18O values between 4500 and 2600 cal yr BP. The interval is marked by a short, but strong, dry and cold period between 3100 and 2850 yr BP, which corresponds to the Neoglacial (3300-2500 cal yr BP; Wang et al. 2012).
That interval is widely recognized as the period of the Bronze Age in Europe (Wanner et al. 2011). Longman et al. (2017) recorded the Bronze Age through dust events in the Mohos peat record. A cold period at 4200 cal yr BP was also detected in a stalagmite from Poleva Cave (Constantin et al. 2007). This period can be characterized as having an overall wet climate in this area (Magyari et al. 2001). Many studies from the Northern Hemisphere describe this period as cold and dry. Nevertheless, pollen spectra and stalagmites from Romanian records suggest the climate was cool and humid (Magyari et al. 2001; Constantin et al. 2007; Dragusin et al. 2014). During the Roman Warm Period, between 2500 and 1600 cal yr BP (Wang et al. 2012), δ18O values in the Mohos record were more or less stable, and an increasingly dry climate can be inferred from the δ13C values. Other European and North Atlantic records suggest this period was unusually warm (Desprat et al. 2003; Cristea et al. 2014). A period of lower-magnitude cooling and drying occurred between 1600 and 1200 cal yr BP, revealed by more negative stable isotope values. A similar dry period occurred ca. 1430 cal yr BP in the Maramures Mountains (Cristea et al. 2014). Between 1000 and 700 cal yr BP (AD 950 to 1250), relatively higher δ18O values in the Mohos record led to an inference of slight warming, associated with the Medieval Warm Period (Mann et al. 2009), which was also recognized by Longman et al. (2017) using the dust record in the Mohos peat bog. Between 600 and 150 cal yr BP (AD 1350-1800), a period of cooling after the Medieval Warm Period can be inferred, and can be correlated with the Little Ice Age (LIA) (Mann et al. 2009). Evidence for this colder period has been found at many sites across Romania, in the form of tree-ring (Popa and Kern 2009) and pollen records (Fârcaş et al. 2013; Geanta et al. 2014). The LIA is well documented in sediment from the Tatra Mountain lake Toporowy Staw Niżni, Southern Poland (Gasiorowski and Sienkiewicz 2010). A rapid increase in organic matter accumulation rate was detected about 600 cal yr BP (AD 1350), after which it decreased continuously to 150 cal yr BP (AD 1800), marking a period of slight cooling that can also be linked to the Little Ice Age (Mann et al. 2009). A decline in stable isotope values during that time reflects the somewhat colder and drier climate. In summary, stable isotope values in cellulose from the Mohos bog displayed smaller-amplitude fluctuations during the Holocene Epoch, compared to the terminal Pleistocene, but nevertheless indicate that climate was not uniformly warm and stable throughout the Holocene, as has been proposed. Results from the Mohos area display correlations with Holocene paleoclimate changes inferred from other studies in the Carpathian region. The assumed global 8.2-ka event, however, was not observed in the Mohos peat core δ18O and δ13C values. This does not mean that it did not occur in the area. It may simply be that our core sampling resolution was not high enough to capture what may have been a relatively short-lived climate event.

Conclusions

A 10-m core from the ombrotrophic Mohos peat bog was analyzed for stable isotopes (δ13C and δ18O) in Sphagnum cellulose and organic matter accumulation rate. The core spans the last 12,000 years and thus provided a record of changes between warm-cold and dry-wet conditions, from the late Pleistocene to present.
After 11,800 cal yr BP, which corresponds to the end of the cold Younger Dryas, there was a rapid transition into the warmer Holocene. Although the Holocene was not characterized by large-magnitude shifts in climate conditions, there were some climate fluctuations that correspond to oscillations identified in Europe and elsewhere around the globe. Several cool (11,250-11,000; 10,950-10,450; 10,200-9750; 7200-6700; 3100-2900; 1600-1200; 600-150 cal yr BP) and warm intervals (9000-5500; 4500-2600; 2500-1600; 1000-700 cal yr BP) were inferred from the Mohos record. A notable exception is that the important 8.2-ka event did not appear in the Mohos peat core δ18O and δ13C results.

Organic matter accumulation in the Mohos peat bog was examined in conjunction with δ18O and δ13C values in Sphagnum cellulose to elucidate the response of the peat bog to changing environmental conditions. Cyclicity within the proxy records was investigated using wavelet analysis, and a 1000-year cycle was detected in the organic matter accumulation rate data and in the δ13C record of the last 6000 years. Overall, all short or long cooling periods in the Holocene Epoch are correlated with short periods of lower organic matter accumulation in the Mohos peat bog. Similarly, drier periods were also linked to lower peat accumulation rates. Presumably, warmer and wetter environmental conditions favor Sphagnum growth. There is a strong positive correlation between peat accumulation rate and the δ13C and δ18O values, indicating that isotope ratios in the Sphagnum are controlled by the same climate factors (humid-dry and cold-warm) that influence the growth of peat. We treat the timing of climate changes cautiously, as it is very difficult to achieve sufficient resolution to identify climate events of <100-yr duration.

In the context of ongoing anthropogenic climate change, peat bogs are important environmental sinks for fixed (organic) carbon (Harenda et al. 2018). Climate change in the future may have large consequences for peat bog carbon storage, as climate variables like temperature and humidity influence both the production and decomposition of Sphagnum (Breeuwer et al. 2008).
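To make the wavelet step concrete, the sketch below applies a continuous wavelet transform to a synthetic proxy series containing a built-in 1000-year cycle. It is a minimal illustration only: the sample spacing, record length, noise level and the choice of the PyWavelets library are all assumptions, not the workflow actually used for the Mohos record.

```python
# Illustrative sketch only: a continuous wavelet transform (PyWavelets) applied
# to a synthetic proxy series with a built-in 1000-yr cycle. The sample spacing
# and noise level are assumed; nothing here reproduces the Mohos data.
import numpy as np
import pywt

dt = 20.0                                   # assumed sample spacing in years
t = np.arange(0.0, 6000.0, dt)              # a 6000-yr-long stand-in record
rng = np.random.default_rng(0)
proxy = np.sin(2 * np.pi * t / 1000.0) + 0.5 * rng.standard_normal(t.size)

scales = np.arange(2, 128)                  # wavelet scales to probe, in samples
coefs, freqs = pywt.cwt(proxy, scales, "morl", sampling_period=dt)
power = (np.abs(coefs) ** 2).mean(axis=1)   # time-averaged wavelet power per scale
periods = 1.0 / freqs                       # convert frequency (1/yr) to period (yr)

print(f"dominant period: ~{periods[np.argmax(power)]:.0f} yr")  # expect ~1000 yr
```

On real, unevenly sampled proxy data an interpolation step (or a Lomb-Scargle periodogram) would be needed first; the principle of locating the spectral power maximum is the same.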
The therapeutic role of γδT cells in TNBC

Triple-negative breast cancer (TNBC) is a subtype of breast cancer that presents significant therapeutic challenges due to the absence of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) expression. As a result, conventional hormonal and targeted therapies are largely ineffective, underscoring the urgent need for novel treatment strategies. γδT cells, known for their robust anti-tumor properties, show considerable potential in TNBC treatment as they can identify and eliminate tumor cells without reliance on MHC restrictions. These cells demonstrate extensive proliferation both in vitro and in vivo, and can directly target tumors through cytotoxic effects or indirectly by promoting other immune responses. Studies suggest that expansion and adoptive transfer strategies targeting the Vδ2 and Vδ1 γδT cell subtypes have shown promise in preclinical TNBC models. This review compiles and discusses the existing literature on the primary subgroups of γδT cells, their roles in cancer therapy and their contributions to tumor cell cytotoxicity and immune modulation, and proposes potential strategies for future γδT cell-based immunotherapies in TNBC.

Introduction

Triple-negative breast cancer (TNBC) is a subtype of breast cancer distinguished by negative expression of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2), comprising approximately 15-20% of all breast cancer cases (1). TNBC is notably aggressive, with a high rate of recurrence and metastasis, leading to poor prognoses and significant impacts on women's physical and mental health (2-4). Its characteristics, including a high mutation rate, extensive T cell infiltration, and elevated expression of programmed death-ligand 1 (PD-L1), make it a focal point in immunotherapy research (5, 6). While immune checkpoint inhibitors (ICIs) have shown some effectiveness in treating TNBC (7), most patients with advanced disease do not respond well, complicating the search for new therapeutic targets (8). Currently, tumor-infiltrating lymphocytes (TILs), including subsets of helper CD4+ cells, B cells, NK cells, γδT cells, and myeloid cells, are considered crucial biomarkers in tumor immunotherapy (9). Notably, γδT cell infiltration is regarded as the most favorable prognostic indicator (10). γδT cells originate from hematopoietic stem cells (HSCs) within the bone marrow and hold significant promise in the field of tumor immunotherapy. γδT cells are instrumental in tumor immunotherapy as they recognize and destroy tumor cells independently of specific antigen stimulation, unlike αβ T cells, which are part of adaptive immune responses (11). γδT cells serve as a crucial link between innate and adaptive immunity, functioning as the frontline defense against tumors and playing pivotal roles in tumor progression (12, 13). These cells possess innate-like receptors enabling rapid responses to diverse pathogens, facilitating early immune defense even in the absence of prior antigen exposure (14). Furthermore, γδT cells actively participate in tissue surveillance at barrier sites, contributing significantly to the maintenance of tissue homeostasis (15, 16). Depending on the tumor microenvironment, different γδT cell subsets can display either anti-tumor or pro-tumor activities. Predominantly, γδT cells eliminate tumor cells by recognizing tumor-associated antigens via their T cell receptors (TCR)
and can also augment the anti-tumor efficacy of other immune cells by secreting cytokines or expressing co-stimulatory molecules (17). These characteristics also facilitate their integration into combination therapies, including chemotherapy, radiotherapy, or other immunotherapies, to enhance treatment outcomes. Although γδT cells have demonstrated therapeutic potential in various cancers, their role and effectiveness in treating TNBC remain in the exploratory stage. This article reviews the current research on γδT cells in TNBC treatment, discusses their possible therapeutic mechanisms, and examines the integration of this unique immune cell type into existing treatment paradigms, offering new hope for TNBC patients. By extensively analyzing the biological characteristics of γδT cells, their molecular interactions with TNBC, and the latest developments in preclinical and clinical research, we can enhance our understanding of this strategy's potential and challenges, thus paving the way for future research and the formulation of new treatment strategies.

2 Overview of γδT cells

2.1 Origin and distribution of γδT cells

T lymphocytes originate from pluripotent stem cells in the bone marrow. During embryonic and neonatal stages, some pluripotent or pre-T cells migrate to the thymus where, under the influence of thymic hormones, they differentiate and mature into immunologically active T cells. These mature T cells are then distributed to thymus-dependent areas of peripheral immune organs through the bloodstream and can recirculate through lymphatic vessels, peripheral blood, and tissue fluid, performing cellular immunity and immune regulation functions (18, 19). γδT cells, unique innate immune cells characterized by the expression of the γδ heterodimer T cell receptor, are relatively rare, constituting only 1% to 5% of peripheral blood T lymphocytes, and are primarily found in mucosal tissues such as the skin, respiratory tract, digestive tract, and uterus (20). Human γδT cells originate in the medulla of the normal fetal thymus at 7-8 weeks, undergoing a developmental process similar to αβT cells, which includes functional TCR expression and negative selection to achieve self-tolerance. Unlike αβT cells, some γδT cells do not undergo positive selection, making them unrestricted by the major histocompatibility complex (MHC) in their antigen recognition and killing capabilities (21, 22). Various functional characteristics of γδT cells start to form in the thymus and gradually mature in the periphery. In the thymus, precursor cells differentiate into γδTCR+ thymocytes, which then exit to join the peripheral circulation as circulating γδT cells. These cells enter peripheral lymphoid organs and continue to develop under the influence of various hormones released by the thymus until they acquire the capabilities of mature immune cells (23, 24).
2.2 Genetic characteristics of γδT cells

T cells are classified into αβT cells and γδT cells based on the type of TCR expressed. γδT cells are a subpopulation of T cells characterized by the γ and δ chains of their T cell receptor, comprising 0.5-5% of all T cells. Unlike αβ T cells, which rely on recognition of target antigens presented on MHC molecules by the αβ TCR to develop and function, γδT cells operate in an MHC-independent manner. αβ T cells differentiate into effector cells upon recognizing peptide-MHC (pMHC) complexes, enabling cytotoxic activity or cytokine production to defend against pathogens and tumors (22). In contrast, γδT cells do not require antigen processing and presentation by antigen-presenting cells (APCs) for activation, allowing for rapid early immune responses (25). γδT cell effector functions are activated by TCRs and natural killer receptors (NKRs) in response to stress-induced self-ligands (26). Moreover, similar to conventional αβ T cells, γδT cells can differentiate into various effector profiles and produce different chemokines and cytokines, including IFN-γ, TNF-α, IL-17, IL-21, and IL-22 (27). Additionally, human γδT cells may possess antigen-presenting capabilities; for instance, blood Vγ9Vδ2 T cells can respond to microbial and tumor signals and prime CD4+ and CD8+ T cells, akin to dendritic cells (DCs) (28).

2.3 Classification and biological characteristics of γδT cells

γδT cells are primarily classified into three main subgroups based on the TCRδ chain: Vδ1 T cells, Vδ2 T cells, and Vδ3 T cells (29, 30). Vδ1 and Vδ3 T cells are predominantly found in mucosal and tissue environments, such as the skin, intestines, liver, and spleen. Vδ1 T cells demonstrate significant anti-tumor effects in conditions such as colorectal cancer, multiple myeloma, and chronic lymphocytic leukemia (31, 32), yet they can also exert potent immunosuppressive effects when infiltrating tumors (33). The role of Vδ3 T cells in tumors remains less understood (34). Vδ2 T cells, the most abundant subgroup, represent 50-90% of all γδT cells in peripheral blood and typically pair with the Vγ9 TCR to form Vγ9Vδ2 T cells, which are frequently utilized in clinical settings. γδT cells exhibit robust anti-tumor activities, making them valuable in adoptive immunotherapy for cancers. They are also capable of eliminating cancer stem cells in various tumors, including colon cancer, ovarian cancer, and neuroblastoma (35-38).

3 Recognition pathways and functions of γδT cells

3.1 T cell receptor-mediated recognition pathway

γδT cells identify various antigens through a TCR-dependent mechanism to detect and activate against tumor cells (Figure 1). Specifically, the Vγ9Vδ2 TCR recognizes non-peptide phosphoantigens (P-Ag) such as the microbial-derived (E)-4-hydroxy-3-methyl-but-2-enyl pyrophosphate (HMBPP), an intermediate in the non-mevalonate pathway of isoprenoid biosynthesis, which is a potent activator of γδT cells (44, 45). Similarly, host-derived isopentenyl pyrophosphate (IPP) acts as a P-Ag and stimulates Vγ9Vδ2 T cell responses. Bisphosphonates inhibit farnesyl pyrophosphate synthase (FPPS) in the isoprenoid biosynthesis pathway in target cells or APCs, causing IPP and its metabolites to accumulate and be targeted by Vγ9Vδ2 T cells (46, 47). Mookerjee-Basu et al. and Scotet et al.
have documented that tumors sensitive to Vγ9Vδ2 cells display and interact with a complex similar to mitochondrial ATP synthase, specifically F1-ATPase (48, 49). The activation of Vγ9Vδ2 T cells is further enhanced in the presence of apolipoprotein A-I (apoA-I), as F1-ATPase can complex with apoA-I to present phosphoantigens recognizable by the Vγ9Vδ2 TCR (50). Research indicates that detecting P-Ag in target cells necessitates a surface protein with intracellular and extracellular domains, specifically butyrophilin 3A1 (BTN3A1, or CD277), which binds HMBPP at its intracellular B30.2 domain. This binding induces an extracellular conformational change, facilitating the recognition of target cells by Vγ9Vδ2 T cells through inside-out signaling (51, 52). Additionally, few ligands for the Vδ1 TCR have been identified, with lipid antigens presented by the MHC-like molecule CD1d binding to the Vδ1 TCR (Figure 1), a relationship elucidated by the crystal structure of the Vδ1 TCR with CD1d-sulfatide (53). This capability of γδT cells to recognize tumor cells via the TCR underscores their integral role in both innate and adaptive immunity.

3.2 NK cell receptor pathway

The mechanisms by which γδT cells identify tumor cells are not limited to TCR interactions but also include their reliance on NKRs (54) (Figure 1). These cells primarily identify tumor cells through NKRs such as NKG2D, DNAM-1, NKp30, NKp44, and NKp46, which bind to specific ligands present on tumor cells (55). NKRs not only regulate the activation and function of NK cells but also facilitate immune surveillance by γδT cells, enabling the distinction between transformed and infected cells. NKG2D, a C-type lectin receptor, binds ligands that are typically absent in most normal tissues but are overexpressed on tumor cells, thereby enabling γδT cells to recognize and eliminate tumor cells (56). Identified ligands for NKG2D in human cells include MHC class I chain-related proteins (MICA/MICB) and six UL16-binding proteins (ULBP1-6) (57). Contrary to previous beliefs that natural cytotoxicity-triggering receptors (NCRs) were exclusive to NK cells, recent data reveal their presence in T cells and NK-like cells (58, 59). Although Vδ1+ and Vδ2+ cells naturally lack NCR expression, it can be selectively enhanced in Vδ1+ cells through AKT-dependent signaling triggered by γc cytokines (IL-2 or IL-15) and TCR stimulation (60). The NCRs expressed in Vδ1+ γδT cells (Figure 1), predominantly NKp44 and NKp30, endow these cells with heightened abilities for targeted cytotoxicity against tumor cells and for secreting IFN-γ (55, 61).
3.3 CD16 pathway

One pathway through which Vγ9Vδ2 T cells exert their anti-tumor effects involves CD16, also known as FcγRIII (Figure 1), a low-affinity type III receptor that specifically binds to the Fc portion of immunoglobulin G (IgG) (62). This receptor mediates antibody-dependent cellular cytotoxicity (ADCC) and cytokine production, including TNF-α (63). Research indicates that Vγ9Vδ2 T cells treated with zoledronic acid and IL-2 can express CD16 (64). The interaction of CD16 with the Fc portion enables Vγ9Vδ2 T cells to detect and destroy IgG-coated tumor cells through ADCC activation (65, 66). Depending on the presence or absence of CD16 expression, Vγ9Vδ2 T cells are classified into two types: CD16− and CD16+. The CD16− subset produces higher cytokine levels, expresses fewer killer inhibitory receptors (KIRs), and exhibits lower cytotoxicity, while the CD16+ subset has higher KIR levels and significant direct cytotoxic capabilities (67). The presence of CD16 enhances the recognition capabilities of Vγ9Vδ2 T cells against IgG-coated tumor cells, particularly in the CD16+ subgroup, which shows enhanced direct cell-killing ability.

4 Anti-tumor effects of γδT cells

γδT cells have the unique capability to recognize and destroy tumor cells without relying on traditional antigen presentation mechanisms, which is particularly advantageous in targeting tumors like TNBC that lack specific antigen presentation. γδT cells can directly lyse tumor cells via two independent pathways (Figure 2): firstly, by secreting perforin and granzymes (68, 69); secondly, by inducing cell death through the Fas/FasL pathway and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) (70, 71). On their surface, γδT cells express FasL, which can trigger programmed apoptosis in tumor cells by forming a Fas trimer upon binding to Fas (72). This interaction leads to the activation of the death effector domain (DED) and Fas-associated death domain-containing protein (FADD), subsequently activating downstream caspases that result in cellular destruction and death (73-75). Similarly, TRAIL induces apoptosis via caspase activation through FADD (71, 75). γδT cells also indirectly exert anti-tumor effects by activating other immune cells, and they possess antigen-presenting capabilities that stimulate the activation and proliferation of αβT cells (76, 77) (Figure 2). They promote the proliferation and differentiation of CD8+ T cells and regulate TNF-α and IFN-γ secretion, enhancing tumor clearance rates (78, 79). Additionally, research by Bansal et al. found that γδT cells expressing CXCL13 and CXCR5 demonstrate follicular helper T cell (Tfh) characteristics, particularly in the presence of IL-21, which aids B cell support (80, 81). Caccamo et al. discovered that Vδ2 γδT cells can activate B cells (Figure 2), leading them to produce significant amounts of immunoglobulins even without antigen stimulation, and facilitate the formation of germinal centers (82, 83). Moreover, Maniar et al. showed that zoledronic acid-activated γδT cells enhance NK cell-mediated cytotoxicity (Figure 2) against tumor cells, a process reliant on the interaction between the CD137 ligand on γδT cells and CD137 on NK cells (84). Furthermore, γδT cells can modulate the production of cytokines such as interferon-γ by NK cells, influenced by DC-like cells (85).
The interaction between γδT cells and DCs is reciprocal (Figure 2). γδT cells facilitate the maturation of DCs, while mature DCs trigger the activation and proliferation of γδT cells, thereby enhancing their anti-tumor capabilities (86). Typically, Fas/FasL signaling mediates cell apoptosis; notably, however, DCs express high levels of the Fas inhibitor cFLIP. Elevated cFLIP levels convert the pro-apoptotic signal of Fas into one that promotes dendritic cell maturation and function (87). Additionally, activated γδT cells secrete IFN-γ and TNF-α, stimulating the expression of CD86 and MHC-I molecules on DC surfaces, which supports further DC maturation (88). Conversely, mature DCs boost γδT cell proliferation and augment their cytotoxic and immunoregulatory abilities by secreting cytokines such as IL-1β, IL-12, IL-18, IFN-γ, and TNF-α (89). DCs also facilitate contact-dependent activation of γδT cells through CD86-CD28 interactions and the expression of specific γδT cell activation ligands, like P-Ag (90). These intricate interactions underscore the versatile role of γδT cells in modulating anti-tumor immune responses.

FIGURE 2 Antitumor and protumor functions of γδT cells. γδT cells exert antitumor effects through various pathways, involving both direct and indirect mechanisms. Direct antitumor effects include lysing the tumor via the perforin-granzyme pathway, Fas/FasL pathway, and TRAIL signaling. Furthermore, they indirectly hinder tumor growth by promoting DC maturation, inducing B cell activation, enhancing αβT cell activation and proliferation, and augmenting NK cell-mediated cytotoxicity against malignant cells. However, it is noteworthy that γδT cells can differentiate into γδTreg cells and γδT17 cells, promoting tumor growth. Specifically, γδTreg cells secrete IL-10 and TGF-β to suppress αβT cell and NK cell responses to tumors while concurrently inhibiting DC maturation. γδT17 cells promote tumorigenesis by secreting IL-17 to recruit MDSCs and stimulate the production of VEGF, IL-6, and IL-8, inducing angiogenesis and expediting tumor growth.

5 The pro-tumor function of γδT cells

While γδT cells serve an effector role in anti-tumor immune responses, their function is frequently inhibited by the tumor microenvironment (TME), which may even lead these cells to differentiate into subgroups that foster tumor progression. γδT cells potentially regulate tumor immune responses. In breast cancer, Vδ1+ γδT cells infiltrate and inhibit the proliferation of naive T cells and the functionality of effector CD4+ and CD8+ T cells. They also suppress the maturation of DCs (Figure 2) and the proliferation of anti-tumor Vδ2+ cells, fostering an immunosuppressive environment (100). Furthermore, studies
have shown that γδT cells can differentiate into FoxP3+ γδTreg cells under the influence of γδTCR monoclonal antibodies and TGF-β (101). These γδTreg cells (Figure 2), similar to traditional Treg cells, secrete inhibitory cytokines such as IL-10 and TGF-β, which suppress the immune response of αβT cells and NK cells against tumors. Specifically, breast cancer-derived γδTreg cells can induce immunosenescence in naive T cells, effector T cells, and DCs. They also inhibit the proliferation of T cells from human peripheral blood mononuclear cells (PBMCs) (102), and by inducing CD86/CTLA-4 and PD-L1/PD-1 interactions, they alter the tumor environment's structure and reduce effector T cell activity (103). It is clear that the TME influences the phenotype and function of γδT cells, with γδT17 cells exacerbating inflammation, promoting angiogenesis, and recruiting MDSCs and other inhibitory cells, thereby enhancing tumor progression. Correcting the TME and reversing the negative regulatory effects of tumor-infiltrating γδT cells are critical for leveraging γδT cells in cancer immunotherapy.

6 The role of γδT cells in TNBC

γδT cells, as crucial components of TILs, play a significant role in regulating tumor immune responses. Notably, in TNBC, research by Chabab et al. has revealed that γδT cell infiltration often exceeds that in other breast cancer (BC) types, a phenomenon closely linked to TNBC's higher mutation rate (104). Multiple studies suggest that the presence of TILs in TNBC often correlates with a more favorable prognosis (105, 106). Wu et al. have shown that the abundance of Vδ1+ γδT cells, rather than the total number of γδT cells, is critical in determining the treatment response in TNBC patients. These infiltrating Vδ1+ T cells, with their cytotoxic capabilities and ability to produce IFN-γ, operate through an intrinsic mechanism as they respond to MICA and the cytokines IL-12 and IL-18. Further research indicates that the density of Vδ1+ T cells positively correlates with patients' progression-free survival (PFS) and overall survival (OS) (107). Craven et al.'s data analysis also supports this finding, linking γδTILs with prolonged OS (108). Conversely, Janssen et al. highlight that in TNBC, Vδ2+ T cells are the predominant γδTIL subgroup, actively contributing to anti-tumor effects by secreting IFN-γ and TNF-α. Their studies also reveal that γδTILs are not sources of IL-17 in TNBC, unlike in colorectal cancer, suggesting that TNBC's γδTILs may not depend on IL-17 for promoting tumor growth (109). Moreover, evidence suggests that γδT cells may facilitate breast tumor development through their immunoregulatory functions, correlating with poor prognosis in breast cancer. Within the TME of human breast cancer, a minority (<20%) of Vδ1 T cells that express CD73 and can produce IL-10, adenosine, and IL-8 exhibit immunosuppressive effects (110). These findings suggest that while some γδT cell subgroups are immunosuppressive, their impact is often masked by those with anti-tumor activity. Consequently, further investigation into the role of γδT cells in TNBC or its subtypes, and their influence on disease progression and treatment responses, is critically important.
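As a rough sketch of how such density-survival associations are usually tested, the snippet below runs a log-rank comparison of overall survival between hypothetical high- and low-infiltration groups. All numbers are synthetic, the group sizes and survival scales are invented, and the lifelines package is just one possible toolkit; nothing here reproduces the cited studies' data.

```python
# Hypothetical example: log-rank comparison of overall survival between
# high vs. low Vδ1+ infiltration groups. All values are synthetic stand-ins.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
t_high = rng.exponential(scale=60.0, size=80)   # months; assumed longer survival
t_low = rng.exponential(scale=35.0, size=80)    # months; assumed shorter survival
e_high = rng.integers(0, 2, size=80)            # 1 = death observed, 0 = censored
e_low = rng.integers(0, 2, size=80)

res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p-value: {res.p_value:.4f}")

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="high Vδ1+ density")
print(f"median OS (high group): {kmf.median_survival_time_:.1f} months")
```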
7 The therapeutic potential of γδT cells in TNBC

TNBC presents significant diagnostic and treatment challenges due to its ambiguous biological characteristics. The IMpassion130 study marked a pivotal shift into the era of immunotherapy for breast cancer, making TNBC the most extensively studied cancer type in this domain (111). Recently, enhanced insights into the tumor microenvironment and immune evasion mechanisms have established immunotherapy as a viable approach for TNBC. γδT cells, which recognize tumor antigens without MHC restrictions and exhibit potent cytotoxic effects, can be substantially expanded in vitro and in vivo, demonstrating significant potential in tumor immunotherapy. The presence of TILs in TNBC has been linked to favorable prognoses, highlighting that adoptive cell therapy (ACT) provides new therapeutic avenues. ACT primarily encompasses therapies such as chimeric antigen receptor (CAR)-T, TCR, and TIL therapies, which all operate on similar principles (112).

The induction and adoptive transfer of γδT cells, particularly targeting the Vδ2 and Vδ1 subtypes, represent a promising avenue in cancer immunotherapy research. Commonly, the combination of P-Ags such as BrHPP and HMBPP or nitrogen-containing bisphosphonates (N-BPs) like zoledronic acid (ZOL) with IL-2 is utilized for both in vivo and in vitro expansion of Vδ2 γδT cells. This approach has been extensively implemented and clinically validated for therapeutic safety (21, 113-115). ZOL not only facilitates the transformation of Vδ2 T cells into TEM phenotypes but also significantly boosts their cytotoxic capabilities, thus enhancing tumor suppression (116). In vivo, Vγ9Vδ2 T cells stimulated by P-Ag or N-BP also demonstrate the capacity to target and eliminate multiple tumor cell types. However, the clinical response rate is typically lower compared to that observed in cases of overt metastasis (117, 118). In the context of neoadjuvant therapy for breast cancer, combining letrozole with ZOL to expand γδT cells in vivo has demonstrated substantial patient benefits (119). Additionally, ex vivo expansion of γδT cells for adoptive immunotherapy has shown notable anti-tumor effects in animal models (120). Vitamin C (VC) and its derivatives also positively influence the proliferation and activation of Vδ2 T cells; particularly at high doses, VC augments the cytotoxic effects of CD8+ T cells and synergistically boosts the efficacy of immunotherapy alongside immune checkpoint inhibitors (121-123). A novel protocol for Vδ2 T cell expansion developed by Xu et al., integrating ZOL, IL-2, IL-15, and VC, has proven effective in enhancing cell proliferation, differentiation, and cytotoxicity, significantly curtailing tumor growth and extending survival in mice (124).

Recent advancements in cancer immunotherapy have introduced an approach involving Delta One T (DOT) cells derived from Vδ1+ T cells. These cells are activated through TCR agonists and cytokines, resulting in substantial proliferation (12, 125). DOT cells have shown promising therapeutic effects against various tumor types, an outcome further enhanced by increased expression of NKp30, NKp44, NKG2D, and DNAM-1 (126, 127). Additionally, Raute et al.
have found that primarily expanded Vδ1+ and Vδ2+ T cells can target patient-derived triple-negative breast cancer stem cells (BCSCs). However, these BCSCs may differentiate in vivo into cells that lack stem cell-like properties and γδT cell activation ligands, thereby escaping effective γδT cell-mediated destruction. Nonetheless, γδT cells can still marginally kill these differentiated cells in vivo by recognizing P-Ag. Significantly, the cytotoxic effect is enhanced by ZOL, suggesting that a combination of γδT cells and ZOL might represent an effective strategy against both triple-negative BCSCs and non-stem tumor cells (128). In conclusion, the expansion of γδT cells offers new hope for treating patients with TNBC. Although this therapy remains under research, its potential for broad application in TNBC treatment holds significant promise as an effective therapeutic option.

CAR-T cell therapy, a form of adoptive cell therapy, merges the antigen specificity of antibodies with the effector functions of T cells, showing considerable potential to improve survival rates in TNBC patients (129). Upon reintroduction into the patient, CAR-T cells initiate cytotoxic immune responses by recognizing tumor-associated antigens. Initially employed in refractory hematologic malignancies (130), this technology has been extensively studied in TNBC, targeting antigens like ROR1, c-Met, EGFR, FRα, and MUC1. These targets provide specific foci for CAR-T cell therapy. However, the efficacy in solid tumors is often hampered by challenges such as tumor antigen heterogeneity, the immunosuppressive tumor microenvironment, and the limited infiltration and persistence of CAR-T cells within tumors (131, 132). Moreover, research has also ventured into expressing CARs on other effector cells, such as CAR-γδT cells, which specifically recognize and target tumor cell surface antigens, delivering a cytotoxic response (133). Capsomidis et al. have shown that CAR-γδT cells not only migrate efficiently to tumor sites but also exhibit strong cytotoxicity directed by specific tumor antigens (134). As demonstrated in xenograft mouse model studies, these cells exhibit significant anti-tumor activity both in vitro and in vivo, suggesting they could form a novel treatment approach for TNBC (135).

Abnormal signaling of immune checkpoint molecules has been observed to disrupt the normal function of the TCR and alter the phosphorylation levels of intracellular proteins via ITIM motifs and SHP-1/2. Consequently, this interference inhibits the proliferation and activation of γδT cells, leading to a reduction in their cytotoxicity (136). Therefore, the combined application of γδT cells and ICIs may emerge as an effective strategy to enhance the therapeutic efficacy of TNBC treatment. Additionally, bispecific antibodies have demonstrated potential in augmenting the efficacy of γδT cell immunotherapy by significantly enhancing cytotoxicity through the fusion of tumor-binding and T cell-engaging domains (137). Oberg et al.
reported that the administration of γδT cells expanded in vitro together with specific bispecific antibodies effectively slowed the growth of pancreatic and colon cancers in preclinical models (138, 139). Additionally, bispecific molecules (GABs) linking the extracellular domain of the tumor-reactive Vγ9Vδ2 TCR to a CD3-binding domain have been shown to promote T cell infiltration into the tumor microenvironment, thereby inhibiting tumor growth in vivo (140). Thus, bispecific engagers hold great potential as a form of γδT cell-based immunotherapy. If successful in clinical trials, this treatment could offer a powerful and relatively inexpensive therapeutic option for TNBC patients. Another innovative approach involves transducing αβ T cells with a high-affinity Vγ9Vδ2 TCR, termed T cells engineered with a defined γδTCR (TEGs) (12). TEGs demonstrate the ability to target a wide array of solid and hematological tumors and, in addition to exerting cytotoxic effects, exhibit paracrine activity that induces functional maturation of dendritic cells (14). Consequently, TEGs can effectively target a broad spectrum of tumor cells owing to the wide reactivity of the Vγ9Vδ2 TCR, thereby addressing the limitations of low persistence and impaired activation of γδT cells within the tumor microenvironment (12, 125). These novel therapeutic strategies instill hope for patients with TNBC by expanding treatment options and possibilities for the future.

8 Major challenges and clinical implications of utilizing γδT cells

One of the major challenges for γδT cells is their scarcity in the immune system, as they represent only a small fraction of the total T cell population (20). This limited presence hampers the ability to fully exploit their therapeutic potential, especially when compared to the more abundant αβ T cells. Additionally, the specific tissue distribution of γδT cells, predominantly located in the peripheral regions of non-lymphoid tissues, complicates their accessibility and clinical application. Despite considerable efforts, expanding various clones of γδT cells to clinically relevant numbers remains a significant obstacle to their widespread use in cellular immunotherapy (141). However, the Vδ1 subpopulation has been shown to predict a favorable prognosis in triple-negative breast cancer, as supported by protein- or gene-level analyses (9, 107). Another significant challenge is the limited role of γδT cells in the TME. Although several studies have shown that γδT cells can modulate immunosuppressive cells within the TME, the scarcity of nutrients, the presence of suppressor molecules, and hypoxia may still constrain their therapeutic potential (142, 143). Suppressive molecules produced by tumor cells and other cells in the TME, such as TGF-β (144), prostaglandin E2 (PGE2) (145), adenosine (146), and soluble NKG2D ligands (147), can interfere with γδT cell proliferation and function.
Despite these challenges, ongoing clinical trials are assessing the safety and anti-tumor efficacy of γδT cells. However, the clinical utility of Vγ9Vδ2 T cells may be hindered by susceptibility to T cell exhaustion and activation-induced cell death (AICD) (148). Moreover, in rare cases, stimulation of γδT cells with phosphoantigens and ZOL may lead to adverse effects including fever, fatigue, eosinophilia, thrombosis, elevated liver transaminases, hyperglycemia, gastritis, musculoskeletal pain, and nephrotoxicity (149, 150). Thus, the safety and tolerability of γδT cell therapy require meticulous consideration and monitoring throughout design and implementation. In conclusion, despite numerous challenges, γδT cell therapy holds significant promise as a novel immunotherapeutic approach. By comprehensively investigating the tumor microenvironment, devising effective therapeutic strategies, and employing advanced immunological techniques to expand and activate γδT cells, the efficacy of this therapy can be enhanced to yield improved clinical outcomes for patients with tumors.

9 Summary and future perspectives

TNBC, characterized by the absence of ER, PR, and HER2 expression, limits patients' options for hormone or targeted therapies, thereby necessitating new treatment strategies. γδT cells, key components of immune defense, can target and destroy tumor cells independently of traditional MHC-mediated antigen presentation, thereby exerting significant anti-tumor effects. However, their potential pro-tumor activities, including suppressing anti-tumor responses, enhancing tumor angiogenesis, and secreting IL-17, are subjects of ongoing debate (68, 151). Although the tumor microenvironment is thought to recruit numerous γδT cells that may promote tumor progression, single-cell RNA sequencing of fresh breast cancer tissues and patients' peripheral blood reveals that γδT cells generally correlate with favorable clinical outcomes, with tissue-infiltrating γδT cells being more active and cytotoxic than their blood counterparts (152). Recent mouse studies have noted pro-tumor and pro-metastatic effects of IL-17-producing γδT cells, although such cells are rare in humans. Evidence suggests that γδTILs contribute minimally to IL-17 secretion compared to Th17 and CD4+ T cells in the TME (109). This underscores the potential of γδT cell-based immunotherapy as a novel strategy for breast cancer treatment. Research indicates that γδT cells can inhibit TNBC progression through direct cytotoxic actions and by modulating immune responses. The presence of γδTILs in the TNBC microenvironment is strongly associated with favorable patient prognoses, underscoring their vital role in anti-tumor immunity. While various methods to expand γδT cells have shown promise in anti-tumor therapy, specific studies on their application in TNBC remain limited. Furthermore, enhancing the activity or specificity of γδT cells through CAR technology presents opportunities to improve their therapeutic potential in TNBC treatment.
Future research will continue to investigate the use of γδT cells in treating TNBC, with a particular emphasis on effectively expanding and activating these cells, overcoming the immunosuppressive tumor microenvironment, and enhancing their tumor-homing capabilities. Moreover, the use of engineered γδT cells, such as CAR-γδT cells and TEGs, either alone or in combination with checkpoint inhibitors, holds promise for enhancing the response rate and anti-tumor effects of γδT cell therapy. Considering the complexity and therapeutic challenges of TNBC, researchers must explore novel methods or techniques to achieve effective expansion of both Vγ9Vδ2 T cells and Vδ1 T cells. Specifically, the advancement of CAR-γδT cell therapy necessitates additional clinical data to support its use in TNBC treatment. As our understanding of the tumor microenvironment and immune evasion mechanisms expands, coupled with cutting-edge immunotherapies, TNBC treatment is poised to become increasingly personalized, significantly enhancing patient prognosis and quality of life.

FIGURE 1 The tumor cell recognition of γδT cells. γδT cells recognize tumor cells based on the expression of γδTCRs and NKRs. Specifically, the Vγ9Vδ2 TCR detects elevated intracellular levels of phosphorylated antigens, such as IPP, via BTN3A1, facilitating efficient tumor cell elimination. Additionally, they interact with tumor cells through the binding of NKG2D and DNAM-1 receptors to their respective ligands (MICA/B and ULBPs). Conversely, Vδ1 T cells recognize lipid antigens presented by CD1d via their Vδ1 TCR and utilize NKG2D and NCRs (NKp30 and NKp44) along with their corresponding ligands for tumor cell recognition. Moreover, Vγ9Vδ2 T cells effectively target and eradicate tumors via the CD16-mediated ADCC mechanism.
Transfusions and early outcomes in anaemic patients undergoing off- or on-pump coronary artery bypass grafting

Abstract

We retrospectively compared transfusion rates and early outcomes in 1621 consecutive patients with preoperative anaemia undergoing off-pump coronary artery bypass grafting (OPCAB) or on-pump coronary artery bypass grafting (ONCAB) surgery using a propensity score analysis with inverse probability of treatment weighting. Endpoints were transfusions, early morbidity, and mortality. Surgeries were performed by 45 dedicated OPCAB and/or ONCAB surgeons during the 10-year study period. Operative data did not differ significantly between study groups with the exception of a more frequent use of the bilateral internal mammary artery revascularization approach in OPCAB patients than in ONCAB patients. OPCAB was associated with fewer transfusions and a lower risk of requiring postoperative renal replacement therapy, but a higher risk of wound infections than ONCAB. Perioperative stroke risk and 30-day and 1-year mortality did not differ significantly between the groups. Our data in a 'real-world setting' indicate that in patients with preoperative anaemia both ONCAB and OPCAB are feasible surgical approaches regarding early morbidity and mortality.

INTRODUCTION

In cardiac surgery, preoperative anaemia is diagnosed in about 20% of patients [1]. Anaemia is associated with increased risk of acute kidney injury (AKI), stroke, infections and early mortality [1]. However, it remains currently unclear whether anaemia itself or increased transfusions of red blood cells (RBCs) trigger these adverse outcomes [2, 3]. Randomized controlled trials (RCTs) demonstrated that off-pump coronary artery bypass grafting (OPCAB) surgery is associated with significantly fewer RBC transfusions than on-pump coronary artery bypass grafting (ONCAB) [4]. We hypothesized that in patients with preoperative anaemia these differences in transfusions, and probably in clinical outcomes, would be even more distinct than in patients without anaemia.

METHODS

This retrospective cohort analysis of prospectively collected data included consecutive patients who underwent isolated OPCAB or ONCAB at our institution between January 2009 and December 2019. Patients with preoperative anaemia were selected from the entire group of patients who underwent coronary artery bypass grafting at our institution. Patients with concomitant or any previous cardiac surgery, low ejection fraction (<30%), preoperative dialysis, haemodynamic instability (preoperative cardiogenic shock), emergency status and patients undergoing minimally invasive direct coronary artery bypass grafting were excluded. The inclusion criterion was, according to the WHO definition, a haemoglobin (Hb) concentration below 12 g/dl (females) or 13 g/dl (males). A total of 1621 patients were finally included in our data analysis (OPCAB = 1188; ONCAB = 433). The study was approved by the local ethics committee on 20 April 2020 (AZ 2020-628). Surgery was performed as described before [5]. During the 10-year study period, 45 dedicated OPCAB and/or ONCAB surgeons performed the operations, with ONCAB and OPCAB procedures being done by 34 and 43 surgeons, respectively. Emergency surgery was, according to institutional standards, the only exclusion criterion for OPCAB surgery. In the OPCAB group, cell saving was used as a replacement for cardiotomy suction during the grafting procedure. For patients <70 years, the institutional transfusion threshold was an Hb value of ≤8-9 g/dl.
However, the threshold was raised to ≤9-10 g/dl in patients who required high inotropic support or were aged 70 years and over. During cardiopulmonary bypass (CPB), the critical value for RBC transfusions was an (expected) Hb of 7-8 g/dl. End points were RBC transfusions, early morbidity and mortality. Because of the nonrandomized group assignment, we generated a propensity score (PS) for each patient. For the creation of the PS, we used a multivariable logistic regression model with the type of surgery (ONCAB or OPCAB) as the binary dependent variable. The model comprised all baseline covariates of Table 1. After the PS was established, we applied inverse probability of treatment weighting [6]. Post-weighting balance in covariates was evaluated using standardized mean differences. To compare continuous and categorical perioperative outcome data, the Mann-Whitney U-test and Fisher's exact test were applied, respectively. Clinical outcome data are presented as relative risks and 95% confidence intervals. To account for multiple testing, the Benjamini and Hochberg false discovery rate (FDR) method was used to adjust the P values as previously described [7]. The false discovery rate was set at 5%. P values <0.05 were considered statistically significant. We performed all analyses using IBM SPSS Statistics version 24 (IBM Corporation, Armonk, NY, USA).

RESULTS

In the weighted study population, all baseline characteristics assessed were well balanced (Table 1). The percentage of transfused patients and the number of transfused RBC units were significantly lower in the OPCAB group than in the ONCAB group (Table 2). Likewise, prolonged mechanical ventilatory support, prolonged intensive care unit (ICU) stay, and the risks of stroke and haemofiltration were significantly lower in the OPCAB group than in the ONCAB group. However, use of the bilateral internal mammary artery (BIMA) revascularization approach and readmission for wound infection were significantly higher in the OPCAB group than in the ONCAB group. Thirty-day and 1-year mortality did not differ significantly between study groups (Table 2). After applying the Benjamini and Hochberg FDR method, prolonged mechanical ventilation, prolonged ICU stay and the perioperative risk of stroke became nonsignificant.

DISCUSSION

Our data challenge our primary hypothesis that, in the condition of preoperative anaemia, the difference in the number of patients transfused would be more distinct between the two surgical strategies. Due to the dilutional effect of CPB, the high transfusion rate of almost 100% in our anaemic ONCAB patients was an expected finding. However, the high transfusion rate of 90% in our OPCAB patients was unexpected and probably, at least in part, due to the liberal transfusion trigger in our elderly patients. Preoperative anaemia is regarded as an important risk factor in cardiac surgery [1].
Whether the slightly lower transfusion rate in our OPCAB group can explain the differences in renal complications between OPCAB and ONCAB surgery remains unclear. Notably, in a subgroup of high-risk patients in a meta-analysis of RCTs, OPCAB was associated with significantly lower transfusion rates and lower risks of postoperative AKI than ONCAB [4]. The higher risk of wound infections in our OPCAB group was an unexpected finding. However, inherent to the strategy of no-aortic-touch or less-aortic-touch OPCAB surgery, more patients in the OPCAB group underwent a BIMA approach than in the ONCAB group [8]. A recent meta-analysis showed that BIMA grafting, particularly in diabetic patients, is associated with an increased risk of wound-healing impairment and postoperative infections [9]. In our study, differences in RBC transfusions and early morbidity outcomes between OPCAB and ONCAB patients did not translate into significant group differences in 30-day or 1-year mortality. These data support the results of the aforementioned meta-analysis of RCTs in the subgroup of high-risk patients [4]. Strengths of our study are the relatively large and homogeneous study cohort and the use of the statistical inverse probability of treatment weighting approach. Limitations are the retrospective and monocentric study design, and the study group differences regarding the use or non-use of a CPB system and the no- or less-'aortic touch' approach, the latter resulting in increased BIMA use [5, 8]. Thus, we cannot definitively exclude the possibility that not considering perioperative data, such as BIMA use, for the PS has biased the study results. However, our data reflect the 'real-world' situation rather than that of randomized trials, which accurately balance groups in this regard. In conclusion, the number of transfused anaemic patients was statistically significantly, but only modestly, lower in OPCAB surgery than in ONCAB surgery. The type of surgery was also associated with the risks of AKI and infections. Since early mortality did not differ significantly between groups, our data indicate the feasibility of both surgical approaches.
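A minimal sketch of the propensity-score workflow outlined in the Methods (logistic-regression PS, inverse probability of treatment weights, a standardized-mean-difference balance check, and Benjamini-Hochberg adjustment) is given below. The covariates, treatment labels and p values are random stand-ins, and the original analysis was performed in SPSS, so this Python version is illustrative only.

```python
# Illustrative IPTW sketch on synthetic data; not the authors' SPSS analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
X = rng.normal(size=(1621, 5))            # baseline covariates (toy values)
treat = rng.integers(0, 2, size=1621)     # 1 = OPCAB, 0 = ONCAB (toy labels)

# Propensity score from multivariable logistic regression, then IPT weights
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))

def smd(x, t, w):
    """Weighted standardized mean difference for one covariate."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    s = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2.0)
    return (m1 - m0) / s

print([round(smd(X[:, j], treat, w), 3) for j in range(X.shape[1])])

# Benjamini-Hochberg adjustment of outcome p values at a 5% FDR
pvals = np.array([0.001, 0.02, 0.04, 0.30, 0.76])   # toy p values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(reject, p_adj.round(3))
```

Post-weighting SMDs below roughly 0.1 are the usual rule of thumb for acceptable covariate balance before comparing outcomes.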
An Uplifted over Logistics Costs Efficiency by the Hub and Spoke System at Cikarang Dry Port

Abstract: This research aimed to discover and analyze the efficiency of logistics costs achieved by the hub and spoke system at Cikarang Dry Port, which provides port and logistics services integrated with logistics companies and supply chains and thereby contributes to reducing logistics costs and dwelling time at Tanjung Priok Port. This research uses Factor Analysis and the AHP (Analytical Hierarchy Process) method to identify the factors that most influence logistics performance and how logistics costs at Cikarang Dry Port can be reduced. The Factor Analysis reduced the initial 14 elements to 8 elements, grouped into 3 factors: transportation, administration and inventory costs. The weight of loading and unloading costs is 0.24709, the weight of container costs is 0.20384 and the weight of stacking fees is 0.14429. The AHP results show that the most influential logistics cost elements at Cikarang Dry Port are F1 (loading and unloading costs), F6 (custom service fees), F8 (forwarding service costs), F2 (goods inspection service costs), F4 (stacking fees), F12 (service quality that needs to be improved), F3 (container tariffs) and F5 (loading and unloading labor rates).

INTRODUCTION

Economic activities, especially in world trade, are closely related to export and import activities, which are the main activities in the distribution of goods. The presence of industries in the hinterland region encourages the creation of the dry port concept, in which a facility functions like a seaport in general and supports the export, import and distribution of the goods and commodities produced there. A dry port is a piece of logistics infrastructure that acts as a node in the transportation network and a supporter of economic activity. Cikarang Dry Port plays an important role in national connectivity by simplifying access between the port and the hinterland (for example, an industrial area) as well as reducing traffic and congestion at the Port of Tanjung Priok. CDP provides port and logistics services integrated with dozens of logistics companies and supply chains, so it is expected to be able to reduce the density of loading and unloading flows and contribute to reducing dwelling time, which in turn will reduce container buildup at Tanjung Priok Port.

Based on the 2019 logistics performance index report, the dwelling time at Tanjung Priok Port is still 3-4 days, inferior to Singapore, which has a very fast dwelling time of only 1 day. Long waiting times often result in shortages of goods, fluctuations and high disparities in the prices of goods between regions. In addition, with its high waiting times, Indonesia's logistics performance is seen to be low, ranking only 46th in 2018. The low performance of Indonesian logistics was influenced by the low performance of logistics at PT. Cikarang Dry Port Indonesia. The logistics performance assessment is based on six aspects: the efficiency of customs and border management clearance (customs), the quality of trade and transportation infrastructure, the ease of arranging international shipments, the competence and quality of logistics services, the ability to track and trace consignments, and the frequency of timely deliveries. The Logistics Performance Index (LPI) also highlights logistics costs in Indonesia, which were still high at 23.5% in 2018.
The high cost of domestic logistics in Indonesia is caused not only by the high cost of land and sea transportation but also by other factors related to regulations, human resources, and logistics processes and management, which are still inefficient, as well as a lack of professionalism among national logistics service providers, resulting in an inefficient domestic freight forwarding industry. Therefore, to reduce these problems and improve the efficiency of logistics costs, a strategy to strengthen the maritime logistics system is needed to create cheaper logistics costs, one way being to apply the concept of collectors and feeders (hub and spoke).

A. Logistics

According to Li, X. in Karosekali and Santoso (2019), logistics is the management of the flow of goods, which starts from the point of origin and ends at the point of consumption, to meet certain demands. Meanwhile, according to the Council of Supply Chain Management Professionals in Karosekali and Santoso (2019), logistics is the part of supply chain management that plans, implements and controls the flow and storage of goods, information and services effectively and efficiently from the point of origin to the point of destination according to consumer demand. Bowersox in Karosekali and Santoso (2019) said that there are 5 components shaping a logistics system: facility location structure, transportation, inventory, communication, and handling and storage. Good logistics management is very effective at increasing a company's marketing efforts by providing efficient transfer of products to customers and time and place utility for the product (Lambert and Stock in Karosekali and Santoso, 2019).

B. Sizing of Logistics Performance

According to Sorooshian and Yin (2013), SCM is the management of a network of organizations from upstream to downstream, which includes the relations between two or more companies and the flow of material, information and resources. Logistics, meanwhile, is the process of planning, implementing and controlling procedures for the transportation and storage of goods efficiently and effectively. It is therefore important to measure supply chain and logistics performance, which can be done well through customer satisfaction surveys. Beside that, the role of the distribution network and its management is very important to meet consumer demand, thereby increasing sales and profits (Haryotejo, 2015).

C. Crossdocking

There are several types of crossdocking that can generally be applied, such as pre-packed cross docking and intermediate handling cross docking. Meanwhile, in the crossdocking warehouse management scenario there are 3 types: manufactured cross docking, distributor cross docking and retail cross docking (Abdillah in Karosekali and Santoso, 2019).

D. Dry Port Concept

According to Roso (2008), the dry port concept is briefly characterized as an integrated transport terminal located at some distance from a seaport, connected to the seaport by transport modes such as rail, road or waterway, and offering the services available at sea ports, such as customs clearance, container maintenance, storage and forwarding. He added that the main purpose of a dry port is to move activities from the sea port to the dry port, to reduce congestion and achieve other benefits.

E. Warehouse

According to Rushton in Karosekali and Santoso (2019), a stockroom or warehouse is an important component of the modern supply chain.
The supply chain involves activities at various stages, such as sourcing, production and distribution of goods, from the handling of raw materials and processed goods to finished products. A warehouse can be described as the part of a company's logistics system that functions to store products and provide information about the status and condition of the material or inventory stored in the warehouse, so that this information is always up to date and easily accessible to anyone with an interest.

F. Transportation

According to Ritonga et al. (2015), transportation is the process of moving people or goods from one place to another using a certain system to meet human needs by moving and interacting. He further said that transportation costs are influenced by two factors: product-related factors, which determine the product classification for the needs of the manufacturing level, and market-related factors, which determine the level of competition, market location, applicable regulations and the balance of goods traffic in an area.

G. Stock

According to Stevenson W. J. & Chuong S. C. in Karosekali and Santoso (2019), inventory is a stock or storage of goods. Inventories are not only necessary for operations but also contribute to customer satisfaction. Heizer and Render (2014) added that inventory has several functions, among them to eliminate the risk of delays in the delivery of raw materials or goods needed by the company, to eliminate the risk of receiving material of poor quality that must be returned, to eliminate the risk of seasonal price increases or inflation, and to store raw materials produced seasonally so the company will not face difficulties if the material is not available on the market.

H. Scheduling

According to Stevenson W. J. & Chuong S. C. in Karosekali and Santoso (2019), scheduling is related to the timing of the specific use of the resources of the organization. Scheduling mostly relates to the use of equipment, facilities and human activities, and occurs within every organization regardless of the nature of its activities. Effective scheduling can yield good results in the form of cost savings and increased productivity.

I. Factor Analysis

According to Karosekali and Santoso (2019), factor analysis is an analysis used to reduce or summarize a number of variables into fewer variables without reducing the meaning of the original variables. Factor analysis aims to confirm the structure of the factors analyzed based on a concept (theory), or to measure construct validity, which shows how well the results obtained from the measures accord with the theory. Another goal of factor analysis is to obtain a measure (in the form of a score) of latent variables based on several measurable variables. Based on the purpose of factor analysis, there are two types: exploratory factor analysis and confirmatory factor analysis.

J. Analytical Hierarchy Process (AHP)

According to Saaty in Karosekali and Santoso (2019), the Analytical Hierarchy Process (AHP) is a method that breaks a complex or unstructured situation down into its components, organizes the parts or variables of these components into a hierarchical arrangement, and assigns numerical values to these judgments to determine which variable has the highest priority.
Furthermore, AHP is useful for complex, unstructured problems for which sufficient written data are not available, such as planning, generating alternatives, setting priorities, selecting policies, allocating resources, determining requirements, forecasting outcomes, system design, performance measurement, and optimization in problem solving.

K. Theoretical Framework
The framework for this research is shown in the accompanying figure.

III. METHODOLOGY
This research is quantitative, as it involves several analyses of numerical data. These include statistical analyses carried out in the trip generation/attraction modeling stage to determine the influence of socioeconomic variables on goods loading and unloading flows at Cikarang Dry Port. Based on the research title, two types of variables are examined: responsibility accounting (X) and cost control (Y). The population used in this research consisted of the 120 companies recorded as users of logistics services at Cikarang Dry Port across the priority, red, yellow, and green lanes. The researchers sampled 30 logistics service users from the red lane, because it is the most expensive of the lanes. Data were collected through interviews conducted in the logistics section, especially the domestic import section; direct and indirect field observations of domestic imports at Cikarang Dry Port and of the port's logistics service users; and questionnaires distributed to respondents. The authors used the factor analysis method, which reduces or summarizes a number of elements into fewer elements without reducing the meaning of the original ones. In addition, the authors used the Analytical Hierarchy Process (AHP), which breaks a complex or unstructured situation down into components, arranges them into a hierarchy, and assigns numerical values to judgments in order to discover which elements have the highest priority.

A. Validity and Reliability Test
The validity test found that all computed r values exceeded the table value (0.3610), meaning that all statement items were declared valid; the results are shown in Table 3. The reliability test yielded a Cronbach's alpha of 0.861 > 0.60, so the variable measurements were reliable for use in subsequent analyses such as factor analysis.

B. Measure of Sampling Adequacy (MSA)
The Measure of Sampling Adequacy (MSA) is used to determine whether an element is adequate for further analysis; its value can be read from the anti-image correlation matrix. MSA estimation was carried out four times until all remaining variables had MSA values > 0.5, leaving 8 variables for the subsequent factor analysis.

C. Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy and Bartlett's Test of Sphericity
The Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy and the significance value of Bartlett's Test of Sphericity were used to examine the appropriateness of factor analysis. The tests gave a KMO value of 0.665, which lies between 0.5 and 1, and a significance value of 0.009 < 0.05, so factor analysis is appropriate for these data.
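As a minimal sketch of these reliability and adequacy checks, assuming the questionnaire responses sit in a pandas DataFrame (the actual survey data are only summarized in the paper, so random placeholder responses are used here), Cronbach's alpha can be computed from the item variances and the KMO and Bartlett statistics with the factor_analyzer package:

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha from a respondents x items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical questionnaire data: 30 respondents x 8 cost elements.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(30, 8)),
                    columns=[f"F{i}" for i in range(1, 9)])

chi2, p_value = calculate_bartlett_sphericity(data)   # Bartlett's test
kmo_per_item, kmo_total = calculate_kmo(data)         # per-item MSA + overall KMO

print(f"Cronbach's alpha: {cronbach_alpha(data):.3f}")
print(f"Bartlett chi2 = {chi2:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_total:.3f}")
# Items with per-item MSA < 0.5 would be dropped and the test repeated,
# as was done four times in this research.
```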
D. Shaping Factor
After the elements had been selected and the correlation requirements fulfilled, the next step was to form factors in order to determine the structure underlying the links between the initial variables. The method used was principal component analysis. The two main steps in forming the factors were determining the number of factors and rotating the factors formed; the results are shown in Table 7. The first criterion used is the eigenvalue: the table shows eigenvalues greater than 1 for factors 1, 2, and 3, so by this criterion the number of factors is 3. The second criterion is based on the percentage of total variance that can be explained by the factors to be formed. From the table, the cumulative total variance of the sample can be interpreted as follows:
- If all 8 elements are extracted into 1 factor, the total variance explained is 2.788 / 8 × 100% = 34.845%.
- If the 8 elements are extracted into 2 factors, the additional variance explained is 1.425 / 8 × 100% = 17.815%, giving a cumulative total variance for the 2 factors of 34.845% + 17.815% = 52.660%.
- If all 8 elements are extracted into 3 factors, the additional variance explained is 1.134 / 8 × 100% = 14.169%, giving a cumulative total variance for the 3 factors of 34.845% + 17.815% + 14.169% = 66.829%.
Extraction can thus be stopped at 3 factors, which satisfies the second criterion. The third criterion is based on the scree plot, which begins to level off when the initial elements are extracted into 3 factors. Combining these three criteria, the most appropriate extraction is 3 factors.

E. Communality
Communality is the amount of a variable's variance that can be explained by the extracted factors. The communality test shows extraction values > 0.50 for all variables, so all variables can be used to explain the factors.

F. Component Matrix
Having established that 3 factors is the optimal number, the component matrix shows the distribution of the 8 elements over the 3 factors; the figures in the table are factor loadings, which give the correlation between each element and factors 1, 2, and 3. Elements are assigned to factors by comparing the magnitudes of the correlations in each row.

G. Rotation
Rotation aims to obtain factors whose loadings are clear enough for interpretation. The results show that the loading values between each element and the factors are sufficiently differentiated: every element has a high loading on one factor and small loadings on the others. Table 10 shows that element F1 has its highest loading, 0.708, on factor 1; by the guidelines mentioned earlier, this value is significant because it is greater than 0.50.
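A minimal sketch of this extraction step, assuming the same hypothetical 30 × 8 response matrix as above: the eigenvalues, the explained-variance arithmetic from the text, and a varimax-rotated loading matrix can be reproduced with FactorAnalyzer:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(30, 8)),
                    columns=[f"F{i}" for i in range(1, 9)])

# Eigenvalues of the correlation matrix determine the number of factors.
eigenvalues = np.linalg.eigvalsh(data.corr().to_numpy())[::-1]
n_factors = int((eigenvalues > 1).sum())          # Kaiser criterion

# Each eigenvalue / 8 x 100% is the share of total variance it explains,
# e.g. 2.788 / 8 x 100% = 34.845% in the paper's Table 7.
explained = eigenvalues / len(eigenvalues) * 100
print("eigenvalues:", np.round(eigenvalues, 3))
print("cumulative % variance:", np.round(np.cumsum(explained), 3))

# Principal-component extraction with varimax rotation, as in the paper.
fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="varimax")
fa.fit(data)
loadings = pd.DataFrame(fa.loadings_, index=data.columns,
                        columns=[f"factor{i+1}" for i in range(n_factors)])
# Assign each element to the factor on which it loads most strongly
# (loadings > 0.50 are treated as significant).
print(loadings.round(3))
print(loadings.abs().idxmax(axis=1))
```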
The loading of element F1 on factors 2 and 3 is very small, so this element is assigned to factor 1. Element F2 has its highest loading, 0.843, on factor 2; by the same guideline this value is significant because it is greater than 0.50, and its loadings on factors 1 and 3 are very small, so the element is assigned to factor 2. The other elements are assigned to factor 3 in the same way. Table 11 shows the results of grouping the elements into factors.

I. Logistics Cost Evaluation by Hierarchy Structure at PT. Cikarang Dry Port
A hierarchical structure was used to find the cost factors that currently most influence logistics costs at PT. Cikarang Dry Port. The costs obtained from the factor-analysis grouping in the previous stage were arranged into the hierarchical form shown in Figure 4.

J. Pairwise Comparison Matrix
The pairwise comparison matrix at level 2 was obtained from the questionnaire that forms part of the AHP. This matrix compares each cost against the others and captures the relative importance of each cost. Next, the value in each cell was divided by the sum of its column; the result is often called the normalized matrix, in which each column sums to 1.

N. Determination of Priority Weighting
After the geometric means were obtained, the partial weights and the consistency of the matrix, which are the outputs of the AHP step, were computed. To obtain the priority list, the researchers used the Super Decisions software. The first step in this software is to build the AHP hierarchy; the hierarchy and the connections between its levels are then established, and the geometric-mean matrices obtained in the earlier manual calculation are entered into the software. Once all geometric means were entered, the priorities produced by the AHP method were obtained, as shown in Table 20. From Table 20, the 3 cost elements contributing most to logistics costs at PT. Cikarang Dry Port are Loading and Unloading Costs (F1), Container Costs (F3), and Stacking Fee Rates (F4). The largest cost factor overall is the Transportation Cost factor, comprising Loading and Unloading Costs (F1) + Customs Service Fee (F6) + Forwarding Services Tariff (F8) = 0.24709 + 0.13984 + 0.03943 = 0.42636 (42.63%), followed by the Inventory Cost factor, comprising Container Fee (F3) + TKBM Services Tariff (F5) = 0.20384 + 0.09461 = 0.29845 (29.84%), and finally the Administrative Cost factor, comprising Goods Inspection Fee (F2) + Stacking Fee (F4) + Service Quality that needs to be improved (F12) = 0.10932 + 0.14429 + 0.02159 = 0.2752 (27.52%).

A. Conclusion
Based on the results described above, the following conclusions can be drawn from this research:
- The factor analysis reduced the initial 14 cost elements to 8 cost elements, which were divided into 3 factors: transportation costs, administrative costs, and inventory costs.
- The AHP results show that the most important cost is transportation cost, with a total priority weight of 0.42636, consisting of loading and unloading costs (F1), customs service costs (F6), and forwarding service costs (F8).
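As a minimal sketch of the AHP step, assuming a hypothetical 3 × 3 pairwise comparison matrix (the paper's full questionnaire matrices are not reproduced here), the priority weights and Saaty's consistency ratio can be computed as follows:

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three cost factors
# (transportation, inventory, administrative); entry [i, j] is how
# strongly factor i is preferred over factor j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])
n = A.shape[0]

# Priority weights: normalize each column, then average across rows
# (the column-normalization step described in section J).
weights = (A / A.sum(axis=0)).mean(axis=1)

# Consistency check: lambda_max from A w = lambda w, then CI and CR.
lambda_max = (A @ weights / weights).mean()
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
CR = CI / RI if RI else 0.0

print("weights:", np.round(weights, 4))       # weights sum to 1
print(f"lambda_max={lambda_max:.3f}, CI={CI:.3f}, CR={CR:.3f}")
# CR < 0.10 indicates an acceptably consistent judgment matrix.
```

Column-normalizing and row-averaging is the approximate eigenvector method; software such as Super Decisions computes the principal eigenvector instead, which agrees closely when the judgment matrix is consistent.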
B. Suggestion
Based on the results described above, the authors offer the following recommendations:
- Establish new regulations and policies addressing the logistics cost elements at PT. Cikarang Dry Port that have the greatest impact on cost efficiency, so that logistics costs at PT. Cikarang Dry Port can be reduced.
- Identify and add other logistics cost elements at PT. Cikarang Dry Port that were not included among the cost elements examined in this research.
- Extend and compare this research using other decision-support methods such as ANP, TOPSIS, and others.
Variation in human mobility and its impact on the risk of future COVID-19 outbreaks in Taiwan

Background
As COVID-19 continues to spread around the world, understanding how patterns of human mobility and connectivity affect outbreak dynamics, especially before outbreaks establish locally, is critical for informing response efforts. In Taiwan, most cases to date were imported or linked to imported cases.

Methods
In collaboration with Facebook Data for Good, we characterized changes in movement patterns in Taiwan since February 2020, and built metapopulation models that incorporate human movement data to identify the high risk areas of disease spread and assess the potential effects of local travel restrictions in Taiwan.

Results
We found that mobility changed with the number of local cases in Taiwan in the past few months. For each city, we identified the most highly connected areas that may serve as sources of importation during an outbreak. We showed that the risk of an outbreak in Taiwan is enhanced if initial infections occur around holidays. Intracity travel reductions have a higher impact on the risk of an outbreak than intercity travel reductions, while intercity travel reductions can narrow the scope of the outbreak and help target resources. The timing, duration, and level of travel reduction together determine the impact of travel reductions on the number of infections, and multiple combinations of these can result in similar impact.

Conclusions
To prepare for the potential spread within Taiwan, we utilized Facebook's aggregated and anonymized movement and colocation data to identify cities with higher risk of infection and regional importation. We developed an interactive application that allows users to vary inputs and assumptions and shows the spatial spread of the disease and the impact of intercity and intracity travel reduction under different initial conditions. Our results can be used readily if local transmission occurs in Taiwan after relaxation of border control, providing important insights into future disease surveillance and policies for travel restrictions.
As the number of cases globally due to community transmission grows relative to the number of imported cases, attention has turned to more local measures to decrease spread, such as cancellations of mass gatherings, business closures, and local travel restrictions.11 Mobility data can provide critical information for responding to outbreaks and understanding the impact of travel restrictions.12 Recent studies have analyzed the effects of human mobility and travel restrictions on disease spread during the COVID-19 pandemic.6,13-16 Here, to prepare for COVID-19 and its impact, in collaboration with Facebook Data for Good, we describe the metapopulation models we built that incorporate human movement data to better understand the high risk areas of disease spread and assess the potential impact of local travel restrictions in Taiwan.

Movement data and geographic unit
We incorporated two different sources of mobility data from Facebook into our models: Facebook colocation data and Facebook movement data. Facebook's newly developed colocation matrices (Facebook colocation data) give the probability that people from two different geographic units will be in the same 600 m × 600 m location for five minutes, using data over the course of a week. Facebook's regular movement data (Facebook movement data) aggregate the number of trips Facebook users make between locations every eight hours (Figure S1).17 Mobility data between January 26th and June 30th were used. Facebook movement data were disaggregated by weekdays (Monday to Friday), weekends, and holidays (Lunar New Year, Ching Ming Festival, and Dragon Boat Festival). Facebook colocation data included weeks containing holidays and weeks containing only regular (i.e., non-holiday) days. The geographic unit used in this study was the centrally-governed level of "city" (here "city" indicates a city, county, or special municipality in Taiwan). Shape files were downloaded from the Government open data platform (https://data.gov.tw/dataset/7442). We excluded three cities outside of the main island of Taiwan from the analysis due to their low connectivity with the main island, leaving 19 cities.

Metapopulation Models
We developed susceptible-latent-infectious-recovered (SLIR) models of the spread of COVID-19 throughout Taiwan. We ran the models stochastically to understand the initial stages of disease spread. We ran each model until either (1) it reached n cumulative infections or (2) the total number of infections became 0, to estimate the probability of having more than n infections (denoted by Pn,k, where k represents the number of initial infections), the time it takes to reach n infections (denoted by Tn,k), and the standard deviation of infection numbers across cities at Tn,k (denoted by Vn,k). To assess the initial stages of the outbreak, we used n = 1000 and k = 3 as our baseline values.
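A minimal sketch of this stochastic scheme, assuming a generic coupling matrix standing in for the non-public Facebook-derived mixing; the parameter values follow the text (DL = 3.5, DI = 3, R0 = 2.4), and the stopping rule estimates P1000,3 by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(N, C, R0=2.4, DL=3.5, DI=3.0, k=3, n=1000, seed_loc=0, tmax=365):
    """One stochastic SLIR run on a metapopulation; returns cumulative infections.

    N: population per location; C: coupling matrix standing in for the
    Facebook-derived mixing (an assumption, not the paper's actual matrix).
    """
    m = len(N)
    S, L, I = N.copy().astype(int), np.zeros(m, int), np.zeros(m, int)
    S[seed_loc] -= k; I[seed_loc] += k
    cum = k
    for _ in range(tmax):
        # Force of infection in each location from coupled infectious pools.
        lam = (R0 / DI) * (C @ (I / N))
        new_L = rng.binomial(S, 1 - np.exp(-lam))      # S -> L
        new_I = rng.binomial(L, 1 - np.exp(-1 / DL))   # L -> I
        new_R = rng.binomial(I, 1 - np.exp(-1 / DI))   # I -> R
        S -= new_L; L += new_L - new_I; I += new_I - new_R
        cum += new_L.sum()
        if cum >= n or (L + I).sum() == 0:
            break
    return cum

# Toy example: 19 locations with mostly-local mixing.
N = np.full(19, 200_000)
C = np.full((19, 19), 0.01) + np.eye(19) * 0.81   # rows sum to ~1
runs = [simulate(N, C) for _ in range(200)]
P_1000_3 = np.mean([c >= 1000 for c in runs])     # Monte Carlo estimate of P1000,3
print(f"Estimated P1000,3 = {P_1000_3:.2f}")
```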
Let Si, Li, Ii, and Ri be the number of susceptible, latent, infectious, and recovered individuals in location i, respectively, and Ni be the total population in location i. Let DL (= 3.5 days) be the latent period, and DI (= 3 days) be the duration of infectiousness.13 Because transmission rates can change with non-pharmaceutical interventions, as shown in previous studies, we varied R0 in our model (R0 = 2.4, 1.2, or 0.9).13,18,19 We modified spatial models from previous studies20-23 and constructed two metapopulation models, a contact model and a residence model, with the former using Facebook colocation data and the latter using Facebook movement data.

In the "contact model", we assumed that contact rates (and therefore transmission rates) varied among locations and were proportional to the colocation probabilities (Cij, the probability that a person from location i colocates with a person from location j) from Facebook colocation data. We scaled R0 by Cij·Nj (for j not equal to i) or Cii·(Ni − 1), standardized to Cii·(Ni − 1) in Taipei. In the "residence model", we first estimated the proportion of time people living in location i spend in location j (Pij) based on Facebook movement data (see details in Supplementary Methods), and modeled the transmission dynamics by considering both that (1) non-travelers are infected by infectious visitors to their home location and that (2) naïve travelers are infected when they travel. The remaining equations are the same across the two models. Besides using different movement data, the major difference between the two models is that the transmission rate within each city (R0/DI) varies with the colocation matrices in the contact model, while it remains constant in the residence model. In this sense, the contact model is similar to the traditional density-dependent model, in which contact rates (and therefore transmission rates) vary with population density, and the residence model is similar to the frequency-dependent model.24 As it is unclear which is most appropriate for COVID-19, we used both and compared the results.

Risk of infection and regional importation
We defined three connectivity measures relevant for disease transmission: risk of infection, risk of regional importation, and source of importation. Using Facebook colocation data, we defined R0ii as the intracity R0 and Σj≠i R0ij as the intercity R0 for location i; the risk of infection for location i is the sum of the intracity R0 and intercity R0, standardized to the highest value. Similarly, using Facebook movement data, we defined (Σj≠i mji)/qi as the risk of regional importation (i.e., importation from other cities within Taiwan) for location i, where qj represents the average number of subscribers in location j and mji represents the average number of people moving from location j to location i per unit of time in the Facebook movement data. Source of importation was defined as the number of travelers from each location i, standardized to the highest value.
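A minimal sketch of these three measures, assuming hypothetical stand-ins for the non-public Facebook inputs: an R0 matrix scaled from colocation, a movement matrix m (rows = origin, columns = destination), and subscriber counts q:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 19                                   # cities on the main island

# Hypothetical inputs standing in for the Facebook-derived quantities.
R0_matrix = rng.uniform(0.0, 0.1, (n, n))              # R0_ij from colocation
np.fill_diagonal(R0_matrix, rng.uniform(1.0, 2.4, n))  # intracity R0_ii dominates
m = rng.integers(100, 5000, (n, n)).astype(float)      # movers origin -> destination
np.fill_diagonal(m, 0)
q = rng.integers(50_000, 500_000, n).astype(float)     # subscribers per city

intracity_R0 = np.diag(R0_matrix)
intercity_R0 = R0_matrix.sum(axis=1) - intracity_R0    # sum over j != i

risk_of_infection = intracity_R0 + intercity_R0
risk_of_infection /= risk_of_infection.max()           # standardized to highest value

# Incoming travelers relative to the local population of city i.
risk_of_regional_importation = m.sum(axis=0) / q
risk_of_regional_importation /= risk_of_regional_importation.max()

# Outgoing travelers from each city i.
source_of_importation = m.sum(axis=1) / m.sum(axis=1).max()

print("top-3 risk-of-infection cities:", np.argsort(risk_of_infection)[::-1][:3])
```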
Facebook colocation data from regular days were used to calculate risk of infection, and weekday movement data were used to calculate risk of regional importation and source of importation.

Modeling travel reduction
To assess the potential effect of travel restrictions at multiple levels, we modeled intracity travel reductions, intercity travel reductions, or a combination of both ("overall reduction" in the text and figures) for 1, 2, 3, or 6 months or for the whole period of time. Travel reductions started from the beginning of the simulations or when there were 10, 20, 30, 50, or 100 accumulated infections. The proportion of reduction is denoted by G. In the contact model, intracity reduction was modeled by R0ii·(1 − G) for all i, and intercity reduction by R0ij·(1 − G) for all i not equal to j. In the residence model, intracity reduction was modeled by R0·(1 − G), and intercity reduction by Pij·(1 − G) for all i not equal to j together with Pii + (1 − Pii)·G for all i (a sketch of these rules follows below).

Varying human mobility across space and time in Taiwan
We quantified how intercity and intracity mobility varied at the centrally-governed level over the past five months using Facebook mobility data. On average, intracity movement was 7 times intercity movement, and intracity colocation probability was over 200 times intercity colocation probability. We quantified the difference in connectivity between locations with three measures: risk of infection, risk of regional importation, and source of importation (Figure 1 and Table S1). Risk of infection identifies locations with larger colocation probabilities; if contact rates are assumed proportional to colocation probabilities, disease transmission rates are expected to be higher in locations with higher risk of infection, such as Taipei City, New Taipei City, and Kaohsiung City. Risk of regional importation represents the relative number of travelers and local people, with higher values indicating a higher probability that travelers will transmit the virus to local people. Source of importation captures how much travelers from each location contribute to other locations; the virus is expected to spread more quickly if initial local infections in Taiwan occur in locations with higher values of source of importation. Taipei City and New Taipei City are the cities with the highest risk of regional importation and source of importation, respectively. These three measures quantify different but related aspects of mobility and connectivity relevant for disease transmission, and while well-connected cities tend to have high values for all three measures, some differences remain among them.

We found that intercity mobility between some locations first decreased and then increased over the past few months, consistent with the changes in the number of local cases in Taiwan and the global number of cases (Figure S2 and Figure S3), indicating the level of change that can occur without government-imposed travel restrictions. We also observed significant changes during holidays. Lunar New Year fell within the early stages of the SARS-CoV-2 outbreak, and for most city pairs (95%), colocation probabilities during Lunar New Year were significantly higher than on regular days, as expected during holidays.
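A minimal sketch of the reduction rules referenced above, assuming the same kind of hypothetical R0 matrix and a row-stochastic Pij matrix:

```python
import numpy as np

def reduce_contact_model(R0_matrix, G_intra=0.0, G_inter=0.0):
    """Contact model: R0ii*(1 - G_intra) on the diagonal, R0ij*(1 - G_inter) off it."""
    out = R0_matrix * (1 - G_inter)
    np.fill_diagonal(out, np.diag(R0_matrix) * (1 - G_intra))
    return out

def reduce_intercity_residence(P, G):
    """Residence model intercity reduction: Pij*(1 - G) for j != i, with the
    removed travel time returned home, Pii + (1 - Pii)*G."""
    out = P * (1 - G)
    np.fill_diagonal(out, np.diag(P) + (1 - np.diag(P)) * G)
    return out                     # rows still sum to 1

P = np.array([[0.90, 0.06, 0.04],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])
P_reduced = reduce_intercity_residence(P, G=0.5)
print(P_reduced.round(3))
print(P_reduced.sum(axis=1))       # row sums remain 1 by construction
```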
However, the proportion of city pairs with higher than usual intercity connectivity during the Ching Ming Festival, which occurred when the number of cases was increasing dramatically globally and the number of local cases was just starting to decrease, fell to 67%. The Dragon Boat Festival occurred after the number of local cases had remained zero for more than a month, and the proportion of city pairs with higher than usual intercity mobility rose to 76%.

The impact of the location of initial infections on the risk of spread
At the end of June 2020, most cases in Taiwan were imported or linked to imported cases. Therefore, we used metapopulation models parameterized by human mobility data from Facebook to simulate the spread of SARS-CoV-2 under a variety of initial conditions, including both the number of initial infections and their locations. We developed a web-based interface (https://roachchang.shinyapps.io/TW_CoV_Dynamics/) to show the geographic distribution of infections given different initial conditions, which can be readily used to inform targeted surveillance and control if SARS-CoV-2 starts spreading locally in Taiwan. Because other disease-relevant hygiene behaviors, such as hand washing, mask wearing, or social distancing, may also have changed with awareness of COVID-19, we explored different transmission rates (R0 = 2.4, 1.2, or 0.9). We considered different aspects of disease spread: the probability of an outbreak, the speed of spread, and the geographic range of the outbreak. We estimated the probability of having more than 1000 infections (denoted by P1000,k, where k represents the number of initial infections) using stochastic simulations and used this to represent the probability of an outbreak. As expected, we found that if transmission rates were assumed to vary among cities (contact model), the probability of having more than 1000 infections also varied with the locations of the initial infections, with cities with larger risk of infection showing larger P1000 (Figure S4A and Table S2). In simulations where 1000 infections were reached, the time it took to reach 1000 infections (denoted by T1000,k) was also shorter for cities with larger risk of infection (Figure S4B). When transmission rates were assumed to be the same across cities (residence model), the probability of having more than 1000 infections and the time to reach 1000 infections did not vary much with the locations of the initial infections (Figure S5 and Table S3). The effect of intercity connectivity, however, was reflected in the variation in infection numbers across cities at T1000 (denoted by V1000): the variation in infection numbers was lower when initial infections were in cities with higher values of source of importation (Figure S4C), as the chance of spreading the virus to other cities was higher. In both models, well-connected cities played more important roles, as they spread the virus to other cities more quickly and more widely.

The impact of varying mobility on the risk of spread
The above results were based on mobility data from regular days. Given that human mobility varied significantly over the past few months without government-imposed travel restrictions, we further quantified the impact of varying mobility in Taiwan on the risk of spreading SARS-CoV-2.
The impact was mainly reflected in the geographic range of infections in both models. When initial infections occurred in or around Lunar New Year, the speed of disease spread was enhanced (Figure 2). Because mobility during the Ching Ming Festival and Dragon Boat Festival differed less from regular days and these two holidays lasted only four days each, they led to only minor differences in the geographic range of infections (Figure 2). In the contact model, the probability of a local outbreak was higher if initial infections occurred in or around Lunar New Year, and this impact was more apparent when initial infections were in locations with lower risk of infection (Figure S6).

The effect of travel restrictions
Having examined the impact of the mobility variation that occurred naturally during holidays, we next explored the extent to which government-imposed travel restrictions could reduce the spread of SARS-CoV-2 in Taiwan at the initial stage of an outbreak. In both the contact model and the residence model, decreasing intracity movement had a much larger impact on P1000,3 (Figure 3) and T1000,3 (Figure S7) than decreasing intercity movement. The impact of reducing intercity travel was most evident in how widespread the virus was: infections were located in only a few cities at T1000,3 if intercity travel was reduced (Figure S8). We then investigated the impact of the duration and timing of travel reductions (Figure 4, and details at https://roachchang.shinyapps.io/TW_CoV_Dynamics/). The probability of a local outbreak decreased with increased duration of intracity travel reduction, but did not change with the duration of intercity travel reduction. The results suggest that higher levels of reduction and longer periods of reduction for intracity travel can have similar impacts; for example, a 60% intracity travel reduction for 20 days had outcomes similar to a 70% reduction for 10 days. While P1000,3 did not change with the length of intercity travel reduction, longer intercity travel reduction led to slower progression of the outbreak (higher T1000,3) in the contact model and more clustered infections (higher V1000,3) in both models (Figure S9). Furthermore, among the parameters we examined, reducing travel as early as possible was best for reducing the risk of an outbreak (Figure S10).

DISCUSSION
By utilizing aggregated human mobility data from Facebook, we characterized how mobility patterns in Taiwan have changed since the emergence of COVID-19, and built metapopulation models to understand the potential spread of SARS-CoV-2 in Taiwan and assess the potential impact of travel restrictions. We identified the cities with the highest risk of infection as well as those with the highest importation risk from other cities, based on Facebook data and population sizes. We built a web-based interface showing the geographic distribution of infections at different time points (T100, T500, and T1000) in the initial stages of an outbreak given different locations of initial infections.
We demonstrate that these modeling results based on empirical mobility data can be obtained before an outbreak occurs, and can be readily used to help the public avoid high-risk areas, help public health professionals identify surveillance targets, and inform decisions on travel restrictions, providing one of the key elements of COVID-19 preparedness. Consistent with previous findings showing that international or domestic travel bans are less effective than social distancing,6,25 we found that intracity travel reduction has a higher impact on disease dynamics than intercity travel reduction, and that increasing the length of intracity travel reduction increases the impact. Intercity travel reduction, however, influences the variation in infection numbers across cities and can reduce the number of cities with infections at the initial stage of an outbreak. While intercity travel reduction did not decrease the probability of an outbreak, containing infections to a few cities has important public health benefits, as the surveillance system can focus on fewer cities and control efforts can be more targeted. Practically, intercity reduction might also be easier to implement than intracity reduction.

Intracity travel reduction in our model is effectively equivalent to any measure that reduces contact rates between individuals, such as social distancing, or the transmission probability given contact, such as hand washing or wearing facemasks. These measures have been shown to be effective in reducing the transmission of respiratory viral pathogens in both modeling and empirical studies26-30 and should be encouraged. It has been shown that contact rates can be reduced by more than 70% during a lockdown.31 Our study found that similar probabilities of outbreak can result from various combinations of the length, level, and timing of travel reductions. Health officials can therefore take into consideration the feasibility of different interventions, their impact on society, and the capacity of the healthcare system to determine the optimal interventions and their duration.5 Because the volume of travel in and around holidays can increase the speed of virus spread, our results suggest that it is important to avoid travel, or to reduce the impact of travel through measures such as limiting social interactions and wearing facemasks on public transportation, to reduce the spread of the virus. We showed that Facebook mobility data can be used to track how the volume and pattern of travel change through time as an outbreak progresses, and any change in human mobility can be incorporated into the metapopulation models in nearly real time to help fight COVID-19.12,32 Moreover, our model utilizing human mobility data from Facebook is not limited to the intercity or intracity level, or to Taiwan. Facebook mobility data are also calculated at finer geographic scales (such as towns) and for other countries, and our model can easily be applied in these settings to understand the disease dynamics of COVID-19.

CONCLUSIONS
In Taiwan, most cases to date were imported or linked to imported cases.
To prepare for the potential spread within Taiwan, we utilized Facebook's aggregated and anonymized movement and colocation data to identify cities with higher risk of infection and regional importation. We showed that both intracity and intercity movement affect outbreak dynamics, with the former having more of an impact on the total number of cases and the latter affecting the geographic scope. The timing, duration, and level of travel reduction together determine the impact of travel reductions on the number of infections, and multiple combinations of these can result in similar impact. These findings have important implications for guiding future policies on travel restrictions during outbreaks in Taiwan.

DECLARATIONS
Ethics approval and consent to participate
This study received ethical approval from the Research Ethics Committee of National Tsing Hua University (REC #: 10812HM119). Aggregated, anonymized data were analyzed, and no individuals were enrolled in this study.

Availability of data and materials
Modeling output data are in the manuscript or can be accessed through the publicly available web-based interface (https://roachchang.shinyapps.io/TW_CoV_Dynamics/). Due to the Data Sharing Agreement with Facebook, readers cannot access the original Facebook mobility data used in this study.

Competing interests
The authors declare that they have no competing interests.

Funding
This study was supported by the Ministry of Science and Technology in Taiwan (MOST 109-2636-B-007-006). MCC, YAL, and HHC were supported by the Yushan Scholar Program; CSL was supported by the Ministry of Science and Technology in Taiwan (MOST 109-2636-B-007-004); RK was supported by the National Institute of General Medical Sciences (U54GM088558); and COB was supported by the National Institute of General Medical Sciences (R35GM124715-02). The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funders.

Authors' contributions
HHC, CSL, and COB designed the experiments. HHC and RK created the models. MCC ran the simulations. MCC and HHC performed the analysis. HHC, RK, and MCC interpreted the results and wrote the manuscript. YAL and CSL collected data for the models. All authors have read and approved the manuscript.

Figure 2. The impact of holiday travel on the disease spread. The speed of disease spread, quantified by the probability of spreading to 2 or more cities by the time 50 infections are reached, is shown for simulations with initial infections in Taipei City (representing big cities) or Pingtong County (representing small cities). The impact of Lunar New Year (10 days) was larger than that of the Ching Ming Festival (4 days) and Dragon Boat Festival (4 days).
Initial infections occurred either during (blue) or before the holidays (red: 7 days before; green: 14 days before). R0 = 2.4.

Figure 3. The impact of travel reduction on the probability of having 1000 infections. P1000,3 from simulations with initial infections in Taipei City (representing big cities) or Pingtong County (representing small cities), using both the contact and residence models, is shown. The difference between big and small cities was more significant in the contact model than in the residence model. Intracity and intercity travel reduction both reduced P1000,3, although the impact of intercity travel reduction was minor. Here travel reduction was applied during the whole time and R0 = 2.4.

Figure 4. The color represents the level of reduction in P1000,3 (white to red represents smaller to larger reduction). As the duration of intracity travel reduction increased, P1000,3 decreased in both models. P1000,3 did not change with the duration of intercity travel reduction.

Estimating Pij
We built a travel model to estimate the proportion of time people living in location i spend in location j (Pij) by fitting the model to the Facebook movement data. Xij represents the proportion of people living in location i currently in location j, X*ij represents the equilibrium state of Xij, and the value of X*ij under the fitted model is used as our estimate of Pij. People living in location i travel with probability Fi, and the probability that a traveler from location i travels to location j is denoted by Tij. Travelers go back to their home location with probability τ per unit of time. Mij,t,t+1 represents the number of people moving from location i to location j between time t and t+1:

Mij,t,t+1 = Ni Xii,t Fi Tij + Nj Xji,t τ

that is, movement from i to j consists of residents of i who leave home for j plus residents of j currently in i who return home. For simplicity, we assumed that the majority of travel is work-related and that travelers on average spend eight hours at the destination (τ = 1 given that the unit of time is 8 hours), and that Tij is proportional to Mij, leaving Fi as the only parameters to be fitted. We used a gradient descent algorithm to find the local optimum for Fi, where the cost function is defined as the sum of squared differences between the normalized mij and the normalized Mij from the model. We calculated X*ij under the fitted parameters to obtain estimates of Pij.
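A minimal sketch of this fitting procedure, assuming hypothetical observed flows mij and the simplification τ = 1 (so everyone is home at the start of each step); the cost function and normalization only loosely follow the supplement's description:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
m_obs = rng.integers(100, 1000, (n, n)).astype(float)   # hypothetical observed flows
np.fill_diagonal(m_obs, 0)
N = rng.integers(50_000, 200_000, n).astype(float)

# T_ij proportional to observed flows (row-normalized), as in the supplement.
T = m_obs / m_obs.sum(axis=1, keepdims=True)

def model_flows(F):
    """One-step outgoing flows with tau = 1, so X_ii ~ 1 at the start of a step."""
    return (N * F)[:, None] * T

def cost(F):
    M = model_flows(F)
    return ((m_obs / m_obs.sum() - M / M.sum()) ** 2).sum()

# Plain gradient descent with a central-difference numerical gradient.
F = np.full(n, 0.1)
lr, eps = 0.05, 1e-6
for _ in range(2000):
    grad = np.array([
        (cost(F + eps * np.eye(n)[i]) - cost(F - eps * np.eye(n)[i])) / (2 * eps)
        for i in range(n)
    ])
    F = np.clip(F - lr * grad, 1e-4, 1.0)    # keep travel probabilities valid

# With tau = 1, the equilibrium time allocation P_ij follows directly from F and T:
# a fraction (1 - F_i) stays home and F_i * T_ij is away in j.
P = np.eye(n) * (1 - F)[:, None] + F[:, None] * T
print(np.round(P, 3))
print(P.sum(axis=1))                          # rows sum to 1
```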
Figure S6. The impact of holiday travel on the probability of outbreak. The probability of outbreak (P1000) increased with mobility during Lunar New Year (10 days). The impact of the Ching Ming Festival (4 days) and Dragon Boat Festival (4 days) was less apparent. Initial infections occurred either during (blue) or before the holidays (red: 7 days before; green: 14 days before). R0 = 2.4.

Figure S7. The impact of travel reduction on the time to reach 1000 accumulated infections. If initial infections were in a big city, it took less time to reach 1000 infections in the contact model; the difference between big and small cities was not significant in the residence model. Intracity and overall travel reduction delayed the time to reach 1000 infections in both models, while intercity reduction did not. For some conditions, P1000,3 was 0 and no bar is shown. Here travel reduction was applied during the whole time and R0 = 2.4.

Figure S10. P1000,3 when travel reduction started under different conditions. P1000,3 is shown for travel reduction starting from the beginning of the simulations (denoted by 0) or when there were 10, 20, 30, 50, or 100 infections, in both the contact (left) and residence (right) models. Two different travel reduction durations are shown: (A) 10 days and (B) 1 month. Only intracity travel reduction is shown here because intercity travel reduction had minimal impact on P1000,3, and the results for overall reduction and intracity reduction were qualitatively similar.
It was best to reduce travel at the beginning, whether the duration was 10 days or 1 month. Here initial infections were in Taipei City and R0 = 2.4.
A Comparison of Micro-CT and Dental CT in Assessing Cortical Bone Morphology and Trabecular Bone Microarchitecture

Objective
The objective of this study was to evaluate the relationship between trabecular bone microarchitecture and cortical bone morphology by using micro-computed tomography (micro-CT) and dental cone-beam computed tomography (dental CT).

Materials and Methods
Sixteen femurs and eight fifth lumbar vertebrae were collected from eight male Sprague Dawley rats. Four trabecular bone microarchitecture parameters of the fifth lumbar vertebral body (percent bone volume [BV/TV], trabecular thickness [TbTh], trabecular separation [TbSp], and trabecular number [TbN]) were calculated using micro-CT. In addition, the volumetric cancellous bone grayscale value (vCanGrayscale) of the fifth lumbar vertebral body was measured using dental CT. Furthermore, four cortical bone morphology parameters of the femoral diaphysis (total cross-sectional area [TtAr], cortical area [CtAr], cortical bone area fraction [CtAr/TtAr], and cortical thickness [CtTh]) were calculated using both micro-CT and dental CT. Pearson analysis was conducted to calculate the correlation coefficients (r) of the micro-CT and dental CT measurements. Paired-sample t tests were used to compare the differences between the measurements of the four cortical bone morphology parameters obtained using micro-CT and dental CT.

Results
High correlations between the vCanGrayscale measured using dental CT and the trabecular bone microarchitecture parameters (BV/TV [r = 0.84] and TbTh [r = 0.84]) measured using micro-CT were observed. The absolute values of the four cortical bone morphology parameters may differ between the dental CT and micro-CT approaches. However, high correlations (r ranging from 0.71 to 0.90) among these four cortical bone morphology parameters measured using the two approaches were obtained.

Conclusion
We observed high correlations between the vCanGrayscale measured using dental CT and the trabecular bone microarchitecture parameters (BV/TV and TbTh) measured using micro-CT, in addition to high correlations between the cortical bone morphology measured using micro-CT and dental CT. Further experiments are necessary to validate the use of dental CT on human bone.

Introduction
Human bones are generally classified into cortical bone (synonymous with compact bone) and cancellous bone (synonymous with trabecular bone or spongy bone). The two types are classified based on porosity and the unit microstructure. Cortical bone is much denser than cancellous bone, with a porosity ranging between 5% and 30% [1]. Cortical bone is primarily located in the shaft of long bones and forms the outer shell around cancellous bone (vertebrae or pelvis). Cancellous bone is considerably more porous than cortical bone, with a porosity ranging between 30% and 90% [1]. It is located at the ends of long bones, in vertebrae, and in flat bones such as the pelvis. Bone quality and quantity are affected by numerous factors, such as age, hormones, arthritis, and exercise. Clinically, orthopedic physicians commonly use dual-energy X-ray absorptiometry (DXA) to measure the bone mineral density (BMD) of the femoral neck or spine to determine patients' bone strength [2]. Bone strength is affected by both geometric and densitometric parameters. However, DXA provides only areal BMD information [3,4] and does not include geometric parameters such as size and shape.
Although quantitative computed tomography (QCT) can provide both geometric and densitometric parameters [2,5], this technique is not widely applied clinically because of its cost and high radiation dose. Recently, dental cone-beam computed tomography (dental CT) has been widely used to evaluate alveolar bone density prior to dental implant placement [17-23]. Nomura et al. [24] indicated that dental CT could be used to evaluate bone mineral content based on voxel values. In addition, numerous researchers have used the grayscale values of dental CT to represent bone density (bone density in grayscale value), which is also called radiographic bone density [20-23]. However, most of these researchers have used dental CT in dental-related research or clinical trials. In our previous study [16], we indicated that dental CT is superior to DXA for predicting cortical bone fracture loads in rat femurs and tibias. Nevertheless, the relation between the bone density in grayscale measured using dental CT and trabecular bone microarchitecture remains unclear. Therefore, the purpose of this study was to evaluate the relationship between cortical bone morphology and trabecular bone microarchitecture by using micro-CT and dental CT.

Specimen preparation
Sixteen femurs and eight fifth lumbar vertebrae were collected from eight 4-month-old healthy male Sprague Dawley rats. All rats were killed by carbon dioxide asphyxiation, and the entire femurs and fifth lumbar vertebrae were harvested from every rat within 20 min. The bone specimens were wrapped in gauze soaked in saline and stored in a −20°C freezer. The study procedures were conducted in strict accordance with the recommendations provided in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Animal research ethics approval was obtained from the Research Ethics Committee of the Taichung Veterans General Hospital (Permit Number: La-1021069).

Micro-CT measurement
Micro-CT images of each femur and fifth lumbar vertebra were obtained using a Skyscan 1076 micro-CT device (Skyscan, Aartselaar, Belgium) (Fig. 1a). The scanning parameters were 49 kV, 200 μA, 500 ms, and a voxel resolution of 18.27 μm. The micro-CT images were imported into CTAn software (Skyscan) to measure the four trabecular bone microarchitecture parameters, BV/TV, TbTh, TbSp, and TbN, of the cancellous bone in the fifth lumbar vertebral body (Fig. 1b). Furthermore, four cortical bone morphology parameters, TtAr, CtAr, CtAr/TtAr, and CtTh, of the femoral diaphysis were calculated (Fig. 1c) using ImageJ (Rasband, W.S., ImageJ, U.S. National Institutes of Health, Bethesda, MD, USA). The trabecular bone microarchitecture and cortical bone morphology parameters measured in this study are listed in Table 1.

Dental CT measurement
A dental CT device (AZ 3000, Asahi Roentgen, Japan) was used to obtain dental CT images of each femur (Fig. 2a). The scanning parameters were 85 kV, 3 mA, and a voxel resolution of 100 μm. In the dental CT approach, we used only one grayscale value (the volumetric cancellous bone grayscale value, vCanGrayscale) to represent the cancellous bone of the fifth lumbar vertebral body, because the resolution of dental CT is not sufficiently high for detecting the trabecular bone structure of rats.
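A minimal sketch of extracting a single cancellous-bone grayscale value from a segmented vertebral body, assuming a hypothetical grayscale volume and binary mask, and using the 3-voxel erosion described in the next paragraph to strip the cortical shell:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)

# Hypothetical dental CT volume (grayscale) and a binary vertebral-body mask.
volume = rng.normal(1000, 200, size=(60, 60, 60))
mask = np.zeros(volume.shape, dtype=bool)
mask[10:50, 10:50, 10:50] = True          # segmented vertebral body

# Erode the segmentation by 3 voxels (0.3 mm at 100 um resolution)
# to exclude the thin cortical shell surrounding the cancellous bone.
inner = ndimage.binary_erosion(mask, iterations=3)

# vCanGrayscale: one mean grayscale value for the cancellous compartment.
vCanGrayscale = volume[inner].mean()
print(f"vCanGrayscale = {vCanGrayscale:.1f}")
```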
In addition, because identifying the border between the cortical and cancellous bone of a vertebral body is difficult, we first segmented the vertebral body (including the inner cancellous bone and outer cortical layer) and then eroded the segmentation by 3 voxels (0.3 mm) to exclude the cortical bone. The vCanGrayscale of the fifth lumbar vertebral body could then be obtained (Fig. 2b). In addition, using an approach analogous to the micro-CT analysis, the TtAr, CtAr, CtAr/TtAr, and CtTh of the midshaft of the femurs (in the same region as that used for the micro-CT scans) were calculated (Fig. 2c). All measurements in the dental CT approach were calculated using ImageJ.

Statistical analysis
The mean, standard deviation, and coefficient of variation (CV) were calculated for all measurements. The Shapiro-Wilk test was used to determine whether the measurements conformed to a normal distribution. The Pearson correlation coefficients (r values) between the vCanGrayscale measurements obtained using dental CT and the four trabecular bone microarchitecture parameters (BV/TV, TbN, TbTh, and TbSp) were calculated. Paired-sample t tests were used to compare the differences between the four cortical bone morphology parameters (TtAr, CtAr, CtAr/TtAr, and CtTh) measured using micro-CT and those measured using dental CT. In addition, the Pearson correlation coefficients (r values) between these parameters measured using the two approaches were calculated. All statistical analyses were performed using OriginPro software (version 8, OriginLab, Northampton, MA, USA). The level of statistical significance was set at p < 0.05.

Relation between the trabecular bone microarchitecture parameters measured using micro-CT and dental CT
The trabecular bone parameters of the fifth vertebral body measured using micro-CT and dental CT are listed in Table 2. All of the experimental data were normally distributed based on the Shapiro-Wilk test. In the dental CT approach, the CV of the vCanGrayscale was 32.112%, higher than the CVs of the four trabecular bone microarchitecture parameters (24.509%, 6.974%, 19.088%, and 20.769% for BV/TV, TbTh, TbSp, and TbN, respectively). The correlation coefficient between the grayscale value measured using dental CT and the BV/TV measured using micro-CT was 0.84 (p < 0.01), essentially the same as the correlation coefficient of 0.84 (p < 0.01) between the grayscale value and the TbTh measured using micro-CT (Fig. 3ab). Both coefficients indicate highly positive correlations. In addition, the correlation coefficients between the grayscale value measured using dental CT and the TbN and TbSp measured using micro-CT were 0.67 (p = 0.07) and −0.38 (p = 0.36), respectively; both were nonsignificant (p > 0.05) (Fig. 3cd).

Relation between the cortical bone morphology parameters measured using micro-CT and dental CT
The cortical bone morphology parameters of the femoral diaphysis measured using micro-CT and dental CT are listed in Table 3. All of the experimental data were normally distributed based on the Shapiro-Wilk test. The TtAr measured using micro-CT (9.22 ± 0.47 mm²) was significantly (p < 0.01) larger than that measured using dental CT (8.82 ± 0.59 mm²).
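A minimal sketch of this statistical pipeline, assuming the paired measurements are available as arrays (the raw per-specimen values are only summarized in the paper's tables, so placeholder data are generated here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical paired measurements for 16 femurs: TtAr by micro-CT and dental CT.
ttar_micro = rng.normal(9.22, 0.47, 16)
ttar_dental = ttar_micro - 0.4 + rng.normal(0, 0.2, 16)

# Normality check used to justify the parametric tests.
print("Shapiro-Wilk:", stats.shapiro(ttar_micro))

# Pearson correlation between the two modalities.
r, p = stats.pearsonr(ttar_micro, ttar_dental)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Paired-sample t test for a systematic difference between modalities.
t, p = stats.ttest_rel(ttar_micro, ttar_dental)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Coefficient of variation, as reported for each measurement.
cv = ttar_micro.std(ddof=1) / ttar_micro.mean() * 100
print(f"CV = {cv:.3f}%")
```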
Relation between the trabecular bone microarchitecture parameters measured using micro-CT and dental CT The trabecular bone parameters of the fifth vertebral body measured using micro-CT and dental CT are listed in Table 2. All of the experimental data were normally distributed based on the Shapiro-Wilk test analysis. In the dental CT approach, the CV of the vCanGrayscale was 32.112%, which is higher than the CVs of the four trabecular bone microarchitecture parameters (24.509%, 6.974%, 19.088%, and 20.769% for BV/TV, TbTh, TbSp, and TbN, respectively). The correlation coefficient between the grayscale measured using dental CT and the BV/TV measured using micro-CT was 0.84 (p < 0.01), essentially the same as the correlation coefficient between the grayscale measured using dental CT and the TbTh measured using micro-CT, which was also 0.84 (p < 0.01) (Fig. 3a, b). These two coefficient values (both equal to 0.84) indicate highly positive correlations. In addition, the correlation coefficients between the grayscale measured using dental CT and the TbN and TbSp measured using micro-CT were 0.67 (p = 0.07) and −0.38 (p = 0.36), respectively. However, both correlation coefficients were nonsignificant (p > 0.05) (Fig. 3c, d). Relation between the cortical bone morphology parameters measured using micro-CT and dental CT The cortical bone morphology parameters of the femoral diaphysis measured using micro-CT and dental CT are listed in Table 3. All of the experimental data were normally distributed based on the Shapiro-Wilk test analysis. The TtAr parameter measured using micro-CT (9.22 ± 0.47 mm²) was significantly (p < 0.01) larger than that measured using dental CT (8.82 ± 0.59 mm²). However, the CtAr/TtAr and CtTh parameters measured using dental CT (0.71 ± 0.05 for CtAr/TtAr and 0.87 ± 0.07 mm for CtTh) were significantly (p < 0.01) larger than those measured using micro-CT (0.66 ± 0.04 for CtAr/TtAr and 0.71 ± 0.03 mm for CtTh). For the CtAr parameter, no significant difference between the two approaches was observed (6.28 ± 0.61 mm² for dental CT and 6.11 ± 0.34 mm² for micro-CT). The correlation coefficients between the TtAr, CtAr, CtAr/TtAr, and CtTh measured using micro-CT and dental CT were 0.90 (p < 0.01), 0.76 (p < 0.01), 0.79 (p < 0.01), and 0.71 (p < 0.01), respectively (Fig. 4). All of these values indicated highly positive correlations. Discussion Measuring bone quality, quantity, and strength is a clinically crucial topic. Although micro-CT can be used to obtain trabecular bone microarchitecture and cortical bone morphology, this technology cannot be employed to measure human bones because of the size limitations of such devices. Dental CT has become widely used in recent years. However, most previous studies on adopting dental CT to assess bone quality and bone quantity have been mainly concerned with presurgical dental implant assessments. According to our extensive literature search, no previous study has used dental CT to assess cortical bone morphology or cancellous bone density outside the dental field. In this study, the vCanGrayscale measured using dental CT was highly correlated with the trabecular bone microarchitecture parameters (BV/TV and TbTh) measured using micro-CT. Similarly, the assessments of the four cortical bone morphology parameters of the femoral diaphysis in rats conducted using micro-CT and dental CT were highly correlated (r ranged from 0.71 to 0.90). In laboratory experiments, the femoral diaphysis is one of the most frequently examined regions for measuring cortical bone strength because the region can be used to conduct three-point and four-point bending tests to measure the structural stiffness of the cortical bone [16,25]. In addition, the femoral head and spinal vertebral body are generally selected to represent cancellous bone tissue [6,8,10,11,13]. This study adopted the fifth vertebral body of rats instead of the femoral head mainly because a rat's femur is small and contains insufficient cancellous bone. To prevent the image quality from being affected by the partial volume effects of dental CT, the fifth vertebral body of rats was used in this study as the sample of cancellous bone tissue. However, because the exterior of a vertebral body is covered by a thin cortical layer and the interior is filled with cancellous bone tissue (Fig. 1b), dental CT with limited resolution cannot be used to accurately determine the trabecular bone microarchitecture in cancellous bone. Therefore, we first segmented the entire vertebral body, and then eroded the segments by 0.3 mm to represent the cancellous tissue inside the vertebral body. Since Layton et al. [26] pioneered the use of micro-CT in analyzing the bone morphology of guinea pigs in 1988, micro-CT has been considered the gold standard for assessing bone morphology and microstructures [15]. Numerous bone parameters can be measured using micro-CT. In 2010, Bouxsein et al. [15] indicated that BV/TV, TbTh, TbSp, and TbN are the most crucial indices of trabecular bone microarchitecture, and TtAr, CtAr, CtAr/TtAr, and CtTh are the most critical indices of cortical bone morphology. Therefore, these eight parameters were adopted as the indices for assessing trabecular bone microarchitecture and cortical bone morphology in this study. According to a previous study, a rat's trabecular bone thickness and trabecular bone separation are approximately 50 µm and 150 µm, respectively [27].
The resolution of the micro-CT used in this study was 18.3 µm, which was sufficient for measuring trabecular bone microarchitecture. However, the resolution of the dental CT employed in this study was 100 µm, at which the trabecular bone microarchitecture could not be resolved. Therefore, we adopted the vCanGrayscale to represent the cancellous bone tissue of the fifth vertebral body in rats. Regarding the trabecular bone microarchitecture parameters of the cancellous bone, previous studies have used the femoral head of a rat as the typical region of interest [6,10]. Because of the limited resolution of dental CT, the fifth lumbar vertebral body of rats was used in this experiment. In a comparison between this study and previous studies in which the third to fifth lumbar vertebral bodies of rats were measured, the BV/TV values (22.95% ± 5.63%) of the rats' fifth lumbar vertebral body scanned using micro-CT in this study were lower than those (29.18% ± 3.6%) measured by Ito et al. [8] or those (37.6% ± 5.0%) measured by Yao et al. [13]. This difference existed mainly because the 4-month-old rats used in this study were younger than the 10- and 6-month-old rats selected in the studies of Ito et al. [8] and Yao et al. [13], respectively. In addition, although the TbSp values in this study were fairly close to those reported by Ma et al. [11], the TbTh values in this study were greater than those obtained in previous research [8,11,13]. These variations may have resulted from differences in the rat species used, rat age, and experimental design. Regarding the cortical bone morphology parameters, micro-CT has rarely been applied to measure the four cortical bone morphology parameters (TtAr, CtAr, CtAr/TtAr, and CtTh) of a rat's femoral diaphysis. In this experiment, the TtAr and CtAr values of the femoral diaphysis of rats measured using micro-CT were 9.22 ± 0.47 mm² and 6.11 ± 0.34 mm², respectively, which were only approximately half of the values (18.64 ± 0.45 mm² and 11.67 ± 0.21 mm², respectively) obtained by Sibilia et al. [28]. However, this study and the study of Sibilia et al. reported similar CtAr/TtAr values. In addition, the CtTh values (0.71 ± 0.03 mm) obtained in this study were lower than those (1.095 ± 0.03 mm) of Sibilia et al. In addition to the aforementioned differences in rat age, rat species, and experimental design, the partial volume effects of peripheral quantitative computed tomography (pQCT; resolution = 70 µm) used by Sibilia et al. [28] may have caused these discrepancies (see Table 3, which lists the four cortical bone morphology parameters, TtAr, CtAr, CtAr/TtAr, and CtTh, of the femoral diaphysis measured by micro-CT and dental CT). In recent years, dental CT has been widely used in dental clinical practice, mainly because dental CT is not only inexpensive and involves low radiation doses [18,29,30], but also possesses higher spatial resolution for precisely measuring bone shapes and contours than traditional computed tomography does. Hashimoto et al. [31] indicated that both the magnification and distortion of dental CT are extremely small (error < 0.1 mm). In addition to employing dental CT to observe tissue shapes and contours, several recent studies on the application of grayscale values measured using dental CT for determining bone density have demonstrated that bone quality and quantity have a specific relationship with grayscale values [22,23].
However, because dental CT is generally used in dental clinics and by dentists or dental radiologists, most studies have been restricted to the dental field, which consequently limits the applicability of dental CT to other orthopedic fields. Therefore, this study adopted dental CT to measure the cortical bone morphology parameters of a rat's femoral diaphysis and the grayscale values of the cancellous bone of a rat's fifth vertebral body. Previous studies have indicated that the image quality of dental CT is less stable than that of traditional computed tomography, that the Hounsfield unit scale is not a suitable image unit in dental CT, and that the image quality can be affected by the scanning position [19,32]. Nevertheless, flat panel detectors have recently been used in most dental CT devices, which has substantially improved the image quality of dental CT [24]. Nomura et al. [33] also showed that the grayscale values measured using dental CT have a strong correlation with the concentrations of iodine solutions. Furthermore, Nomura et al. [24] demonstrated that dental CT can be employed to determine bone mineral content. In addition, dental CT has been adopted in numerous studies for assessing the bone quality and quantity of alveolar bone before dental implant placement. In this study, the vCanGrayscale values measured using dental CT represented the cancellous bone density of the fifth vertebral body and exhibited high correlations (r) with the trabecular bone microarchitecture parameters, specifically BV/TV (0.84) and TbTh (0.84), measured using micro-CT. Therefore, dental CT can be clinically applied to scan patients' bones and indirectly estimate the trabecular bone microarchitecture (particularly the parameters BV/TV and TbTh) by calculating the grayscale values of cancellous bone density. Additionally, although the vCanGrayscale values were moderately correlated with the TbN values (r = 0.67), the correlation was not statistically significant (p = 0.07). Moreover, no correlation existed between the vCanGrayscale and TbSp values. Several scholars have adopted QCT or pQCT to measure the morphology parameters of the femur [2,3,5,34], and have shown that QCT and pQCT can provide not only bone densitometric parameters, as DXA does, but also bone geometric parameters for accurately predicting bone strength. However, few studies have involved the use of dental CT for measuring the cortical bone morphology parameters of long bone diaphyses. In this experiment, the TtAr values (8.82 ± 0.59 mm²) measured using dental CT were smaller than those (9.22 ± 0.47 mm²) measured using micro-CT; conversely, the CtAr/TtAr and CtTh values (0.71 ± 0.05 and 0.87 ± 0.07 mm, respectively) measured using dental CT were greater than those (0.66 ± 0.04 and 0.71 ± 0.01 mm, respectively) measured using micro-CT. Although differences existed in the absolute values measured using micro-CT and dental CT, the values obtained using the two methods exhibited strong correlations. These absolute differences should decrease considerably when the methods are applied in human clinical trials (thicker bones). However, further experiments are required to verify this assumption. This study has several limitations. Because fresh human cadaver bones are difficult to obtain, rat bones, which are most commonly used in experiments, were selected as the experimental specimens. Nevertheless, additional human bone experiments are required before these methods are applied to human bodies.
In this study, only the femoral diaphysis was used to assess cortical bone, and cancellous bone was evaluated based only on the fifth vertebral body. Consequently, the effectiveness of using other bone regions to assess cortical and cancellous bone by using dental CT still requires further evaluation. In addition, all of the bone specimens were scanned in vitro by using micro-CT and dental CT, which generated clearer images than in vivo bone scanning did. Nonetheless, further experimental investigations are required for analyzing such differences. Conclusion Based on the experimental setup and limitations, the following conclusions were derived from this study: 1. High correlations were observed between the vCanGrayscale (grayscale value of the cancellous bone of the fifth vertebral body) measured using dental CT and the trabecular bone microarchitecture parameters (BV/TV and TbTh) measured using micro-CT. 2. The absolute values of the cortical bone morphology parameters (TtAr, CtAr/TtAr, and CtTh) may differ between the measurements obtained using the dental CT and micro-CT approaches. However, high correlations between the four parameters (TtAr, CtAr, CtAr/TtAr, and CtTh) measured using micro-CT and dental CT were demonstrated.
Geometric Formulation of Unique Quantum Stress Fields
We present a derivation of the stress field for an interacting quantum system within the framework of local density functional theory. The formulation is geometric in nature and exploits the relationship between the strain tensor field and the Riemannian metric tensor field. The resultant expression obtained for the stress field is gauge-invariant with respect to choice of energy density, and therefore provides a unique, well-defined quantity. To illustrate this formalism, we compute the pressure field for two phases of solid molecular hydrogen. The stress, or the energetic response to deformation or strain, plays an important role in linking the physical properties of a material (e.g. strength, toughness) with the behavior of its microstructure. In addition, the spatial distribution of stress is an invaluable tool for continuum modeling of the response of materials. The stress concept has been applied at atomistic scales as well. Over the last fifteen years, there has been a continuing trend toward understanding various structural and quantum-mechanical phenomena in materials in terms of their response to stress [1]. For example, the residual stress at equilibrium has been used to assess the structural stability of systems containing surfaces or strained interfaces. It has been demonstrated that the desire to minimize surface stress can give rise to reconstructions on high symmetry surfaces [2][3][4][5][6][7][8], and the stability of epitaxially grown bimetallic systems has been attributed to the formation of incommensurate overlayers, defects, and dislocations which minimize the stress near the metal-metal interface [9,10]. The stress can have significant effects on chemical reactivity as well. It has been shown that small molecule chemisorption energies and reaction barriers on certain strained metal and strained semiconductor surfaces are quite different from those on the unstrained surface [11,12]. Formally, studies of the above phenomena must include a quantum-mechanical description of the system's electronic degrees of freedom. Therefore, one must consider how a stress is defined quantum mechanically. Methods for calculating the stress in quantum mechanical systems have been developed since the birth of quantum theory itself [13]. However, research in developing formalisms for determining the quantum stress in solid-state systems has recently been revitalized. This is mainly due to ever-increasing opportunities to perform accurate and efficient quantum-mechanical calculations on systems which exhibit stress mediated phenomena. The stress is a rank-two tensor quantity, usually taken to be symmetric and therefore torque-free. Two useful representations of the stress tensor are the volume-averaged or total stress, T αβ , and the spatially varying stress field σ αβ (x). The two representations are related since the total stress for a particular region in a system is the stress field integrated over the volume. Nielsen and Martin developed a formalism for calculating the total quantum stress in periodic systems [14]. They define the total stress as the variation of the total ground-state energy with respect to a uniform scaling of the entire system. This uniform scaling corresponds to a homogeneous or averaged strain over the entire system. They further demonstrate that the total quantum stress is a unique and well-defined physical quantity.
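In symbols, and up to sign convention, this definition can be written as T αβ = (1/Ω) ∂E tot /∂ε αβ , where Ω is the system volume and ε αβ is the homogeneous strain applied through the uniform scaling x′ α = (δ αβ + ε αβ ) x β ; this is a standard paraphrase of the Nielsen-Martin definition, not a formula quoted from Ref. [14].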
Their formulation has been successfully implemented to study a variety of solid state systems [2,3]. Other formalisms for determining the total quantum stress have been created as well [2,15,16]. Although these formalisms have provided important tools for studying quantum stress, the stress field is a more useful quantity that contains important information regarding the distribution of the stress throughout the system. A knowledge of the spatial dependence of the quantum stress is vital if one wishes to predict the spatial extent of structural modifications or understand phenomena at interfaces in complex heterogeneous systems. However, certain definitions of the quantum stress field suggest that it can only be specified up to a gauge. (This ambiguity manifests itself in classical atomistic models as well.) It has therefore been asserted that the quantum stress field is not a well-defined physical quantity, even though physical intuition may suggest otherwise. A traditional way to develop a quantum stress field formalism is to consider the stress field's relationship with the force field. From this perspective, the stress field can be defined as any rank-two tensor field whose divergence is the force field of the system: ∂ β σ αβ (x) = F α (x). (Note that the Einstein summation convention for repeated indices is used throughout the Letter.) One can add to σ αβ a gauge term of the form ∂ γ A αβγ , where A αβγ is any tensor field antisymmetric in β and γ, and recover the same force field, thereby demonstrating the non-uniqueness of this stress field definition. General formulations for computing non-gauge-invariant stress fields in quantum many-body systems have been derived by Nielsen and Martin, Folland, Ziesche and co-workers, and Godfrey [14,[17][18][19]. There have been several attempts to overcome this problem of non-uniqueness. For example, the stress field formalism of Chen and co-workers has been applied to numerous solid state systems to determine the local pressure around a region [20]. However, their method assumes that the potential is pairwise only. Several ab-initio quantum stress field formulations have been developed as well. Ramer and co-workers developed a method to calculate the resultant stress field from an induced homogeneous strain [21]. They incorporate the additional constraint that the field must be the smoothest fit to the ionic forces. This method cannot be used to calculate the residual stress field at equilibrium, nor can it determine the energy dependence on strains which do not have the periodicity of the unit cell. Filippetti and Fiorentini developed a formulation of the stress field based on the energy density formalism of Chetty and Martin [22,23]. Since this formulation is based explicitly on the energy density, which is not gauge-invariant, the resultant stress field is not unique. Mistura succeeded in developing a general gauge-invariant formalism for pressure tensor fields of inhomogeneous fluids within classical statistical models using a Riemannian geometric approach [24]. This Letter extends Mistura's work, developing a Riemannian geometric formalism for computing gauge-invariant stress fields in quantum systems within the local density approximation (LDA) of density functional theory (DFT). We show that the response of the total ground-state energy of a quantum system to a local spatially varying strain is a unique and physically meaningful field quantity which can be determined at every point in the system. Using a procedure well known in continuum theory (see for example Ref.
[25]), one can formally relate a Riemannian metric tensor field g αβ (x) with the strain field ε αβ (x) through g αβ = δ αβ + ∂ α u β + ∂ β u α + ∂ α u γ ∂ β u γ = δ αβ + 2ε αβ , where ∂ α ≡ ∂/∂x α [26]. Here the strain field is defined in terms of a vector displacement field u α which maps coordinates x α in the non-deformed system to the coordinates x′ α = x α + u α in the deformed system. The stress field σ αβ (x) and strain field obey a virtual work theorem expressing the energy response to variations in the strain, δE = ∫ d³x √g σ αβ δε αβ , where g(x) is the determinant of g αβ (x). It can be shown that the stress field is related to the functional derivative of the energy with respect to the metric field [24]: σ αβ = (2/√g) δE/δg αβ . We now derive the quantum stress field of a many-electron system in the presence of a fixed set of classical positively charged ions using local density functional theory [27,28]. The ground state electronic charge density of the system is written as n(x) = Σ i |φ i (x)|², where the φ i are single-particle orthonormal wavefunctions. For this derivation, we assume orbitals with fixed integer occupation numbers. The extension to metals with Fermi fillings is straightforward, simply necessitating use of the Mermin functional instead of the total energy [29]. The total charge density of the system can be written as a sum over all ionic charges and n, ρ(x) = Σ i Z i δ(x − R i )/√g − n(x), where Z i is the charge of the i-th ion located at position R i , and the presence of √g insures proper normalization of the delta function. The energy of the system can be written as a constrained functional (Eq. 7). Here E k is the single particle kinetic energy, E Coulomb is the classical Coulomb interaction between the total charge density and itself, and E xc is the exchange-correlation energy of the electrons. The appearance of the last term in Eq. 7 is due to the orthonormality constraint of the orbitals. (We choose a unitary transformation on {φ i } which enforces orthogonality.) One can express E as an integral over an energy density [23]. The choice of energy density gauge will not affect the derived stress field, since all our results depend only on the total energy E. For convenience, we express the energy terms in Eq. 7 in terms of the fields F α and ε LDA , where F α = −∂ α V is the electric field due to the Coulomb potential V generated by ρ, and ε LDA (n) is the LDA exchange-correlation energy density. To obtain the electronic ground-state energy, we require δE/δφ* i = 0 with the additional constraints of a fixed metric (δg αβ = 0) and a fixed ionic charge density (δρ = −δn). This implies that the orbitals must obey Euler-Lagrange equations (Eq. 9), which can be considered the Kohn-Sham equations in curvilinear coordinates. Also, a least-action principle for E Coulomb requires that ρ and V obey the Poisson equation (Eq. 10). We now vary the total energy with respect to the metric. It can be proven that we do not need to consider variations in the electronic wavefunctions, charge density, and potentials, since all such variations would vanish due to Eq. 9 and Eq. 10. This is the same principle used in the derivation of the Hellmann-Feynman force theorem and of the energy-momentum tensor (the variation of the action with respect to the metric) in general relativity [30][31][32]. If a different electromagnetic gauge is chosen, variations with respect to the potential V would have to be computed explicitly; this change has no impact on the stress field in Eq. 11. Performing the variation of the total energy with respect to the metric gives the stress field in local density functional theory (Eq. 11), where we have used the relation ∂√g/∂g αβ = −(1/2)√g g αβ . Using Eq. 9, we can rewrite Eq. 11 as Eq. 12.
When Eq. 12 is evaluated at the Euclidean metric (g αβ = δ αβ ), it gives the stress field at zero applied strain, and the {φ i } are then solutions to the standard Kohn-Sham equations. From here on, we will refer to σ αβ with an implied evaluation at the Euclidean metric. It is important to note several key features of the form of σ αβ . First, the Coulombic contribution to the quantum stress field is equivalent to the classical Maxwell stress field. This Coulombic term can be obtained from Filippetti and Fiorentini's formalism in Ref. [22] if one chooses the Maxwell gauge. Also, the contribution of the exchange-correlation energy to our stress field appears only in the diagonal (pressure-like) terms, which is the proper behavior for local density functionals [14] and is identical to the exchange-correlation stress derived in Ref. [22]. However, the kinetic contribution to Eq. 12 is unique to our derivation. Our stress field contains diagonal terms which are similar to the symmetric and antisymmetric kinetic energy densities. By integrating the stress field over all space, we can obtain the total stress T αβ (Eq. 13), which is identical to the expression derived by Nielsen and Martin [14]. In order to demonstrate the utility of our stress field formalism, we compute within DFT the pressure field, (1/3)(σ 11 + σ 22 + σ 33 ), for two phases of solid molecular hydrogen under an external hydrostatic pressure of 50 GPa [33]. Both structures consist of stacked two-dimensional triangular lattices of hydrogen molecules, with the molecular axis parallel to the stacking direction and a repeat unit of two layers. The m-hcp structure has alternating layers shifted so that each hydrogen molecule is directly above triangular hollow sites in the neighboring layers. The second structure (belonging to the Cmca space group) has a different shift, so that each molecule lies directly above midpoints between nearest-neighbor pairs of molecules in adjacent layers [34]. The energetics and electronic properties of both structures have been studied extensively from first principles [35,36]. Examination of the pressure field permits us to rationalize the energy ordering of these structures. The m-hcp structure is energetically favored by 60 meV/molecule. Figure 1A shows a contour plot of the pressure field for the Cmca structure. The pressure is tensile (greater than zero) throughout the interstitial region, indicating that contraction is locally favorable. The pressure is greatest in the volume directly above and below each molecule, averaging 3 eV/Å³. This implies that the system would energetically favor increased intermolecular coordination. In Figure 1B we show a similar plot for the m-hcp structure. Again the pressure within the interstitial region is tensile. However, the pressure field has significantly rearranged, and the pressures in the regions above and below molecules have been reduced to approximately 2.25 eV/Å³. It is also clear from the charge density plots (Figures 1C and 1D) that the reduction in pressure is correlated with an increase in bonding between molecules and increased charge delocalization. The pressure fields within the molecules are compressive, and they are several orders of magnitude larger than the interstitial features. However, because these large fields are very similar in both structural phases (and in the free molecule), they are not important for understanding relative phase stability.
Thus, changes in the pressure stress field highlight regions and charge density features that contribute to favorable energetic changes. We have developed a formulation for unambiguously determining the stress field in an interacting quantum system described by local density functional theory. The resultant expression for the stress field is gauge-invariant with respect to choice of energy density and can be obtained via a method analogous to the computation of Hellmann-Feynman forces. Our application of this formalism to solid molecular hydrogen demonstrates that a stress field analysis can associate energetics with particular microscopic structural features of a material. This stress field formulation has the potential to become an invaluable aid to the understanding of structural phenomena in complex solid-state systems.
Figure 1 caption (partial): the vertical axis is the stacking direction for the layers, and the horizontal axis is the direction along which alternating layers are shifted. The plots are 6.794 Å high and 7.206 Å wide. Ten contours are shown, over a range of 0-3 eV/Å³ for the pressure fields, and 0-2.5 e/Å³ for the charge densities.
Effect of a mixed reality-based intervention on arm, hand, and finger function on chronic stroke
Background Virtual and mixed reality systems have been suggested to promote motor recovery after stroke. Building on the existing evidence on motor learning, we have developed a portable and low-cost mixed reality tabletop system that transforms a conventional table into a virtual environment for upper limb rehabilitation. The system allows intensive and customized training of a wide range of arm, hand, and finger movements and enables interaction with tangible objects, while providing audiovisual feedback of the participants' performance in gamified tasks. This study evaluates the clinical effectiveness and the acceptance of an experimental intervention with the system in chronic stroke survivors. Methods Thirty individuals with stroke were included in a reversal (A-B-A) study. Phase A consisted of 30 sessions of conventional physical therapy. Phase B consisted of 30 training sessions with the experimental system. Both interventions involved flexion and extension of the elbow, wrist, and fingers, and grasping of different objects. Sessions were 45-min long and were administered three to five days a week. The body structures (Modified Ashworth Scale), functions (Motricity Index, Fugl-Meyer Assessment Scale), activities (Manual Function Test, Wolf Motor Function Test, Box and Blocks Test, Nine Hole Peg Test), and participation (Motor Activity Log) were assessed before and after each phase. Acceptance of the system was also assessed after phase B (System Usability Scale, Intrinsic Motivation Inventory). Results Significant improvement was detected after the intervention with the system in the activity domain, both in arm function measured by the Wolf Motor Function Test (p < 0.01) and in finger dexterity measured by the Box and Blocks Test (p < 0.01) and the Nine Hole Peg Test (p < 0.01), and in participation (p < 0.01), which was maintained to the end of the study. The experimental system was reported as highly usable, enjoyable, and motivating. Conclusions Our results support the clinical effectiveness of mixed reality interventions that satisfy the motor learning principles for upper limb rehabilitation in chronic stroke survivors. This characteristic, together with the low cost of the system, its portability, and its acceptance, could promote the integration of these systems in the clinical practice as an alternative to more expensive systems, such as robotic instruments. Electronic supplementary material The online version of this article (doi:10.1186/s12984-016-0153-6) contains supplementary material, which is available to authorized users. Background Motor impairments are a common consequence of stroke and a major cause of disability [1]. Specifically, upper limb paresis is among the most significant deficits and represents an important obstacle for independence [2]. Impairment of upper limb motor function is present in more than 80 % of stroke survivors, and moderate dexterity after six months is only expected in 30 to 40 % of the cases [3]. It is commonly assumed that recovery of motor function after a brain injury involves neural reorganization of spared areas in both hemispheres to take over functions previously driven by the injured areas [4]. In fact, brain plasticity and behavior are interrelated: on one hand, behavior is a result of reorganized brain activity [1,4]; on the other hand, adaptive neural reorganization is driven by skill-dependent experiences and behavior [4].
Nevertheless, reorganization is not driven by mere repetition. It only occurs when the experience implies learning [4]. Therefore, it can be deduced that motor rehabilitation should focus on driving plasticity through experiences that challenge the motor skills of the patients. In addition, motor learning principles, such as intensity, repetition, task-orientation, and feedback, have proven to modulate the functional improvement after stroke [5][6][7][8][9]. Virtual Reality (VR) is an especially interesting research field since it allows the creation of computer-generated environments and provides customized experiences involving different sensory channels, commonly sight, hearing, and/or touch [10]. An increasing number of studies report promising results of its application to motor rehabilitation after stroke [10,11], specifically for the upper limb [11][12][13]. First, movement kinematics when reaching, grasping, transporting, and releasing objects in a virtual environment are comparable to those in the physical world, thus suggesting that the training of arm movements in VR can be a feasible alternative [14]. Second, VR has been shown effective at improving upper limb movements for reaching and grasping tasks involving proximal segments and global arm movements, in individuals with stroke in both acute and chronic stages [11,13]. Third, distal fine motor control has also been effectively improved using VR, generally combined with robotic-like devices [2,15,16]. Fourth, controlled trials suggest that VR may be beneficial to improve upper limb function and performance in activities of daily living, to a greater extent than the same dosage of conventional therapy [3]. Finally, mixed-reality systems involving virtual and tangible objects may be useful in improving both functionality and the kinematics of reaching [17,18]. Mixed-reality systems are particularly interesting because they combine interesting features of VR with tangible objects that subjects must manipulate. For instance, proprioceptive feedback has been suggested to exploit multimodal aspects of the observation of goal-oriented movements and the feedback on one's actions [12]. However, clinical research so far with these systems has mainly focused on shoulder and elbow training without specific involvement of hand and finger dexterity. Building on the existing evidence, we have developed a mixed reality system that satisfies the motor learning and neural plasticity principles to promote the rehabilitation of task-directed movements of the paretic upper limb involving hands and fingers. The system fits the motor condition of each subject, allowing the training of a wide spectrum of movements, from gross proximal movements to finger dexterity, while being portable and inexpensive, in contrast to robotic systems. The objective of this paper is twofold: first, to determine the clinical effectiveness of an experimental intervention with the system to improve the motor function of arm, hand, and fingers in individuals with chronic stroke; and second, to determine the acceptance of this intervention as defined by users' ratings of usability and motivation. Subjects All outpatients who had suffered a stroke, presented residual hemiparesis derived from the lesion, and were attending a long-term rehabilitation program in the Brain Injury Service of NISA Hospitals were potential candidates to participate in the study.
Inclusion criteria were 1) age ≥ 35 and < 65 years old; 2) chronicity > 6 months; and 3) no or only slight increase in muscle tone, as defined by the Modified Ashworth Scale [19]. The exclusion criteria were 1) individuals with ataxia or any other cerebellar symptom; 2) orthopedic alterations or pain syndrome of the upper limb; 3) peripheral nerve damage affecting the upper extremities; 4) individuals whose visual or hearing impairment does not allow interaction with the system; and 5) individuals with severe hemispatial neglect. Ethical approval for the study was granted by the Institutional Review Board of NISA Hospitals. All the eligible candidates who agreed to take part in the study were required to provide informed consent. Hardware setting The mixed reality rehabilitation system consisted of a projective tabletop system that allowed multitouch interaction with the hands or via manipulation of tangible objects (Fig. 1). Essentially, the system consisted of a Kinect™ depth sensor (Microsoft®, Redmond, WA, USA) and a projector EB-1720 (Epson®, Suwa, Japan) separated by 8 cm and attached to the upper plane of a rigid frame at 70 cm of height. The system was 95 x 70 x 40 cm and was fully portable. The sensor and the projector pointed down so that when the frame was placed on a table their fields of view overlapped on its surface, thus defining an interaction area of 55 x 40 cm [24]. The system projects a virtual environment on that area, which reacts according to the users' movements, mimicking the interaction with the real world. In each exercise, the required movements of the upper limb segments, fingers, and tangible objects were detected from the depth information of the scene and tracked, and the interaction with the virtual objects was calculated to update the virtual environment (Fig. 2) (See Additional file 1 for more information). Exercises The exercises consisted of a wide range of planar unimanual tasks that involved arm and hand movements, focused on the flexion and extension of the elbow, the wrist, and the metacarpophalangeal joints, and represented tasks that were likely to belong to the participant's motor repertory (prior to the onset), aiming to maximize the relationship with activities of daily living (Fig. 3). The interaction with some exercises required tangible objects of different sizes to be grasped and moved. Handles with different thicknesses were available. Within each exercise, participants had to perform a task (to grate an item, to dial a number, etc.) as many times as possible. The task, in turn, was achieved if a number of repetitions were performed accurately enough within a time interval. The system controlled for compensation during the exercises, requiring those segments not involved in the movement to be fixed in a certain position. For instance, in the grating exercise, the forearm had to remain still and on the table while flexing and extending the wrist. Otherwise, repetitions were not valid (See Additional file 1 for more information). The difficulty of the exercises was determined by adjusting the required speed, number of repetitions, and accuracy of the movements. Before the intervention, therapists defined different levels of difficulty for each exercise by varying these parameters. After each exercise, the success rate was estimated as the percentage of tasks successfully achieved. When the success rate was higher than 80 %, the system automatically increased the level of difficulty. When the success rate was lower than 20 %, the system decreased the level.
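The 80 %/20 % staircase rule just described can be sketched in a few lines; the level bounds below are illustrative assumptions, not parameters reported for the system:

```python
def update_difficulty(level: int, success_rate: float,
                      min_level: int = 1, max_level: int = 10) -> int:
    """Raise the difficulty level when the success rate exceeds 80 %,
    lower it when the rate falls below 20 %, and clamp to the bounds."""
    if success_rate > 0.8:
        return min(level + 1, max_level)
    if success_rate < 0.2:
        return max(level - 1, min_level)
    return level

# Example: 9 of 10 tasks achieved in an exercise -> level rises from 3 to 4.
new_level = update_difficulty(level=3, success_rate=0.9)
```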
Exercises provided audiovisual feedback of the virtual environment and showed information about the remaining time, the repetitions successfully completed, and the previous records achieved by the participant. During the exercises, positive audiovisual reinforcement was provided when a task was achieved. If the task was not achieved, negative feedback was provided. After each exercise, the system reported the success rate achieved. Procedure A reversal (A-B-A) design was chosen to characterize the effects of the experimental intervention and to quantify the maintenance of gains. Phase A consisted of 30 training sessions of conventional physical therapy, and phase B consisted of 30 sessions of an experimental intervention with the mixed reality system. This design allowed us to determine the effects of physical therapy, the effects of the experimental intervention, and the maintenance of gains after it when returning to physical therapy. The duration of both interventions was matched. In both phases, sessions were 45-min long and were administered three to five days a week. All the training sessions were supervised by a physical therapist, who, in case of compensation, provided a tactile cue to correct the performance. No concomitant therapies were administered. The conventional physical therapy intervention included active upper extremity tasks equivalent to those trained by the mixed reality system, which involved the shoulder, elbow, wrist, and fingers and grasping of different items (in the absence of virtual feedback). For example, the exercise that simulated knocking on doors in the mixed reality system (Fig. 3c) was matched with repetition of knocking movements (flexion-extension of the wrist with the forearm still) on square-shaped pieces of paper placed on a table. Two two-minute breaks were allowed after 15 and 30 min from the beginning of the session. The difficulty of the training was determined by a physical therapist in a previous exploratory session. During the intervention, exercises gradually increased in resistance (weights) and in repetitions. The experimental intervention included eight exercises in randomized order (Fig. 3). The duration of the exercises was set to five minutes each. Two-minute breaks were allowed after the third and sixth exercise. The difficulty of the experimental intervention was also initially determined in a previous exploratory session, and was automatically adjusted by the mixed reality system during the intervention or by the physical therapist who supervised the sessions to correct one-time alterations related to pain, motor performance, or inattention. The thickness of the handles of the tangible objects was also determined in the exploratory session to fit the grasp opening of each subject. All the participants were assessed by a physical therapist, who was blind to the design of the study, 1) at the beginning of the initial phase A (A i ); 2) at the end of the initial phase A, which was the beginning of phase B (B i ); 3) at the end of phase B, which was the beginning of the second phase A (B f ); and 4) at the end of the second phase A (A f ).
In accordance with the International Classification of Functioning, Disability and Health [25], the assessment protocol evaluated 1) the body structures, with the Modified Ashworth Scale [26]; 2) the body functions, with a strength test with a dynamometer [27], the Motricity Index, and the upper extremity subscale of the Fugl-Meyer Assessment Scale [28]; 3) the body activities, with the Manual Function Test [29], the Wolf Motor Function Test [30], the Box and Blocks Test [31], and the Nine Hole Peg Test [32]; and 4) the participation, with the Quality of Movement and Amount of Use subscales of the Motor Activity Log [33]. In addition, acceptance of the experimental system was assessed in B f with the System Usability Scale [34] and with four subscales of the Intrinsic Motivation Inventory [35]. The System Usability Scale is a simple ten-item scale that serves as a global assessment of subjective usability. Its items are rated on a Likert scale, and the overall score ranges from 0 to 100. The Intrinsic Motivation Inventory is a multidimensional questionnaire structured into various subscales. Each subscale includes different questions rated on a seven-point Likert scale. In this study, this questionnaire was used to assess participant interest/enjoyment, perceived competence, pressure/tension, and value/usefulness. Scores approaching seven in each subscale represent positive values in terms of motivation, with the exception of the pressure/tension subscale, for which high scores represent high levels of tension. Statistical analysis For each scale and test, scores in all the assessments were compared using repeated measures analyses of variance (ANOVA). ANOVA findings that violated the sphericity assumption were accommodated by Greenhouse and Geisser's conservative degrees-of-freedom adjustment. Post-hoc simple contrasts (Bonferroni) were conducted for each significant time main effect to determine the source of the significant difference. Data were confirmed to have a normal distribution using the Shapiro-Wilk normality test. The α level was set at 0.05 for all analyses (two-sided). All analyses were computed with SPSS for Mac, version 15 (SPSS Inc., Chicago, USA).
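A minimal sketch of the repeated-measures comparison just described, using statsmodels (the data frame is a hypothetical placeholder; note that AnovaRM itself does not apply the Greenhouse-Geisser correction, which would require additional steps):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per participant and assessment
# (Ai, Bi, Bf, Af), with a Wolf Motor Function Test score as the outcome.
data = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "time": ["Ai", "Bi", "Bf", "Af"] * 3,
    "wmft": [42, 43, 49, 50, 35, 36, 41, 41, 50, 51, 57, 58],
})

# One-way repeated-measures ANOVA with assessment time as the within factor.
result = AnovaRM(data, depvar="wmft", subject="subject", within=["time"]).fit()
print(result)
```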
Subjects A cohort of 108 individuals with stroke was examined for eligibility. A sample of 32 participants (29.6 %) satisfied the inclusion criteria and accepted to participate. All of them were enrolled. Two subjects were discharged and dropped out of the study; consequently, their data were not included in the analysis. The final sample (17 men and 13 women) was aged 58.3 ± 10.1 years and had a chronicity of 357.5 ± 270.1 days. Lesions were ischemic (n = 17) or hemorrhagic (n = 13), with a preponderance of right-sided occurrence (n = 17). Ischemic lesions comprised total anterior circulation infarcts (n = 4), partial anterior circulation infarcts (n = 9), and lacunar infarcts (n = 4). Clinical effectiveness Repeated measures ANOVA at every assessment of the clinical trial revealed a significant time effect in most of the scales that assessed the body activities (the Wolf Motor Function Test, the Box and Blocks Test, and the Nine Hole Peg Test) and in the participation, and a strong trend towards significance in the Fugl-Meyer Assessment Scale (Table 1). With respect to these scales, post-hoc analysis showed significant improvement after the experimental intervention (from B i to B f ). However, this improvement was detected neither after the following conventional intervention (from B f to A f ) nor after the previous one (from A i to B i ), except in the Amount of Use subscale of the Motor Activity Log (Fig. 4). No significant differences were detected in either the body structures or functions. Acceptance With regards to usability, scores on the System Usability Scale (79.13 ± 7.54 out of a total score of 100) showed good acceptance of the experimental system. According to the Intrinsic Motivation Inventory, participants reported high levels of interest and enjoyment (5.73 ± 0.79 of 7), found themselves competent (5.21 ± 0.98) but not pressured (1.98 ± 0.58), and considered the intervention useful (6.17 ± 0.69). Discussion This study evaluates the effectiveness and acceptance of a low-cost mixed reality instrument that provides intensive task-oriented exercises for arm, hand, and finger rehabilitation in a population of chronic stroke survivors with hemiparesis. Positive effects of the experimental intervention were detected in both activity and participation, and also influenced the progression of the participants. The significant improvement in timed tests related to activity after the experimental intervention must be highlighted, since task performance is considered indicative of functional improvement in individuals with chronic stroke [36], and since movement speed and quality of movement are interrelated [37]. Our results support previous findings using mixed reality systems in the Wolf Motor Function Test [17]. Interestingly, changes detected by the Wolf Motor Function Test have been reported to be of clinical importance [37]. The strong tendency towards statistical significance detected in the Fugl-Meyer Assessment Scale is also in line with previous reports [17,18]. The different nature of this scale and the Wolf Motor Function Test and the chronicity of our sample could have prevented greater effects. This scale has been shown to be more sensitive in the acute phase [38] and for chronicity of less than six months [39]. However, it may separate motor recovery from functional recovery and, therefore, may not be responsive to functional improvements in chronic populations [40]. The Fugl-Meyer Assessment Scale focuses on multijoint upper extremity function and examines synergy patterns that may no longer form the basis of our intervention [41]. Moreover, it is a 3-point scale and does not differentiate changes in the less affected extremity. In contrast, the Wolf Motor Function Test assesses the performance time of single-joint or interjoint movements, which were frequently engaged in our intervention. The significant improvement in gross manual dexterity, assessed by the Box and Blocks Test, could have been facilitated by an improvement in control of the elbow and wrist synergies and the grasping mechanism promoted by the interaction with tangible objects, which supports previous findings [18]. In addition, the specific training of the flexion and extension of the wrist in different positions and of the metacarpophalangeal and interphalangeal joints promoted by our system could also explain the improvement detected in the Nine Hole Peg Test. It is important to highlight that previous research on stroke survivors involving some robotic systems has shown no improvement after intervention in the Box and Blocks Test [12,42] unless the wrist joint [43] or finger dexterity [44] are specifically trained.
However, these two robotic systems failed to provide improvement reflected in the Nine Hole Peg Test, even in the acute phase [45]. This should highlight the benefits of our system, since it can promote hand dexterity, as measured by the Box and Blocks Test and the Nine Hole Peg Test, while being cheaper and more portable than robotic systems. Although clinical scales do not allow the ultimate distinction between true recovery and behavioural compensation [46,47], the results suggest effective motor learning and motor skill retention derived from the experimental treatment. We hypothesize that the improvement in the clinical condition of the participants could be explained by the nature of the exercises, which satisfied the motor learning and neural plasticity principles. First, exercises were intensive and repetitive, characteristics that have been reported to influence improvement [5]. Second, they represented meaningful tasks specially designed to address functional activities, which has been reported to be of major importance for motor rehabilitation [5,6] and is known to positively affect arm-hand function recovery and motor control in stroke patients [46]. Third, augmented extrinsic feedback, a major aspect of motor learning [6,8,46], was provided during the training through the visual, auditory, and tactile channels. Interestingly, auditory augmentation of visual feedback can be beneficial during the execution of upper limb movements [48]. Fourth, the training drove the subject's attention to the effect of the action, which has been reported to enhance learning [49]. Finally, the difficulty of the training was particularized to each participant in each session, which is essential for motor learning and neural reorganization [6,46,49]. Previous research has found that functional improvement, which has been associated with cortical reorganization in different neuroimaging studies [10,50], can occur at any time [12,51,52]. However, the chronicity of the sample, which ensured that the functional improvement was externally driven by the intervention [1,5], could have limited greater improvement. It is important to highlight, though, that the clinical improvement provided by the experimental intervention was retained after the second A-phase, that is, after returning to physical therapy. The practice under varied conditions promoted by the experimental system could have supported this retention, which has been reported to be a better indicator of motor learning than the performance during or just after the practice [6]. The limited results obtained in the body structure and body function domains may be related to task-specific effects of motor learning [5,46]. In line with the tendency of the last decade to shift the efforts of hand-arm rehabilitation from the function level towards the activity and participation level [46], the mixed reality system was designed to train specific tasks that imply the use of the affected arm, hand, and fingers, without explicit focus on strength or joint movement. This orientation, together with the discrete nature of the Manual Function Test (with scores ranging from 1 to 4), and, again, with the chronicity of the sample, could have prevented significant improvement in these components.
The positive reports on the perception of improvement and on the use of the paretic arm after the experimental intervention, evidenced by the Motor Activity Log, and the high scores on usefulness and enjoyment, evidenced by the Intrinsic Motivation Inventory, could depict a relationship between acceptance of the intervention and its repercussion on daily life. This fact could be explained by the ability of the system to motivate patients, which would support previous studies [12,15,51,52]. Importantly, motivation is believed to be critical for learning [7,49], and is considered one of the basic principles that should be satisfied by any rehabilitation approach [6,9]. Finding the rehabilitation enjoyable is thought to increase the level of engagement, participation, and compliance [15], thus increasing the effectiveness of a rehabilitation program. These results must be interpreted taking into account the limitations of the study. First, the characteristics of the sample are inherently linked to the specialized neurorehabilitation service where the study took place, which could restrict the generalization of the results. Second, no kinematic analysis was performed. Consequently, although compensatory strategies were restricted during the intervention, they were not controlled during the assessment, which could have influenced the performance in the scales and tests. Third, although the physical therapist who assessed the participants' condition did not know the protocol, the therapists who administered and controlled the intervention were not blind. Fourth, the requirements of the system could restrict interaction for some individuals. Participants were required to have enough motor control to actively move the hemiparetic arm, hand, and fingers along the table and enough cognitive and communication skills to understand and follow instructions. Finally, the sample of the study (n = 30) can be considered small, which can also limit the extrapolation of the results. However, the improvement detected in our sample supports the clinical effectiveness of mixed reality interventions that satisfy the motor learning and neural reorganization principles to improve upper extremity motor ability and finger dexterity in chronic stroke survivors. The effectiveness of the system, together with its low cost, its portability, and its acceptance, could promote its integration in the clinical practice as an alternative to more expensive systems, such as robotic instruments. Conclusions The mixed reality intervention was shown to be effective and motivating for the rehabilitation of upper extremity motor ability and manual dexterity in chronic individuals with stroke. The low cost of the system, its portability, and its acceptance could promote its integration in the clinical practice as an alternative to more expensive systems.
Do the Preferences of Healthcare Provider Selection Vary among Rural and Urban Patients with Different Income and Cause Different Outcome?
Background Equal access to healthcare facilities and a high level of quality of care for all patients are important strategies to eliminate disparities in the outcome of care. However, the existing literature regarding how urban or rural dwelling patients with different income levels select healthcare providers is insufficient. The purposes of this study were to examine whether differences in healthcare provider selection exist among urban and rural coronary artery bypass surgery (CABG) patients with different income levels. If so, we further investigated the associated impact on mortality. Methods A retrospective, multilevel study design was conducted using claims data from Taiwan's universal health insurance scheme for 2007-2011. Healthcare providers' performance and patients' travelling distance to hospitals were used to define the patterns of healthcare provider selection. Baron and Kenny's procedure for testing mediation effects was conducted. Results There were 10,108 CABG surgeries included in this study. The results showed that urban dwelling and higher income patients were prone to receive care from better-performance providers. The travelling distance of urban dwelling patients was 15 km shorter, especially when they received better-performance providers' care. The results also showed that differences in healthcare provider selection and mortality rates existed between rural and urban dwelling patients with different income levels. After testing for mediation, the results showed that healthcare provider selection partially mediated the relationships between patients' residential areas with different income levels and 30-day mortality. Conclusion Preferences in healthcare provider selection vary among rural and urban patients with different income levels, and such differences partially mediated the outcome of care. Health authorities should pay attention to this issue and propose appropriate solutions to eliminate the disparity in the outcome of CABG care. Introduction Equal access to health facilities and an equal level of high-quality care for all patients who present to the healthcare system with the same clinical indications, regardless of race, ethnicity, gender, or socioeconomic status, are important strategies to eliminate disparities in the outcome of care in every country [1]. Differences in health outcomes between income levels and residence locations have been documented [2][3][4][5], with the existing literature pointing out that low-income or rural dwelling patients are more likely to receive sub-optimal care and have worse outcomes [6,7]. However, the existing literature regarding how urban or rural dwelling patients with different income levels select healthcare providers is insufficient. In Taiwan, the National Health Insurance Scheme was established in 1995 and covers 99% of the population. People in Taiwan enjoy full access to medical care; barriers to health care have been greatly reduced and, in many respects, no longer exist [8]. Although previous studies have demonstrated that the National Health Insurance Scheme has brought several positive effects on health [9], health disparities among rural and urban dwellers still exist [10], and this is worthy of in-depth investigation. Coronary artery bypass surgery (CABG) is a high-risk surgery with a mortality rate of around 5%.
Many healthcare-related agencies and quality-indicator projects have selected this surgery to monitor provider performance, e.g. the OECD Health Care Quality Indicators project and the Agency for Healthcare Research and Quality (AHRQ) in the U.S. Therefore, the current study takes CABG as an example to investigate whether differences in healthcare provider selection exist among urban- and rural-dwelling patients with different income levels and, if so, whether the relationships between residential area combined with income level and mortality rates are mediated by such differences.

Methods

Study design

This retrospective, cross-sectional study adopted a multilevel design to examine the relationships between the urbanization level of patients' residence combined with income level, healthcare provider selection, and treatment outcomes among CABG patients, after adjusting for patient-, surgeon-, and hospital-level covariates.

Database

We used data from the Taiwan National Health Insurance Research Database (NHIRD) between 2007 and 2011. The NHIRD includes all the original outpatient, ambulatory and inpatient care claims data and registration files for beneficiaries enrolled under the NHI program. This database covers the 23 million enrollees in the NHI program (approximately 99% of Taiwan's population). The NHI claims data provide de-identified, secondary patient-level demographic and administrative information and discharge status for every case, and the database can be accessed by the public for research purposes.

Ethics Statement

The protocol for this study was approved by the Institutional Review Board of the National Taiwan University Hospital (protocol #201412074W). The dataset used in this study was secondary data; all information was de-identified by the data owners.

Study population and exclusion criteria

We restricted our analysis to hospitalization records in which patients had a procedure code indicating a CABG (ICD-9-CM procedure codes 36.1x-36.2x) [11] from January 1, 2008 to September 30, 2011. We excluded patients under the age of 18 years (n = 14) to restrict our evaluation to an adult population. Hospitalization records with missing data for gender (n = 3) were excluded. In addition, we excluded patients who received surgeries from surgeons who had not performed any CABG surgeries in the previous year (n = 27), to homogenize the variation in surgeons' performance.

Definition of variables

Dependent variable: any-cause 30-day mortality. The dependent variable in this study was 30-day mortality from any cause after hospitalization for CABG surgery; 30-day mortality was determined by linking inpatient admission records with the withdrawal certificate records. The only reason for being withdrawn from NHI coverage within 30 days of hospital admission would be death; withdrawal dates are the same as the date of death according to the death certificate. [12,13]

Independent variable. Urbanization level: Patients' residential areas were linked to urbanization level. However, Taiwan's NHI is an occupation-based social insurance scheme, and employees of large enterprises might be enrolled using the address of their company's headquarters rather than their actual address of residence. Following Chang et al, [14] the actual location of each patient was assumed to be the area where the individual had the most outpatient and pharmacy visits.
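This modal-location rule is straightforward to express as a claims-level computation. Below is a minimal sketch in Python/pandas, assuming a hypothetical visits table with one row per outpatient or pharmacy visit; the column names and values are invented for illustration and do not correspond to actual NHIRD field names.

```python
# Minimal sketch of the residence-assignment rule: each patient's assumed
# residence is the township where they had the most outpatient/pharmacy visits.
# The table below is hypothetical; real NHIRD fields differ.
import pandas as pd

visits = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2],
    "township":   ["A", "A", "B", "C", "D", "D"],
})

# Count visits per (patient, township) pair.
counts = (
    visits.groupby(["patient_id", "township"])
          .size()
          .rename("n_visits")
          .reset_index()
)

# Keep, for each patient, the township with the highest visit count
# (ties are broken arbitrarily here; the study does not specify a rule).
residence = (
    counts.sort_values(["patient_id", "n_visits"], ascending=[True, False])
          .drop_duplicates("patient_id")
          .set_index("patient_id")["township"]
)
print(residence)  # patient 1 -> "A", patient 2 -> "D"
```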
The location of each clinic and pharmacy was recognized as either urban or rural according to the definition of urbanization published by Taiwan's National Health Research Institutes. All 365 townships in Taiwan were classified into seven clusters based on the following indicators: population density (people/km²), proportion of people with a college undergraduate degree or above, proportion of people over 65 years of age, proportion of people who were agricultural workers, and the number of physicians per 100,000 people. Residential areas located in clusters 1 to 3 were categorized as urban, and the others as rural. [15]

Income level: Patients' insurance identification records were used to distinguish patients in the low-income group from those who were not. In Taiwan, the National Health Insurance scheme assigns the insured to six classifications according to occupation. Households below the poverty line belong to classification 5; we used this information in the NHIRD as the criterion to identify the low-income group. Furthermore, because the low-income population accounted for 1% of the total population, we selected the top 1% of insured levels (NTD 92,000/USD 3,000) as the definition of high income. Lastly, we divided the remaining insured levels in half: middle-high income (insured level higher than NTD 28,000/USD 900) and middle-low income.

Mediator variable: healthcare provider selection. Patterns of healthcare provider selection were defined as the combination of the provider's performance and the distance to hospital, defined as follows.

Providers' performance: The risk-adjusted 30-day mortality rates, risk-adjusted surgical site infection (SSI) rates, and service volumes of each hospital and each surgeon in the year before each CABG surgery were used to evaluate the quality of CABG care. Data on patient gender, age, Charlson/Romano Comorbidity Index (CCI) and number of obstructed vessels were incorporated for risk adjustment. Nevertheless, too many indicators can make interpretation difficult, so a transformation algorithm was required to summarize the quality indicators in a simple manner. In this study, we applied the k-means clustering algorithm to classify the quality of hospitals and surgeons. K-means clustering is a data-mining approach based on cluster analysis and is one of the most widely used methods for partitioning clusters; [16] we applied this approach in our previous work. [6,7] Surgeons and hospitals were assigned to "good performance" and "not-good performance" groups according to their distance to the cluster centers. Patients who went to a good-performance hospital and received healthcare from a good-performance surgeon were included in the "excellent care" group. Patients who received care from a not-good-performance surgeon at a not-good-performance hospital were included in the "not excellent care" group. The remainder (a good-performance surgeon at a not-good-performance hospital, or a not-good-performance surgeon at a good-performance hospital) were included in the "good care" group.

Distance to hospital: This study obtained the coordinates of the center of the town where each hospital was located or each patient resided through a Geographic Information System (ArcGIS for Desktop, version 10.3), and calculated the Euclidean distance between these two points.
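To make these two building blocks concrete, here is a minimal sketch using scikit-learn. The indicator values, the use of two clusters, and the rule of labelling the lower-mortality cluster as "good performance" are illustrative assumptions rather than the study's exact procedure, and coordinates are assumed to lie in a projected, metre-based system so that a planar Euclidean distance is meaningful. The 15/30 km distance cutoffs used below are defined in the next paragraph.

```python
# Sketch of the k-means performance grouping and the Euclidean distance
# computation described above. All numbers and the cluster-labelling rule
# are hypothetical; the study used its own risk-adjusted indicators.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical hospital-level indicators from the previous year:
# [service volume, risk-adjusted SSI rate (%), risk-adjusted 30-day mortality (%)]
hospitals = np.array([
    [210, 0.95, 3.9],
    [160, 1.10, 4.2],
    [ 45, 1.60, 6.8],
    [ 30, 1.80, 7.5],
])

X = StandardScaler().fit_transform(hospitals)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Assumption for illustration: call the cluster with lower mean mortality "good".
good = min(set(labels), key=lambda c: hospitals[labels == c, 2].mean())
performance = np.where(labels == good, "good", "not good")

# Planar Euclidean distance (km) between projected town-center coordinates.
def distance_km(patient_xy, hospital_xy):
    return float(np.linalg.norm(np.subtract(patient_xy, hospital_xy))) / 1000.0

# Distance bands using the study's 15 km / 30 km cutoffs.
def distance_group(km):
    return "near" if km < 15 else ("middle" if km <= 30 else "far")

# Combining performance and distance yields selection patterns such as
# "good performance-near distance".
def selection_pattern(perf, km):
    return f"{perf} performance-{distance_group(km)} distance"
```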
The distances to hospitals were classified into near-, middle-, and far-distance groups, with 15 and 30 km used as cutoff points according to travelling time (<30 minutes and >1 hour).

Healthcare provider selection: After retrieving the information on providers' performance and distances to hospital, this study used their combinations to produce nine patterns of healthcare provider selection: excellent performance-near distance, excellent performance-middle distance, excellent performance-far distance, good performance-near distance, good performance-middle distance, good performance-far distance, not-good performance-near distance, not-good performance-middle distance, and not-good performance-far distance.

Covariates. In addition to the three important patient-level variables mentioned above, this study also collected other patient-, surgeon-, and hospital-level data. First, patient-level variables included age, gender, Charlson/Romano Comorbidity Index, and number of obstructed vessels (as a proxy indicator for the duration of the operation [17]). Second, surgeon-level variables included age. Third, hospital-level variables included hospital ownership and accreditation status.

Statistical analysis. All statistical analyses were performed using SAS (version 9.4, SAS Institute Inc., Cary, NC, USA). A two-sided p value < 0.05 was considered statistically significant. The distributional properties of continuous variables were expressed as mean ± standard deviation (SD), and categorical variables were presented as frequency and percentage. In bivariate analysis, potential predictors of 30-day mortality were examined using the chi-square test and the two-sample t-test as appropriate. To account for the correlation of information within healthcare providers, multivariable analysis was conducted by fitting multilevel (mixed-effects) logistic or multinomial logistic regression models (the latter for the three-level outcome variable) to each patient's data and then estimating the effects of hospital- and surgeon-level predictors on the probability of 30-day mortality. In addition, we combined Baron and Kenny's mediation-effect testing procedure [18] with the recommendations given by Mathieu et al [19] to examine the mediation effect among residential area combined with income level, healthcare provider selection, and 30-day mortality. Finally, Sobel's test was used to verify the significance of the mediation test. [20]

Sensitivity analysis. Because the cutoff values for travelling distance were determined in a subjective manner, this study also re-categorized distance into three groups using the k-means clustering algorithm as a sensitivity analysis.
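As a rough illustration of the Baron and Kenny steps and Sobel's test, the sketch below walks through them on simulated data using ordinary single-level models from statsmodels; the actual analysis fitted multilevel mixed-effects models in SAS, and the variables here (a binary rural indicator, a continuous provider-selection score, a binary death outcome) are deliberate simplifications of the study's categorical measures.

```python
# Simplified single-level illustration of Baron & Kenny's mediation steps
# and Sobel's test on simulated data; the study itself fitted multilevel
# mixed-effects models, which this sketch does not reproduce.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
rural = rng.integers(0, 2, n)                          # X: residence (1 = rural)
selection = -0.6 * rural + rng.normal(size=n)          # M: provider-selection score
p_death = 1 / (1 + np.exp(-(-3.0 + 0.4 * rural - 0.5 * selection)))
death = rng.binomial(1, p_death)                       # Y: 30-day mortality

# Step 1: X -> Y (total effect must be significant).
m1 = sm.Logit(death, sm.add_constant(rural)).fit(disp=0)
# Step 2: X -> M (a-path).
m2 = sm.OLS(selection, sm.add_constant(rural)).fit()
# Step 3: X + M -> Y (b-path; partial mediation if X shrinks but stays significant).
m3 = sm.Logit(death, sm.add_constant(np.column_stack([rural, selection]))).fit(disp=0)

a, se_a = m2.params[1], m2.bse[1]
b, se_b = m3.params[2], m3.bse[2]

# Sobel's z for the indirect effect a*b.
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"total={m1.params[1]:.3f}, direct={m3.params[1]:.3f}, Sobel z={sobel_z:.2f}")
```

In this scheme, partial mediation corresponds to the coefficient on X shrinking but remaining significant in step 3, together with a significant Sobel z.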
Results

There were 10,108 CABG operations performed by 317 surgeons in 60 hospitals from January 1, 2008 to September 30, 2011 included in this study. Table 1 presents the results of the descriptive analysis. Among these cases, 4,778 (47.27%) patients lived in urban areas and the rest lived in rural areas. Around 70% of the studied patients received their surgeries in medical centers, and 36% went to a public hospital; the average hospital service volume, risk-adjusted SSI rate and risk-adjusted 30-day mortality rate were 147, 1.29% and 5.42%, respectively. With respect to surgeon characteristics, the mean age of the surgeons was 44 years, and the average surgeon service volume, risk-adjusted SSI rate and risk-adjusted 30-day mortality rate were 50.30, 1.27% and 4.92%, respectively. Around one-fourth of the patients were female, and the mean patient age was 65 years. Around 60% of patients had more than two obstructed vessels. One-fourth of the patients were classified as high or middle-high income, and half of them received excellent or good care. Most patients selected a hospital near their residence; the average travelling distance to hospital was 36 km, and 584 patients (5.78%) died within 30 days of hospitalization.

The data also showed that rural-dwelling patients were poorer, older, and more likely to have comorbidities. The percentage of rural-dwelling patients who received care of excellent quality was lower than that of urban-dwelling patients (21.63% vs. 28.84%), but the percentage of rural-dwelling patients who received care from not-good-performance providers was higher (41.29% vs. 50.28%). The travelling distance to hospital of urban-dwelling patients was shorter than that of rural-dwelling patients, and 30-day mortality was higher in rural-dwelling patients (6.96% vs. 4.46%). In addition, urban-dwelling patients were more likely than rural-dwelling patients to receive their surgeries in medical centers and public hospitals. Regarding hospital performance, hospitals visited by patients from rural areas had lower service volumes (136 vs. 160) and similar risk-adjusted 30-day mortality rates (5.52% vs. 5.31%), but better risk-adjusted SSI rates than those visited by urban-dwelling patients (1.23% vs. 1.35%). Regarding surgeon performance, surgeons who served rural-dwelling patients had lower service volumes (48 vs. 53) but better risk-adjusted SSI rates (1.19% vs. 1.35%); surgeon-level risk-adjusted 30-day mortality rates were similar between surgeons serving the two patient groups.

Table 2 shows the distribution of patterns of healthcare provider selection. More than 50% of urban-dwelling patients went to hospitals near their residence, regardless of income level, whereas only around 30% of rural-dwelling patients did so. Moreover, urban-dwelling patients who went to a nearby hospital were more likely than rural-dwelling patients to receive surgery from a higher-performance provider, at every income level. The differences in patterns of healthcare provider selection across income levels were smaller among urban-dwelling patients than among rural-dwelling patients; rural-dwelling patients with lower income tended to select worse patterns of healthcare provider selection (i.e., longer travelling distance and poorer provider performance).

Table 3 presents the travelling distance to providers of different performance levels among rural- and urban-dwelling patients with different income levels. In general, the travelling distances of urban-dwelling patients were shorter than those of rural-dwelling patients. The results also revealed that rural-dwelling patients needed to travel farther to receive care from excellent-performance providers, especially patients with low and middle-low income levels.
Table 4 presents the mortality rates among rural- and urban-dwelling patients with different income levels across the patterns of healthcare provider selection. Differences in mortality rates existed between rural- and urban-dwelling patients with different income levels. The results also suggested that the patterns of healthcare provider selection might underlie the mortality difference between urban- and rural-dwelling patients with lower income, especially among low- and middle-low-income patients who selected nearby hospitals.

Table 5 shows patients' selection preferences after adjusting for covariates. In terms of the provider's performance level, rural-dwelling patients with low and middle-low income were more likely to receive care from poorer-performance providers than were high- and middle-high-income patients in urban areas. In terms of travelling distance, rural-dwelling patients travelled farther to receive care than did high- and middle-high-income patients in urban areas, whereas urban-dwelling patients with low and middle-low income were more likely than high- and middle-high-income urban patients to stay in their local area to receive care.

Table 6 presents the results of the mediation-effect examination using multilevel models. Model 1 aimed to verify the linkage between patients' residential area combined with income level and 30-day mortality. The results suggested that rural-dwelling patients had a higher 30-day mortality risk (aOR = 1.512, 95% CI = 1.104-2.072; aOR = 1.826, 95% CI = 1.221-2.730) than urban-dwelling patients with higher income. Model 2 examined the relationship between patients' residential area combined with income level and the patterns of healthcare provider selection; rural-dwelling patients were less likely to select better patterns of healthcare provider selection (aOR = 0.562, 95% CI = 0.504-0.626/β = -0.577, standard error = 0.055; aOR = 0.715, 95% CI = 0.623-0.819/β = -0.336, standard error = 0.070). Model 3 tested whether a mediation effect of the patterns of healthcare provider selection existed within the relationship between patients' residential area combined with income level and postoperative 30-day mortality. When both were placed in the model, rural-dwelling patients who selected worse patterns of healthcare provider selection had a higher mortality risk (aOR = 1.380, 95% CI = 1.007-1.894/β = 0.322). The result of Sobel's test indicated a significant mediation effect, meaning that the relationship between patients' residential area combined with income level and 30-day mortality was partially mediated by the patterns of healthcare provider selection.

The sensitivity analysis likewise showed that rural-dwelling patients were less likely to select better patterns of healthcare provider selection (aOR = 0.635, 95% CI = 0.563-0.716/β = -0.455, standard error = 0.061; aOR = 0.820, 95% CI = 0.704-0.955/β = -0.199, standard error = 0.078), and that rural-dwelling patients who selected worse patterns had a higher mortality risk (aOR = 1.403, 95% CI = 1.028-1.915/β = 0.339, standard error = 0.158; aOR = 1.767, 95% CI = 1.188-2.627/β = 0.569, standard error = 0.202).
The results of Sobel's test also confirmed that the mediation effects still existed (Table 7).

Discussion

Health is a natural right, and every government should provide sufficient, high-quality healthcare services for its people and eliminate health disparities as much as possible. The issue of health inequity has long been studied; traditionally, researchers focused on accessibility for minority/disadvantaged groups. In recent years, some discussions about eliminating health disparities have begun to advocate not only enhancing accessibility but also improving the quality of healthcare for minority groups to achieve health equality. [21][22][23] Recent studies have shifted their focus to explore whether inequity in quality of care exists across different patient characteristics. [24][25][26] It is therefore necessary to combine these two components when discussing health disparities, and it is also important to understand the patterns of healthcare provider selection in different settings. Accordingly, the current study considered not only accessibility but also the level of provider performance, and took travelling distance to hospital into account. This is a novel perspective for health inequity studies, providing a multidimensional point of view.

The results showed that rural patients with lower income were more likely to receive care from providers with poorer performance than were higher-income patients in urban areas. Travelling distance varied among urban- and rural-dwelling patients with different income levels: compared with higher-income patients in urban areas, rural-dwelling patients travelled farther, and urban-dwelling patients with lower income tended to stay in local areas to receive care. The results also revealed that the relationships between residential area combined with income level and mortality were partially mediated by healthcare provider selection.

The island of Taiwan is shaped like a leaf that is narrow at both ends, and it is mountainous. The terrain is divided into two parts: flat to gently rolling plains in the west, and mostly rugged, forest-covered mountains in the east. Ninety percent of the population lives in the west coastal plain, and most hospitals are also located in this area. Because the island is not large, travelling distance ought not to be a problem. However, an important surgery such as CABG must be undertaken in hospitals rather than clinics, and healthcare resources are not distributed equally: most hospitals in Taiwan, especially medical centers, are located in or near cities. This might explain why rural-dwelling patients travel farther than urban-dwelling patients.

Nevertheless, why are urban patients with lower income more likely to receive their care in local hospitals, while urban patients with higher income select hospitals farther away? This phenomenon might be explained from two perspectives. Firstly, although the NHI has reduced the economic barriers to healthcare, barriers related to family caregiving still exist. Poorer households face more financial constraints than richer households; therefore, if a family member needs surgery, he or she would be cared for by other family members rather than by a hired nurse aide.
However, given their economic status, it is also not easy for family members to take leave from work to provide care; it may therefore be more convenient for the family if a local or nearby hospital is selected for the surgery. Furthermore, the findings of this study support the notion that patterns of healthcare provider selection might drive the mortality difference between urban- and rural-dwelling patients with different income levels, which means that quality of care and travelling distance both play a part in CABG surgery. This also implies that the regionalization or decentralization of healthcare resources should be rethought. The debate between regionalized and decentralized resources has a long history, [27] with pros and cons for each model along different dimensions. [28][29][30] Since travelling distance is still a concern for some patients, health authorities could consider the feasibility of regionalization and allow a certain number of surgeons and hospitals in each healthcare service area the privilege of providing such surgeries.

The second perspective concerns the ability to select a better provider, which we term health literacy in quality of care. Income level is commonly used as an indicator of socioeconomic status. Patients with higher socioeconomic status may be better able to select a good provider; [31] for example, they may have better knowledge or be referred by friends or colleagues. How to decrease this kind of information asymmetry is an issue worth discussing. Prior literature provides evidence that information asymmetry may also result in differences in choosing medical services, especially between rural- and urban-dwelling patients. [32] Taiwan is a highly information-based society, where all kinds of information can spread rapidly via e-mail, web communities, and the internet. However, the gap between urban and rural areas still exists: the degree of information spread in rural areas is lower than in urban areas. Apart from the lack of infrastructure, characteristics of rural dwellers themselves also form obstacles, and insufficient information might cause patients to select poorly performing surgeons or hospitals. To decrease this gap, health authorities should provide guidance (e.g., report cards) to help patients select the optimal provider for their surgeries.

Some limitations should be addressed. 1. The cutoff values of income level: The premium of Taiwan's NHI is based on the insured monthly salary. [8] Existing studies that have used the NHIRD to examine income-related health inequality in Taiwan have usually employed the monthly insured level as the basis for classification. However, some employers do not insure their workers at their real salary levels, so some misclassification of income is possible, even though our categorization of income level is more elaborate and its cutoff points more reasonable than in previous studies. 2. Calculation of travelling distance: Although understanding how far a patient travels to receive care is an interesting issue, the actual travelling distance could not be obtained from the existing literature or other sources. Using GIS software to calculate the distance between patient and hospital is one of the significant contributions of this study. However, patients' and hospitals' IDs were de-identified in the Taiwan NHIRD, so their addresses could not be retrieved.
Using the coordinates of the center of the town where the hospital was located or the patient resided was therefore the best available approach in this study; however, some bias was still unavoidable.

Conclusions

Health disparity issues have long been recognized throughout the world. However, studies examining how patterns of healthcare provider selection affect disparities in healthcare outcomes between rural and urban patients with different income levels are lacking. The findings of this study showed that the patterns of healthcare provider selection varied among urban- and rural-dwelling patients with different income levels and partially mediated the relationship between residential area combined with income level and mortality. These findings could serve as a valuable reference for health policymaking to improve the public's health.
Experience of implementing new mental health indicators within information systems in six low- and middle-income countries

Background: Successful scale-up of integrated primary mental healthcare requires routine monitoring of key programme performance indicators. A consensus set of mental health indicators has been proposed, but evidence on their use in routine settings is lacking.

Aims: To assess the acceptability, feasibility, perceived costs and sustainability of implementing indicators relating to integrated mental health service coverage in six South Asian (India, Nepal) and sub-Saharan African countries (Ethiopia, Nigeria, South Africa, Uganda).

Method: A qualitative study using semi-structured key informant interviews (n = 128) was conducted. The 'Performance of Routine Information Systems' framework served as the basis for a coding framework covering three main categories related to the performance of new tools introduced to collect data on mental health indicators: (1) technical; (2) organisational; and (3) behavioural determinants.

Results: Most mental health indicators were deemed relevant and potentially useful for improving care, and therefore acceptable to end users. Exceptions were indicators on functionality, cost and severity. The simplicity of the data-capturing formats contributed to the feasibility of using the forms to generate data on mental health indicators. Health workers reported increasing confidence in their capacity to record the mental health data and minimal additional cost to initiate mental health reporting. However, overstretched primary care staff and the time-consuming reporting process affected perceived sustainability.

Conclusions: Use of the newly developed, contextually appropriate mental health indicators in health facilities providing primary care services was seen largely to be feasible in the six Emerald countries, mainly because of the simplicity of the forms and continued support in the design and implementation stage. However, approaches to implementation of new forms generating data on mental health indicators need to be customised to the specific health system context of different countries. Further work is needed to identify ways to utilise mental health data to monitor and improve the quality of mental health services.

Declaration of interest: None.

Within the area of mental health, there is a worldwide initiative to expand access to care by integrating mental health into primary healthcare. 1 Scale-up of any global health programme requires routine monitoring of key indicators. 2 Member states of the World Health Organization (WHO) have committed to reporting and monitoring national-level indicators for implementation of the global Mental Health Action Plan, 2013-2020. 3 However, most low- and middle-income countries (LMICs) do not yet have adequate mental health indicators to monitor their in-country programmes. 4,5 There is a pressing need to develop evidence-based mental health indicators for local programme monitoring and to understand 'how' data on these indicators can be collected in routine LMIC settings. 6 The 'how' question can be addressed through assessment of the implementation of procedures to collect data on key mental health indicators, with particular consideration of acceptability to patients and contextual feasibility. 7 Attending to the 'how' of implementation can tangibly improve mental health service monitoring and is crucial for the viability of ongoing efforts to scale up mental health services in LMICs. 8
Development of mental health indicators in the Emerald programme

As part of the Emerald programme (Emerging Mental Health Systems in LMICs), 9 we established a set of key indicators for mental health programme monitoring, through a Delphi process and through building consensus among a broad range of stakeholders across six LMICs: Ethiopia, India, Nepal, Nigeria, South Africa and Uganda. 10 The final set of indicators covered mental health service utilisation for priority disorders, unmet needs of people with mental health problems, the quality of services provided and the associated financial risk to the person and their family. The selected indicators allowed measurement of key dimensions of universal health coverage, including the proportion of the target population receiving appropriate mental healthcare at district level in the six Emerald countries. Implementation of the mental health data collection forms at primary care level was evaluated quantitatively to assess their utility and validity. 11 In this study, we present findings from a qualitative study aiming to explore the acceptability, sustainability, feasibility and perceived costs of implementing the new mental health data collection forms in the context of integrated primary mental healthcare services in the six Emerald countries. A pre-existing conceptual framework, the Performance of Routine Information System Management (PRISM) framework, was used to assess the performance of these indicators. The PRISM framework describes the inputs of health information systems as determinants affecting the process leading to better-quality health management information systems (HMISs). 12

Method

Study design

A cross-country qualitative study was conducted with a framework approach. Semi-structured interviews were conducted with 128 key informants across the sites. A qualitative approach was used to achieve a rich and detailed understanding of interviewees' points of view. 13

Settings

The study was carried out in each of the six Emerald LMICs, where a district-level mental healthcare plan was being scaled up to integrate mental health into primary care and reduce the treatment gap for priority disorders. Integration of mental health within primary care in Ethiopia, India, Nepal, Uganda and South Africa was led by the Programme for Improving Mental Health Care (PRIME), 14 and by the EuropeAid programme in Nigeria. The district mental healthcare plans have been described previously; 15 in brief, they included training of primary healthcare workers in the WHO's Mental Health Gap Action Programme 16 or PC101 (in South Africa), 17 combined with community and health system interventions to support this task-sharing model of care. Once the district mental healthcare plans had been implemented and running for about 12 months, the new mental health indicators and forms (health facility pro forma available upon request) were introduced.

For this study, the term HMIS refers to a system of collecting, processing and analysing routine health data that already exists in the country's setting. At the primary care level in the six Emerald countries, the initial data collection component of the mental health information system is paper-based and managed by health workers (mostly nurses); the subsequent data compilation becomes electronic. At the district level and above, mental health data in India, Nepal, Nigeria and South Africa are compiled electronically.
Ethiopia largely relies on paper forms; however, there are some instances where electronic HMISs have been piloted. Data collection in health facilities in all six countries is managed by health workers, most often nurses. The final list of indicators, the type of forms or registers used for data collection, and the focal person responsible for implementing the new forms in each of the six countries are described in Table 1. Before introducing the new procedures for collecting the indicators, strategies such as 2-day training courses for health workers/managers, demonstration sessions and monthly supervision visits were used. The new mental health indicators had already been implemented for 6-8 months before this qualitative study was conducted.

Sampling

Participants for interviews were identified and recruited based on their roles and responsibilities within primary healthcare facilities. Interviews were conducted with key informants, including health facility staff responsible for collecting mental health data (nurses, HMIS officers, record officers), clinicians, programme managers, facility heads/managers, supervisors and case managers in the study districts (Table 2). Health managers and medical officers/clinicians from the PRIME scale-up facilities were approached separately; the health managers did not have any role in choosing the clinicians, or vice versa. Those who consented were included in the interviews, which were kept confidential and anonymised.

Procedures and instruments

Data were collected in each of the six countries between February and August 2017. A semi-structured topic guide was developed, based on a subgroup of the key implementation outcomes identified by Proctor et al, 7 namely acceptability, sustainability, feasibility and cost. Definitions for each of these implementation outcomes are depicted in Table 3:

Acceptability: the perception among implementation stakeholders that a given treatment, service, practice or innovation is agreeable, palatable or satisfactory.
Sustainability: the extent to which a newly implemented treatment is maintained or institutionalised within a service setting's ongoing and stable operation.
Feasibility/utility: the extent to which a new treatment or an innovation can be successfully used or carried out within a given agency or setting.
Cost: the cost impact of an implementation effort.

Previously developed monitoring and evaluation topic guides from the MIND ME project (https://www.mhinnovation.net/innovations/mind-me-africa) were also consulted in the development of the topic guides. 2

Ethical considerations

Organisational and ethical permissions from the appropriate in-country institutions, as well as cross-country approval from King's College London and the WHO Institutional Review Boards, were obtained before approaching participants in each country. All participants provided informed consent.

Data analysis

Individual semi-structured interviews were transcribed verbatim for the analysis. Translations to English were carried out for interviews conducted in local languages. The data analysis was underpinned by thematic analysis principles. 18 The process started with open coding, where initial descriptive codes were applied to the data. These initial codes were subsequently grouped into broader categories, reflecting emerging common themes and underpinning latent constructs (parent themes). At this stage of the analysis process, it was noted that these parent themes corresponded with the input domains outlined in the PRISM conceptual framework. 12 At this point, a decision was made to use a framework approach to proceed with data analysis, 19 with the PRISM framework inputs guiding subsequent analysis. These inputs, summarised as parent themes for this study, were categorised by the PRISM framework into technical, organisational/environmental and behavioural determinants.
The PRISM framework also details elements within each of these inputs; for this study, these were considered as subthemes within the three parent themes (see Table 4 for an overview of the integrated framework). An analysis framework reflecting these parent themes and subthemes was circulated to the country researchers (D.G., J.A., J.M., N.M., C.H., S.A.) as a simple spreadsheet. This spreadsheet was subsequently populated with data (author summaries, participant summaries and quotes) by the country researchers. Finally, these data were synthesised by the lead researcher (S.A.).

Results

We first report findings on the technical factors influencing implementation of the new mental health indicators. We then discuss the role of organisational/environmental factors, presenting similarities and differences between the processes in each country. Finally, we elaborate on the behavioural components that emerged as enabling or hindering the integration of mental health data collection into primary care in the six countries. The analyses were conducted at country level; analysed data were collated at cross-country level and are described here to compare the similarities and differences across countries. However, wherever necessary, cadre-specific responses are also highlighted.

Technical influences

Interviewees in all countries perceived that the new mental health forms led to the generation of mental health data by making it easier to document patients' records. Across countries, for many of the interviewees, this was the most significant achievement of the programme. One of the programme coordinators in India reported:

'For the first time in 15 years we are getting some sort of monthly reports from districts and even from CHCs [community health centres]. The DMHP [District Mental Health Programme] is quite old in Sehore district and we have for the first time been able to build such data system.' (ID-05, Madhya Pradesh, India)

Similarly, in Ethiopia, a mental health focal person described the importance of mental health indicators in his health centre:

'We record on the register and follow up cases. For example, the guidelines state that the patients with epileptic seizures who take medications for 2 years should stop taking the medications if they do not show signs and symptoms of seizure and epilepsy anymore. So, to follow this up, it is necessary to record this on the register. In my opinion, in this regard the register is very good.' (ID-01, Ethiopia)

Most interviewees in all six countries agreed that the new indicators were clear and easy to understand, and they experienced improved accuracy of their reporting over time, which was partly because of familiarity with using the form as an integral part of their work. As per a respondent in South Africa:

'The mental health referral form used in South Africa refers to a one-page form where nurses are expected to tick impression, diagnosis etc.
Initially when the nurses first made use of the referral form, there were minor issues with completeness and accuracy of the form, e.g. nurses would tick "other" but would not provide a narrative. It has improved now.' (ID-02, South Africa)

However, despite the simplicity of and familiarity with the new mental health forms, some respondents in India, Uganda, Nepal and South Africa expressed concerns about the additional time spent on filling out the forms. In Ethiopia, health workers highlighted that the low level of literacy in the rural population lengthened the data-recording time. In Nigeria, health workers suggested that the recording time varied and extended up to 20 min, again highlighting that this was often when the patient was illiterate. One respondent at a health post in Nepal elaborated how the additional time for reporting mental health indicators was a major concern:

'Mental health reporting takes time but we do not have proper time, we cannot manage time according to the situation because so many patients are coming to the health post with so many types of disease, and for different types of service so that we have difficulty to manage proper time to record the information in this register. That is our problem.' (ID-11, Nepal)

Respondents' views on the time burden varied with the kind of information the health workers collected. Financial indicators on the cost of medicine and out-of-pocket expenditure were said to be particularly difficult to collect by most respondents across countries; some respondents referred to the sensitivity of asking people to divulge information on financial indicators. In Ethiopia, infrequently used indicators such as alcohol use disorder were found to be less important, mainly because health centres are not a preferred point of contact for the management of such disorders. In Nepal and India, indicators on severity of illness and functional assessment were difficult to collect, as these indicators were perceived to be more time-consuming than others.

Respondents reflected on the iterations of the forms that occurred during the initial phase of implementation. On the one hand, some mental health system indicators were dropped; on the other hand, certain additions were made to the existing list of indicators. For example, indicators on comorbidities were added in Uganda, Nigeria and Ethiopia, and an indicator measuring 'where patients are referred from' was added in Nepal based on the requirements of their health facilities. An indicator relating to the rural/urban divide was added in Ethiopia because it was considered a key equity indicator by the Federal Ministry of Health. Inclusion of a 'history taking' indicator in the new mental health forms was recommended in South Africa because of its importance in diagnosing patients with mental disorders.
In some countries, health supervisors and managers indicated that using the new mental health forms had improved their monitoring competencies. For example, health managers in South Africa were able to disseminate the findings from the new mental health forms through internal meetings. Similarly, in Uganda, a clinical officer reported plans to compile mental health data at the end of each month and reflect upon it in health facility staff meetings. In three countries (Ethiopia, India and Nepal), there was no reported evidence to support the use of data in improving services. However, in Nigeria, respondents were optimistic about the usefulness of the mental health data collected by these new forms. One respondent in Nigeria mentioned:

'After collating it per facility, you know that we can collate it monthly, we can collate it every three months, we can use it every 6 months, we need to know where the problem is, what the problem and where the problem is, so and we know how to address it, how we can fix it, then we know, ah! Then who are our main targets.' (ID-02, Nigeria)

Correspondingly, in Uganda, a senior medical officer pointed out the importance of routine mental health data for organisational planning:

'This information [from the Mental Health HMIS] will help us to plan well for patients with mental health problems in our hospital. Now we have a shortage of drugs and it is because the government is not really aware that these are conditions that are affecting its people.' (ID-05, Uganda)

Overall, interviewees conveyed that an improvement in mental health reporting at the facility level would enable better programme monitoring. This was a motivation to continue using the indicators.

Organisational influences

Coordinating mechanisms within/across departments

A need to understand and account for coordination issues within/across departments was an active issue in the implementation of the new mental health forms, and was emphasised explicitly in four of the six Emerald countries (Nepal, India, Ethiopia, South Africa). In Nepal, the non-involvement of district officials delayed implementation. As a health worker in Nepal pointed out:

'[The] HMIS section focal person of the DPHO [district programme health officer] was not involved in our [implementation of Emerald forms] process, so it created difficulties in coordination. The DPHO are aware that they need to keep the record but no concrete mechanism/plan is in place to collect and store the record.' (ID-07, Nepal)

Similarly, in India, unclear directives from the state health directorate delayed the allocation of mental health tasks, such as recording and counselling for mental health patients, to the existing nurses/health workers, and created confusion. In South Africa, a lack of coordination between prescribers and non-prescribers made access to out-patient department registers difficult, leading to infrequent and incomplete reporting. Issues also arose from parallel reporting systems in countries such as Ethiopia and India: nurses at the district-hospital level in India used the new forms for reporting for the National Health Mission but also continued reporting in parallel for the district mental health programme.

Resource demands in introducing mental health forms

Despite a strong sense of the importance of the new forms, respondents in India, South Africa, Nepal and Uganda noted the additional time taken by overstretched health workers to incorporate this change within routine practice.
Health workers collecting data mentioned that delayed reporting was linked to the type of illness, as people affected by certain mental disorders require longer consultation and reporting time. As described by a nurse in Uganda:

'The biggest challenges I face to finish my records is, now that it is after a long explanation that some people may realize that they have a condition.' (ID-01, Uganda)

Often, concerns about the availability of space, 20 counsellors (Uganda) and specialists, 20 and the timely supply of essential psychotropic drugs (Ethiopia, India, Nepal, South Africa, Uganda) had an indirect effect on reporting. Correspondingly, procurement of forms, registers and other basic administrative issues delayed reporting in two (South Africa, India) of the six Emerald countries. To strengthen the information systems for mental health, all countries except South Africa utilised additional in-service training of health workers; further training on mental health indicators for staff at higher organisational levels, such as within the Department of Health, was suggested in Uganda and Ethiopia.

In all six countries, the primary care facilities were run by the government, and minimal or no additional cost was anticipated in the initiation of mental health reporting. Health workers in Uganda, Nepal, Nigeria, South Africa and India, however, anticipated additional printing costs. In Nepal, the human-resource costs of the additional staff required for data reporting were mentioned. In Ethiopia, respondents did not consider the minimal additional cost of introducing mental health indicators to be prohibitive, but rather highlighted the importance of committing to sustain the scale-up initiative. To create a more sustainable environment for mental health reporting, respondents in all countries suggested the need for supervision for quality assessment and for motivating non-specialist workers to collect mental health data at primary care facilities. Success of the implementation of the new data system was attributed to the supervision of health workers through Emerald review meetings in Uganda, case manager visits in India and regular review visits to complete out-patient department registers in Ethiopia.

Integration of mental health indicators within routine information systems

In relation to the adoption of mental health indicators within the pre-existing health information systems, respondents in all countries reported that integration was possible. The following enabling factors for integration were described: (a) the need to report on mental health data (all countries); (b) the simplicity of the forms (Nigeria, Uganda); (c) reducing duplication by embedding into previous reporting systems (India 20); and (d) the perception that integration would increase demand for mental health services (Nigeria). At the time of data collection in Ethiopia, some mental health indicators (measuring prevalence and treatment rates for behavioural disorders, epilepsy and other mental disorders) were already included in the HMIS; however, more comprehensive inclusion of mental disorders (e.g. separating psychosis and depression) was considered important by respondents in Ethiopia. Three countries either did not report on the process of integration (South Africa) or reported a poor likelihood of complete integration (India, Nepal):

'Yes, it will be hard to integrate everything. We now have a different register and we can know what the case, whom we should call is.
But if all of these go into the compiled register, then we have to distinguish the cases. There is a different register from the Government of Nepal for tuberculosis, leprosy, so if the register of mental health is made that way, then it can happen but compiling it together might be difficult.' (ID-05, Nepal)

Similar to Nepal, some respondents from India perceived partial integration to be feasible, while others anticipated the need for alternative strategies to achieve district-, state- and national-level integration. For example, for the district and lower levels of the health system, training modules for the management of information systems and combined training were reported to be prerequisites for adequate integration. Four of the six countries (India, Nepal, Ethiopia, South Africa) commented positively on the usability of the new forms in the future. In Nepal and Ethiopia, health workers perceived that the new data system would be useful for monitoring individual patient cases; in India, respondents saw the new data system as providing baseline information on the coverage of mental health services in the future.

Behavioural influences

The level of knowledge, competence, confidence and motivation of the health workers implementing the health information systems were all seen to affect the likelihood of implementation. Measures such as on-the-job training of health workers (all countries) and brief pamphlets for health providers to prompt the intervention (India 20) improved knowledge of the mental health indicators and their implementation. In terms of competency, respondents in all countries reported self-sufficiency with the new forms, which over time resulted in forming habits to complete them. Two of the six countries said they had a system of reporting even before actual service delivery was initiated. In South Africa, the confidence of healthcare providers increased with the development and availability of resources such as the PC101 guideline and referral forms. However, in Nepal and Uganda, health workers demanded incentives for the new role. In Nigeria, experience in implementing similar information systems for other programmes assisted in boosting confidence in implementing the new forms:

'We are already used to routinely documenting patient records for other patients. For such [mental health] patients that just came to the hospital for the first time, we record …. [demographic data], their number is on it. So, when they come back, that small card helps us to fish out their main card. So basically, we have been very sure on how to complete the new forms.' (ID-01, Nigeria)

Overall findings

In this cross-country qualitative study conducted in two South Asian and four sub-Saharan African countries, we explored the experiences of front-line health workers in implementing new forms to generate data on mental health indicators for monitoring the scale-up of integrated mental health programmes in primary healthcare. We found that there were a number of barriers and facilitators that affected implementation of the new forms. Some of the facilitators and barriers overlapped across the studied countries, whereas others did not. Overall, the new indicators were found to be feasible in the primary care facilities.
Our results show that barriers to measuring the new mental health indicators related to the time consumed in recording some indicators (particularly severity of illness and functionality), overstretched health workers, poor coordination within and across departments, and poor service delivery (owing to lack of medication, space and counsellors), which indirectly affected data capture. On the other hand, simplicity of the forms, motivation and competence of health workers and, to an extent, the perceived use of mental health indicators for monitoring and programme management were reported as facilitators of better implementation outcomes. Implementation strategies such as training courses to assist initial use of the new forms and supervision (using various methods) to ensure continued use were reported to be essential. Various new indicators developed in the country sites were reported to have contributed to mental health service improvement, such as indicators measuring essential medication stock-out in Ethiopia, India, Uganda and Nigeria; approximate time since the last appointment in Nepal; and the number of trained mental health professionals in Nigeria and India (refer to Table 1).

Advancement from previous studies

The successful implementation of mental health indicators depends not only on the strength of evidence regarding the effectiveness of an indicator, but equally on its acceptability, feasibility and sustainability. 7 Studies such as that by Ndetei and Jenkins 8 have identified the need for unconventional and innovative approaches to collect data on mental health indicators, for example by utilising community health workers and the primary and mid-cadre health workforce. Our study has gone a step further by exploring perspectives on the use of forms generating data on mental health indicators by health workers at primary care level, where mental health services are being integrated. Few studies from high-income country contexts have reported evidence regarding the feasibility of implementing performance indicators for mental healthcare programmes, 21 and fewer still in lower-income country settings. 9 Previous evaluations of routine health information systems also do not provide insights on implementation outcomes 22,23 and do not cover the specific domain of mental health indicators.

Understanding acceptability, feasibility and sustainability of introducing new forms

In our study, across the six countries where the Emerald programme was implemented, the mental health forms to capture the new indicators were accepted because of their simplicity and general satisfaction with their content. Reported confidence and competence in completing the new mental health forms further underlined their acceptability; the perceived acceptability of the new reporting system was therefore high. Contextual considerations are necessary in the implementation and evaluation of information systems. 20,24 Based on context, certain countries in our study tailored approaches by adding some indicators (on sociodemographics in Ethiopia, patient history in South Africa and patient referrals in Nepal) and omitting others (indicators on cost in Ethiopia, Uganda, Nigeria and Nepal, and severity in Nigeria and India). As suggested by other studies and reports, 25,26 every health worker in our study also understood the need for mental health information generated from routine information systems.
However, study participants reported little (Uganda, Nigeria, South Africa) to no (Ethiopia, India, Nepal) evidence on the use of the information generated from the new forms. Despite being a potentially cost-effective source of valuable information, there is little evidence in the literature on the reported use of HMISs. 27 More studies are needed to investigate the use of information to inform local planning. The learning health system approach tries to do this and is being tested in Nepal and Ethiopia as part of the OPAL (Optimizing Provider Attitudes and competence in Learning mental health systems) project, 28 and (in Ethiopia) through the ASSET (health system strengthening in sub-Saharan Africa) project. 29 Repeated measures to understand the acceptability and feasibility of information systems over time can assist in improving their use for patient care and facility management. Jordans et al measured the utility of these mental health indicators by quantitatively analysing health records at two time points during the implementation phase. 11 Nesting different assessment methods over time can redefine barriers and refine the implementation of data systems in mental health programmes.

The increased workload resulting from completing the new mental health forms presents another set of sustainability challenges, particularly when the same non-specialist staff are responsible for both task-shared mental health service delivery and completing patient records. For the system of mental health reporting to function, buy-in from management staff is crucial to ensure sustainability. Similar measures have been suggested for strengthening hospital-based mental health information systems in Ghana and South Africa. 6,30 Our study affirms the need for supervision and active facilitation for the inception and normalisation of the new reporting process, as well as the use of routine data for local planning; these data can be used for measuring utilisation patterns over time. Similarly, the accuracy and overall quality of immunisation records have been shown to be enhanced through auditing and supervision. 31

All participants from the six countries supported the idea of integrating mental health indicators with other routine indicators, with two (India, Nepal) suggesting partial integration. There is extensive evidence on integrating mental health into primary care with the aim of strengthening mental health information systems. 32 In a review by Ndetei and Jenkins, challenges and opportunities were identified in linking mental health data systems to other data systems, and vice versa, for better clinical and overall outcomes. 8 However, there is no clear evidence on integrating mental health indicators within routine information systems. Therefore, further measures are needed to assess the feasibility of integrating all data systems at primary care level on a large scale, to estimate the cost and other system implications, and to evaluate whether integration improves data quality and usage at primary care level.

Study limitations

This study has several limitations. First, as this was a qualitative study, we report on the perceptions of respondents with respect to the implementation of the new mental health forms; nonetheless, the more in-depth understanding that was possible complements the more representative findings obtained from quantitative approaches. 11 Second, there may have been social desirability bias, considering that respondents were usually interviewed at their place of work.
More objective approaches, including participant observation, could have reduced social desirability bias. Third, a cross-country researcher analysed a synthesised spreadsheet developed by country researchers; although quality checks through external review were put in place, some local nuances may not have been captured.

In conclusion, in this qualitative study exploring the use of new mental health indicators in primary care facilities across six LMICs, the views of respondents from the different countries were mixed. Barriers to implementation across settings were related to the time taken to complete indicators measuring the functionality and symptom severity of people diagnosed with mental disorders. However, the simplicity of the new data collection method, the competence and motivation of health workers in completing the new forms, and the appreciation that the new system held value and utility were factors supporting implementation of the new system. There is a pressing need to integrate mental health indicators into routine health information systems. Even so, further research is needed to examine the sustainability of this integration and to find ways to support the use of mental health service data to improve the reach and quality of care.
2019-07-21T08:06:43.301Z
2019-08-06T00:00:00.000
{ "year": 2019, "sha1": "c2db4a2a0c74b0a9f5f33ff2fce13e32d6f31f3f", "oa_license": "CCBYNCND", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9FA1236D7AD873B41E310F3B6E17C2AA/S2056472419000292a.pdf/div-class-title-experience-of-implementing-new-mental-health-indicators-within-information-systems-in-six-low-and-middle-income-countries-div.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4c33382cf35ff1365f5964a83d6eced50df2bbcd", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine", "Psychology", "Business" ] }
57761373
pes2o/s2orc
v3-fos-license
Postoperative Late-Onset Endophthalmitis Caused by Leishmania donovani: A Case Report

Leishmania donovani is a human blood parasite that belongs to the genus Leishmania. We present a case of late Leishmania donovani endophthalmitis in one eye of a patient who underwent simultaneous bilateral grade 3 cataract surgery. A 77-year-old female patient was referred to our clinic for routine, same-day, grade 3 cataract surgery on both eyes. Preoperatively and intraoperatively, the patient underwent the standard preparation and surgical procedure. No visible endophthalmitis risk factors (blepharitis, conjunctivitis, keratitis) were seen on the preoperative exam. There were no intraoperative or early postoperative complications. The follow-up ended 1 month after both surgeries without any evident signs of inflammation. At 3 months postoperatively, the patient was referred to us again, complaining of a rapid decrease in vision in her right eye (20/200), with redness and mild pain, while the left eye had no symptoms and retained excellent postoperative vision (20/20). On examination, endothelial precipitates were noted, followed by a 2-mm hypopyon in the anterior chamber. The pupil was constricted and covered with a thick fibrotic membrane, which was also seen in the vitreous cavity on the B-scan ultrasound exam. An urgent pars plana vitrectomy was performed. A vitreous tap was taken intraoperatively and sent for cytological and microbiological assessment. Cytology findings showed a large number of microorganisms in the cytoplasm of the phagocytes, which appeared to be Leishmania donovani parasites. The sample was taken according to microbiology standards, under sterile conditions, and sealed in the specimen container; therefore, external contamination of the sample was ruled out. The patient was put on oral antiparasitic therapy (amphotericin B), as well as local topical therapy (Tobradex®) 5 times per day for 2 weeks. One month after pars plana vitrectomy, visual acuity improved to 20/60, but the patient did not want to undergo a thorough examination at the clinic for infectious diseases.

Discussion

Endophthalmitis still represents a postoperative complication that can cause severe damage to the patient's visual function. Leishmania is an obligate intracellular protozoan parasite that causes various forms of leishmaniasis. Its primary hosts are hyraxes, canids, rodents, and humans. It currently affects 6 million people in 98 countries. Even though many preoperative, intraoperative and postoperative measures are taken every day to avoid it, endophthalmitis still occurs, mostly in an acute form [1]. Late-onset endophthalmitis can be seen after gastrointestinal [2] and pacemaker surgeries [3]. Some authors even report orbital involvement of post-cataract-glaucoma surgery endophthalmitis [4]. A variety of microorganisms can cause endophthalmitis, and their diversity is best described by Pluquet et al. [5], who report a novel species of microorganism isolated in a case of acute endophthalmitis. To our knowledge, this is the first case of late-onset endophthalmitis caused by Leishmania donovani. Currently, only Pradhan et al. [6] have reported two cases of keratitis after post-kala-azar dermal leishmaniasis. Since the microorganism is transferred by animals to humans, we cannot know for certain how our patient got infected.
Our theory is that the infestation might be systemic and that the eye was infected hematogenously. Even so, the patient refused a thorough exam at the clinic for infectious diseases; therefore, we cannot be certain of this. Treatment options are the same in all endophthalmitis cases: an urgent pars plana vitrectomy should be performed as soon as possible. Postoperative visual acuity depends mostly on the timing of the pars plana vitrectomy, while the causative agent also plays an important role. Bacterial agents usually have the best prognosis; parasitic and fungal microorganisms, on the other hand, have the worst. Systemic therapy can contribute to a positive postoperative outcome and should be started even before the pars plana vitrectomy, especially if the causative agent has been identified. In conclusion, endophthalmitis can still surprise us with the microorganisms involved, even though we know more about the disease and the treatment options than ever before. The patient's cooperation is crucial for successful and timely treatment, particularly in determining the causative agent.
2019-01-22T22:31:13.466Z
2018-12-04T00:00:00.000
{ "year": 2018, "sha1": "7fcd235bb92a4b281a258ff3f256ab12cdaef37d", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/495001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7fcd235bb92a4b281a258ff3f256ab12cdaef37d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119207533
pes2o/s2orc
v3-fos-license
Bounce inflation in $f(T)$ Cosmology: A unified inflaton-quintessence field

We investigate a bounce inflation model with a graceful exit into the Friedmann-Robertson-Walker (FRW) decelerated Universe within the $f(T)$ gravity framework, where $T$ is the torsion scalar of teleparallelism. We study the cosmic thermal evolution; the model predicts a supercold Universe during the precontraction phase, which is consistent with the requirements of slow-roll models, while it undergoes a reheating period by the end of the contraction, with a maximum temperature just below the grand unified theory (GUT) temperature. It then matches the radiation temperature of the hot big bang at later stages. The equation-of-state due to the effective gravitational sector suggests that our Universe is self-accelerated by teleparallel gravity. We assume the matter component to be a canonical scalar field and obtain the scalar field potential induced by the $f(T)$ theory. The power spectrum of the model is nearly scale invariant. In addition, we show that the model unifies the inflaton and quintessence fields in a single model. We also revisit the primordial fluctuations in $f(T)$ bounce cosmology to study the fluctuations produced during the precontraction phase.

I. INTRODUCTION

The standard model of cosmology (the big bang) has succeeded in tracing the cosmic thermal evolution in an elegant way, by comparing the particle interaction rates with the expansion rate. At very hot stages, the rate of interactions is much larger than the expansion rate and local thermal equilibrium can be achieved, while at later stages, as the Universe cools down, the interaction rate decreases faster than the expansion rate, allowing the particles to decouple from the thermal bath once the two rates become equal. However, the big bang suffers from several problems, e.g., causal connectedness, flatness, horizon, etc. This requires a superfast accelerated expansion phase at some early time, i.e., cosmic inflation [1][2][3][4][5], usually represented by an exponential expansion at ∼ 10⁻³⁵ s after the big bang. As a result, the Universe becomes isotropic, homogeneous and approximately flat. Standard inflation models assume the existence of a self-coupled scalar field (the inflaton) minimally coupled to gravity, whose potential governs the inflation model. During this stage, the initial quantum fluctuations cross the horizon and transform into classical fluctuations, producing a nearly scale-invariant spectrum of scalar perturbations. Although inflation solves the above-mentioned problems, one of the fundamental problems still persists: the initial singularity, which arises when tracing the Universe back in time as divergences of the cosmic temperature and density. Since the initial singularity precedes the inflationary era, the problem cannot be solved within the inflationary framework. Another serious problem is the trans-Planckian problem, which also appears in inflationary cosmology, where the cosmological scales that we observe at the present time correspond to length scales smaller than the Planck length at the onset of inflation [6,7]. One of the suggested alternatives is to assume that the scale factor initially shrinks down to a nonzero minimal value and then bounces into an expanding phase. In this case a singular or nonsingular bounce Universe can be obtained [8,9].
This idea has been extended to realize nonsingular cyclic Universe models, e.g., the pre-big-bang [10] and ekpyrotic [11] models. Apart from the nonsingularity issue, bounce cosmologies have many interesting features, such as solving the horizon and flatness problems even in the initial shrinking phase; also, these models can generate scale-invariant scalar perturbations, as supported by observations. However, bounce models usually face two main problems [12,13]. The first is the anisotropy problem: in the contraction phase the anisotropies grow faster than the background, so that the contraction ends with a completely anisotropic Universe, which violates the cosmological principle, and the bounce into an expanding phase will not occur. The second is the ghost instability problem: bounce cosmology violates the null energy condition (NEC), which gives rise to ghost degrees of freedom. However, both issues have been successfully resolved within a nonsingular bounce cosmology [14][15][16]. As a matter of fact, the above-mentioned anisotropy problem can be evaded if the equation-of-state parameter is larger than unity during contraction, since then the background dominates over the anisotropies. Indeed, a large equation-of-state parameter requires the potential to be negative in scalar field models. On the other hand, the ghost degrees of freedom are an outcome of using the GR theory, while other modified gravity theories could alter the situation (for reviews on modified gravity theories, see, for instance, [17][18][19][20][21][22][23][24][25]). In f (T ) gravity, where T is the torsion scalar described by the Weitzenböck connection in teleparallelism [26][27][28][29][30], it has been shown that nonsingular bounce solutions can be constructed in a straightforward way [9,31,32]. Also, it has been shown that f (T ) gravity combined with holonomy-corrected loop quantum cosmology supports the bounce Universe model [13,[33][34][35]].

In this sense, we organize the work as follows. In Sec. II, we review general relativistic cosmology, showing its limited flexibility in cosmological applications. In Sec. III, we discuss a possible choice of a scale factor capable of producing a reliable cosmological model. We show that two possible scenarios can be obtained according to the values of the model parameter: a graceful exit inflation or a bounce graceful exit inflation. Also, we use the nice feature of f (T ) cosmology that allows the modified Friedmann equation to be represented as a one-dimensional autonomous differential equation. This enables us to construct the corresponding (Ḣ − H) phase space, where the dynamical evolution of the model can be shown clearly. In Sec. IV, we construct an f (T ) theory corresponding to the bounce inflation model. Also, we evaluate the equation-of-state of the torsion gravity, showing its role in describing a healthy bounce Universe. In Sec. V, we discuss the thermal evolution of the Universe, showing that its maximum reheating temperature occurs at the bounce point. We show how the slow-roll condition arises naturally in this model as a consequence of the thermal evolution. We assume the matter to be a canonical scalar field and then obtain the potential corresponding to the f (T ) theory. The slow-roll potential provides a nearly scale-invariant spectrum consistent with observations, so the model does not suffer from the large tensor-to-scalar ratio that is usually obtained in bounce scenarios.
In addition, we show that for a particular case the model can unify the inflaton and quintessence fields in a single model. We also show that the NEC is not generally violated, which makes the model safe from the ghost instability problem. In Sec. VI, we extend our analysis to investigate the f (T ) theory at the perturbation level, studying the primordial fluctuations during the precontraction phase. The work is summarized in Sec. VII.

II. EINSTEIN'S COSMOLOGY

The Copernican (or cosmological) principle is believed to be a good approximation for constructing a reliable cosmological model. Standard cosmology today is a manifestation of the Copernican principle and Einstein's field equations, applied to the whole Universe,

G_μν = κ² T_μν, (1)

where G_μν is the Einstein tensor, κ² = 8πG/c⁴, G is Newton's gravitational constant and c is the speed of light in vacuum. We assume the natural unit system c = ħ = k_B = 1, and the stress-energy tensor T_μν is taken as that of a perfect fluid,

T_μν = (ρ + p) u_μ u_ν + p g_μν, (2)

where u^μ = δ^μ_0 is the 4-velocity of the fluid in comoving coordinates, and ρ and p are the density and pressure of the fluid, respectively. We also assume the Universe is FRW spatially flat, which gives rise to the metric

ds² = −dt² + a²(t) (dx² + dy² + dz²), (3)

where a(t) is the scale factor. Applying Einstein's field equations to the FRW Universe leads to Friedmann's equations

H² = (κ²/3) ρ, Ḣ + H² = −(κ²/6)(ρ + 3p), (4)

where H ≡ ȧ/a is the Hubble parameter and the dot denotes the derivative with respect to time. Constraining Friedmann's equations by the linear equation-of-state p = ωρ and solving for the scale factor [36],

a_FRW = a_k (t − t_i)^{2/[3(1+ω)]} for ω ≠ −1, and a_FRW = a_k e^{H_0 (t − t_i)} for ω = −1, (5)

where a_k, H_0 and t_i are constants. The former is the usual power-law scale factor: for ω > −1/3 the Universe is expanding with deceleration, while it is accelerated when ω < −1/3. The latter gives a de Sitter universe, where ω = −1, which does not allow the Universe to evolve; it could be considered at late phases rather than during inflation. We next discuss necessary consequences of using the above power-law scale factor. Since the classical laws of physics break down beyond the Planck time, we usually assume our description is valid from an initial time at Planck's time t = t_p ∼ 10⁻⁴⁴ s, where the temperature Θ_p ∼ 10³² K is the Planck temperature and the length ℓ_p ∼ 10⁻³³ cm is the Planck length. At the present time t_0, a rough estimate of the horizon scale is ct_0 ∼ 10²⁸ cm. So the ratio of the present value of the scale factor a_0 to its initial value at Planck's time a_i is given as a_0/a_i = Θ_p/Θ_0 ∼ 10³². However, the initial size of the present Universe is L_i ∼ ct_0 (a_i/a_0). If we assume that nothing is faster than the speed of light, the causal region size is L_c ∼ ct_p. Thus, at the Planck limit, the ratio of the expected initial size to the causal region size of the Universe is L_i/L_c ∼ 10²⁸. Here the need for an early accelerated expansion episode, inflation, becomes clear. As a matter of fact, this inflation requires the Universe to grow by a factor > 10²⁸, i.e., ∼ 64 e-folds, to be causally connected at a time ∼ 10⁻³⁵ s after the big bang. But the power-law scale factor (5) for ω > −1/3 gives ȧ ∼ a/t, such that L_i/L_c ∼ ȧ_i/ȧ_0 ≫ 1, which is consistent with the standard idea of gravity as an attractive force. However, the causally connected Universe condition implies that L_i/L_c ∼ ȧ_i/ȧ_0 < 1 at some early time. In this sense, gravity should act as a repulsive force during inflation.
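The arithmetic behind this estimate is easy to check. The following minimal Python sketch uses only the order-of-magnitude figures quoted above (all inputs are those quoted values, nothing more) to reproduce the ∼64 e-folds requirement and the attractive-gravity scaling of the power-law solution:

```python
import numpy as np

# Rough numeric check of the horizon-problem estimate quoted above.
# All figures are order-of-magnitude inputs taken from the text.
ratio = 1e28          # L_i / L_c quoted in the text

# e-folds needed so that the causal patch covers the initial size
# of the presently observable Universe:
N_efolds = np.log(ratio)
print(f"required e-folds  N > ln(1e28) = {N_efolds:.1f}")   # ~64.5

# Power-law expansion a ~ t^n with n = 2/(3(1+w)) gives adot ~ a/t,
# i.e. adot ~ t^(n-1); for w > -1/3 we have n < 1, so adot grows
# toward the past and L_i/L_c ~ adot_i/adot_0 >> 1 (horizon problem).
w = 1/3
n = 2.0 / (3.0 * (1.0 + w))
print(f"n = {n}: adot ~ t^(n-1) decreases with time (attractive gravity)")
```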
In addition, we need this early accelerated expansion phase to end at ∼ 10⁻³² s with a smooth transition to the standard FRW model, in order to keep the successes of big bang nucleosynthesis. In GR theory, the cosmic evolution is constrained by the scale factor only, so that any modification needed must come through the choice of the equation-of-state. As a matter of fact, in GR theory, any choice of the scale factor different from equation (5) leads to a nonconservative Universe as long as the equation-of-state is chosen as p = ωρ. In an alternative to GR theory, e.g., teleparallel gravity, we may see gravity in a different way [37][38][39][40][41][42][43]. However, in cosmological applications the teleparallel equivalent of general relativity (TEGR) suffers the same problem. One of the interesting modifications of gravity is f (T ) theory; this extended version of teleparallel gravity has received wide attention in the literature, in cosmology [44][45][46][47][48][49][50][51][52][53][54] and in astrophysics as well [55][56][57][58][59][60][61].

III. A MODIFIED SCALE FACTOR

Fortunately, the f (T ) gravity shows more flexibility with the FRW model: it allows two unknowns, a(t) and f (T ), in the field equations, so that we have two possible ways to fully specify the Universe: either by introducing a specific f (T ) in addition to the equation-of-state and then solving for a(t), or by introducing a scale factor in addition to an equation-of-state and then solving for f (T ). In these two cases, the Universe is conservative, and the gravitational sector is expected to play an important role in the cosmic dynamics. We take the second path to obtain a possible f (T ) theory describing how teleparallel gravity can perform an early acceleration episode with a smooth transition to the usual decelerated FRW epoch, with no need for the slow-roll approximation.

We summarize a useful tool for qualitatively describing the dynamical behavior of a flat FRW model: constructing its (Ḣ-H) phase space diagram [62]. This method uses the pressure properties, such as asymptotic behavior and fixed points, to analyze cosmological solutions. We first identify the zero acceleration curve through the deceleration parameter q ≡ −aä/ȧ² = 0, i.e., Ḣ = −H², which divides the phase space into two regions. The inner region characterizes the usual decelerated FRW models; this is shown by the shaded region in Fig. 1. The unshaded region represents the accelerated phases. We classify the different phases in Fig. 1 as follows: region (I) represents an accelerated contracting Universe, as q < 0 and H < 0; region (II) represents a decelerated contracting Universe, as q > 0 and H < 0; region (III) represents a decelerated expanding Universe, as q > 0 and H > 0, which characterizes the usual FRW models; and region (IV) represents an accelerated expanding Universe, as q < 0 and H > 0, which characterizes the so-called inflation or dark energy phases according to the dynamical evolution of the model. It is worth mentioning that one can engineer a Universe using a particular scale factor to fulfill the observational requirements.

A. A possible choice

As a result of the above discussion, we showed how the GR theory limits the choices available for performing an accelerated-to-decelerated expansion transition, unless we change the equation-of-state by hand from ω < −1/3 to ω > −1/3, respectively. However, the f (T ) gravity can perform this task with no need to change the equation-of-state manually.
This can be done by plugging a suitable scale factor into the f (T ) equations of motion. As a matter of fact, we need the scale factor to construct an (Ḣ − H) phase space able to cross the zero acceleration curve from region (IV) into region (III). For this reason, let us reintroduce the power-law scale factor with a correction term,

a(t) = a_FRW × a_corr = a_k (t − t_i)^{2/[3(1+ω)]} exp[−α/((1+ω)(t − t_i))], (6)

where α is a parameter with units of time; the usual FRW model is recovered by setting α = 0. Also, one finds that the radiation and matter dominated epochs are achievable at late times, when t − t_i ≫ |α|. In order to make our terminology as clear as possible, it is worth mentioning that we take the equation-of-state parameter of ultrarelativistic matter (e.g., radiation) to be ω = 1/3 for t_p < t < t_eq, where t_eq denotes the time of matter-radiation equality [i.e., ρ_r(t_eq) = ρ_d(t_eq)], with the subscripts r and d denoting the radiation and dust phases, respectively. At times t > t_eq, the equation-of-state parameter is taken as ω = 0 of cold matter (e.g., dust). It is convenient now to fix the values of the constants t_i and a_k in (6), in addition to the parameter α. So we impose three conditions: a(t = 0) = 0 at the initial singularity, with the equation-of-state parameter ω = 1/3; the acceleration ä(t_end) = 0, where t_end = 10⁻³² s denotes the time at the end of inflation; and a(t_0) = 1, with an equation-of-state parameter ω = 0, at the present time t_0 ∼ 10¹⁷ s. Using (6) with the just-mentioned conditions, we fix t_i = 0 and a_k = 4.6 × 10⁻¹², while the parameter α may take the values 1.61 × 10⁻³² s or −2.76 × 10⁻³³ s. In a nonphantom regime, and for t > 0, we discuss qualitatively two possible cases:

(i) For α > 0. As t → 0, the scale factor is initially a_i → 0, so we expect ρ_i(t) = ∞; that is the initial big bang singularity. At t ≪ α, we have a_FRW ≫ a_corr, while at t ≈ α we have a_FRW < a_corr; also, at t ≫ α we get a_FRW ≫ a_corr ∼ 1. This case gives rise to a typical graceful exit inflation model; see Fig. 2(a).

(ii) For α < 0. As t → 0⁺, the scale factor is initially a_i → ∞. At t ≪ |α|, we have a_FRW ≪ a_corr, while at t ∼ |α| we have a_FRW < a_corr; also, at t ≫ |α| we get a_FRW ≫ a_corr ∼ 1. This case gives rise to a bouncing universe; see Fig. 2(b).

FIG. 2. The models: (a) For α > 0, we have an initial big bang singularity followed by an inflation period capable of evolving into the FRW phase. (b) For α < 0, we have a bouncing behavior that avoids the trans-Planckian problems of inflationary models. (c) For α < 0, the velocity curve clearly shows a bouncing behavior; after the bouncing time t_B ∼ 4.14 × 10⁻³³ s, the model can also perform an early accelerated expansion period with a smooth transition (graceful exit) into an FRW model.

In both cases, we find that a_corr → 1 asymptotically, which matches the FRW phase perfectly at late times; see Fig. 2. However, we are interested in studying case (ii) above, so we take the negative-parameter model: the bouncing behavior avoids the big bang singularity, so the usual problems of inflationary cosmology, e.g., the trans-Planckian problem, are not encountered here. Interestingly, model (ii) can perform an early accelerated expansion phase with a smooth transition to an FRW decelerated expansion later. We determine the bouncing time t_B, at which the velocity ȧ = 0, where ȧ < 0 (contraction phase) at t < t_B, while ȧ > 0 (expansion phase) at t > t_B. This determines the bouncing time t_B = −3α/2 ≈ 4.14 × 10⁻³³ s.
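A minimal numeric sketch makes the bouncing branch concrete. The snippet below assumes the reconstructed form of Eq. (6) shown above, with the constants a_k and α quoted in the text, and locates the bounce by finding the root of ȧ:

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the corrected scale factor (6) on the bouncing branch:
# a(t) = a_k * t**n * exp(-alpha / ((1+w) t)),  t_i = 0, n = 2/(3(1+w)),
# with the radiation value w = 1/3 and the constants quoted in the text.
a_k, alpha, w = 4.6e-12, -2.76e-33, 1/3
n = 2.0 / (3.0 * (1.0 + w))                       # = 1/2 for radiation

def a(t):
    return a_k * t**n * np.exp(-alpha / ((1 + w) * t))

def adot(t):                                      # da/dt, analytic
    return a(t) * (n / t + alpha / ((1 + w) * t**2))

# The bounce is the root of adot(t) = 0; analytically t_B = -3*alpha/2.
t_B = brentq(adot, 1e-34, 1e-32)
print(f"numerical  t_B = {t_B:.3e} s")
print(f"analytic   t_B = {-1.5 * alpha:.3e} s")   # 4.14e-33 s, as quoted
print(f"a(t_B)     = {a(t_B):.3e}  (minimal scale factor)")
```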
The plot in Fig. 2(c), regardless of the initial contraction phase (t ≤ t_B), shows that the velocity ȧ experiences an increasing phase between the bounce point ȧ = 0 and the inflation end ä = 0 (i.e., ȧ = maximum) for t ∈ (t_B, t_end); then the velocity curve matches the decreasing phase of the power-law (FRW) scale factor. This indicates that model (ii) shares the features required for a successful graceful exit inflation model after the bounce, at t > t_end.

B. (Ḣ − H) phase space analysis

In the following, we construct the (Ḣ − H) phase space corresponding to the modified scale factor (6); then we track its phase portrait to extract information about the model at hand in a clear and transparent way.

a. Autonomous system. For the scale factor (6), we obtain the useful relation

Ḣ = x(nx − 2H), with x ≡ 1/(t − t_i) = [n ± √(n² + 4cH)]/(−2c), n = 2/[3(1+ω)], c = α/(1+ω), (7)

which represents a one-dimensional autonomous system. Here, Ḣ(H) is a double-valued function, as it should be in bounce cosmology. Such a double-valued function often appears when there is a first-order phase transition [63]. We take the plus sign to represent the Ḣ > 0 branch, while the minus sign represents the Ḣ < 0 branch.

b. Bounce cosmology. In Fig. 3 we draw the phase space diagram corresponding to (7), where the bounce point is clearly shown in Fig. 3(a) at the point (H_B = 0, Ḣ > 0). Before this point, the contraction phase appears as H < 0 and Ḣ > 0, while after this point the expansion phase is determined by H > 0 and Ḣ > 0. The contraction period can be evaluated as

Δt_contr = t_B − t_i = −3α/2 ≈ 4.14 × 10⁻³³ s, (8)

which is in agreement with the previous calculations.

c. Phantom crossing. We first determine that the fixed points (i.e., dH/dt = 0) are at the minimal Hubble H_min = H_1 = 0 (i.e., Minkowski space) and at the maximal Hubble H_inf = H_2 = −1/[9α(1+ω)] ∼ 2.07 × 10⁷ GeV, which represents an inflationary Universe with ω_eff = −1 (i.e., de Sitter space). So the period after the bounce needed to reach the de Sitter point H_inf can be evaluated as

t_dS − t_B = −3α − (−3α/2) = −3α/2 ≈ 4.14 × 10⁻³³ s, (9)

which makes the Universe stay in the Ḣ > 0 branch for a period of −3α ≈ 8.28 × 10⁻³³ s. Since the point H_2 is a fixed point, the above result may seem unconventional: we expect the time to reach any fixed point to be infinite, which is true when the trajectories are forced to increase or decrease monotonically. However, in our case the double-valued function alters the picture. We next investigate the possibility of crossing from the Ḣ > 0 branch to the Ḣ < 0 branch through the de Sitter fixed point H_2. The former branch behaves effectively as phantomlike (ω_eff < −1), while the latter is nonphantom (ω_eff > −1). The conditions for this transition to occur are listed in [62]: the first condition ensures that the crossing point lies at the fixed point H_inf; the second condition indicates that the pressure derivative dp(H)/dH has an infinite discontinuity at H_inf, so that the Universe reaches ω_eff = −1 in a finite time, although in the general relativistic framework the solution is not causal; the time to reach the crossing can be determined from the third condition. In addition to these conditions, it has been shown that the crossing is possible only when Ḣ(H) is a double-valued function [17,62]. Since the above-mentioned conditions are fulfilled in this model, the de Sitter fixed point is accessible and the transition to the standard inflationary era is valid.
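The double-valued structure of (7) is easy to exhibit parametrically. The sketch below assumes the reconstructed scale factor and constants above (grid and units are illustrative) and recovers both the bounce point and the de Sitter fixed point H_inf:

```python
import numpy as np

# Parametric sketch of the (Hdot - H) phase portrait behind Eq. (7):
# with x = 1/t one has H = n*x + c*x**2 and Hdot = -n*x**2 - 2*c*x**3,
# where n = 2/(3(1+w)) and c = alpha/(1+w) (t_i = 0).  For alpha < 0
# the resulting Hdot(H) curve is double valued.
alpha, w = -2.76e-33, 1/3
n, c = 2/(3*(1+w)), alpha/(1+w)

t = np.logspace(-34, -31, 100000)      # cosmic time [s]
x = 1/t
H    = n*x + c*x**2
Hdot = -n*x**2 - 2*c*x**3

i_B = np.argmin(np.abs(H))             # bounce: H = 0 with Hdot > 0
print(f"bounce at t ~ {t[i_B]:.2e} s, Hdot = {Hdot[i_B]:.2e} > 0")

t_dS  = -3*alpha                       # root of dH/dt = 0 (de Sitter point)
H_inf = n/t_dS + c/t_dS**2
print(f"de Sitter fixed point: t = {t_dS:.2e} s, H_inf = {H_inf:.3e} 1/s")
print(f"analytic check: -1/(9 alpha (1+w)) = {-1/(9*alpha*(1+w)):.3e} 1/s")
# ~3.0e31 1/s, i.e. ~2e7 GeV, the value quoted in the text.
```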
d. Inflationary Universe. In the following discussion one should deal with the Ḣ− branch. The phase portrait of (7) in Fig. 3(b) clearly shows a short inflationary period after the de Sitter stage, up to the intersection with the zero acceleration curve Ḣ− = −H², which is required to go from region (IV) to region (III), as indicated by Fig. 1. So we determine the possible transitions from acceleration to deceleration by identifying the intersections with the zero acceleration curve Ḣ− = −H². These intersections show that possible transitions occur only when the matter fluid is a phantom, ω < −1, or when ω ≥ −1/3. We are interested in the more physical case ω > −1/3. Now, if we restrict ourselves to the radiation case by taking ω = 1/3, the model predicts the transition from acceleration to deceleration at H_exit ∼ 2.01 × 10⁷ GeV, just below H_inf of the de Sitter Universe. Furthermore, we determine the inflation period by evaluating the elapsed time between the de Sitter point and this exit,

Δt_inf = t_end − t_dS ≈ 5.85 × 10⁻³³ s, (11)

so the Universe stays in the accelerated expansion phase for ∼ 5.85 × 10⁻³³ s.

e. Graceful exit inflation. Moreover, the model is capable of ending the inflationary phase gracefully into a decelerated expansion phase, which characterizes the standard FRW cosmology, at t ∼ 10⁻³² s. This can be shown easily by summing up (8), (9) and (11).

f. Standard decelerated FRW cosmology. As discussed before, when the cosmic time t ≫ |α|, the model approaches the standard decelerated FRW cosmology. This is shown clearly in Fig. 3(b), where the phase portrait of the model goes from region (IV) to region (III). Also, it shows that the model matches the radiation phase portrait of standard cosmology at late times. This is an important feature for matching the thermal history of the Universe; this point will be revisited in detail in Sec. V. Finally, we show that the model has no future singularity, since the time required to approach the next fixed point, i.e., Minkowski space at (H = 0, Ḣ = 0), is infinite:

t − t_0 = ∫_{H_0}^{0} dH/Ḣ → ∞.

g. Conclusion. In this sense, we find that the scale factor (6) with these constraints is not only a good candidate for describing a reliable graceful exit inflation model, but its bouncing behavior also avoids the trans-Planckian problems of the standard inflationary models. However, this version of the scale factor cannot work properly with the linear equation-of-state if one insists on using standard GR, because the continuity equation would be broken. On the contrary, we can keep using the linear equation-of-state along with the scale factor (6) if we switch to modified gravity theories. One of the modified gravity theories that has been used widely in cosmology is the f (T ) theory. Although this treatment can be applied in modified gravity generally, the modified Friedmann equations of any f (T ) theory can be viewed as a one-dimensional autonomous system [64], i.e., Ḣ = F(H). This feature is not available in other modified gravity theories, e.g., f (R), which contain higher derivatives of H. In this sense, we find that the phase space analysis is more consistent with f (T ) cosmology. However, similar models have been investigated, without using the phase space, in Gauss-Bonnet modified gravity [8,65].

A. Teleparallel space

In this section, we give a brief account of the absolute parallelism (AP) space. This space is denoted in the literature by many names: teleparallel, distant parallelism, Weitzenböck, absolute parallelism, vielbein, or parallelizable space. An AP-space is a pair (M, h_a), where M is an n-dimensional smooth manifold and h_a (a = 1, ..., n) are n independent vector fields defined globally on M.
The vector fields h a are called the parallelization vector fields. Recent versions of vielbein space with a Finslerian flavor may have an important impact on physical applications [66][67][68][69]. Let h a µ (µ = 1, ..., n) be the coordinate components of the a th vector field h a , where Greek and Latin indices are constrained by the Einstein summation convention. The covariant components h aµ of h a are given via the relations where δ is the Kronecker tensor. Because of the independence of h a , the determinant h ≡ det(h a µ ) is nonzero. However, the vielbein space is equipped with many connections [70][71][72][73]; on a teleparallel space (M, h a ), there exists a unique linear connection, namely the Weitzenböck connection, with respect to which the parallelization vector fields h a are parallel. This connection is given by and is characterized by the property that where the operator ∇ (Γ) ν is the covariant derivative with respect to the Weitzenböck connection. The connection (14) is referred to as the canonical connection. The relation (15) is known in the literature as the AP condition. The noncommutation of an arbitrary vector fields V a is given by where R α ǫµν and T ǫ νµ are the curvature and the torsion tensors of the canonical connection, respectively. The AP condition (15) together with the above noncommutation formula force the curvature tensor R α µνσ of the canonical connection Γ α µν to vanish identically. Moreover, the parallelization vector fields define a metric tensor on M by with the inverse metric The Levi-Civita connection associated with g µν is In view of (15), the canonical connection Γ α µν (14) is metric: The torsion tensor of the canonical connection (14) is defined as The contortion tensor K α µν is defined by where the covariant derivative ∇ (Γ) σ is with respect to the Levi-Civita connection. SinceΓ α µν is symmetric, it follows that [using (20)] one can also show the following useful relations: where T µνσ = g ǫµ T ǫ νσ and K µνσ = g ǫµ K ǫ νσ . It is to be noted that T µνσ is skew symmetric in the last pair of indices whereas K µνσ is skew symmetric in the first pair of indices. Moreover, it follows from (21) and (22) that the torsion tensor vanishes if and only if the contortion tensor vanishes. In the teleparallel space there are three Weitzenböck invariants: We next define the invariant T = AI 1 + BI 2 + CI 3 , where A, B and C are arbitrary constants [74]. For the values: A = 1/4, B = 1/2 and C = −1 the invariant T is just the Ricci scalar up to a total derivative term; then a teleparallel version of gravity equivalent to GR can be achieved. The teleparallel torsion scalar is given in the compact form where the superpotential tensor is skew symmetric in the last pair of indices. Also, there are different extensions of TEGR, e.g., Born-Infeld extension of the TEGR [75,76], another interesting variant is the modified teleparallel equivalent of Gauss-Bonnet gravity and its applications [77][78][79]. Another extension is the f (T ) gravity, it has been inspired by the f (R)-gravity when the Ricci scalar is replaced by an arbitrary function f (R) in the Einstein-Hilbert action. But the former is by replacing the teleparallel torsion scalar by an arbitrary function f (T ) [80][81][82][83]. We consider the action of the f (T ) gravity where L m is the Lagrangian of the matter and |h| = √ −g = det h µ a . The variation of the action (25) with respect to the tetrad gives ∂T 2 such that the TEGR theory is recovered by setting f (T ) = T . 
Also, the stressenergy tensor is assumed to be for perfect fluid as given by (2). The applications of the f (T ) gravity in cosmology show interesting results, for example, avoiding the big bang singularity by presenting a bouncing solution [31,32]. Also, f (T ) cosmology provides an alternative tool to study inflationary models [44,75,80,[84][85][86][87][88][89][90][91][92][93]. Although f (T )-theories lack invariance under a local Lorentz transformation [94][95][96] (for the related considerations, see [97][98][99][100][101][102][103]), a recent modification by considering nontrivial spin connections may solve the problem [104]. For more details of f (T ) gravity, see the recent review [105]. B. Constructing an f (T ) theory As presumed that the Copernican principle is valid, we take the flat FRW metric (3), which may give rise to the vierbein For the vierbein (27) and by using (6), the teleparallel torsion scalar (23) gives rise to the useful relation As we mentioned earlier, in the f (T ) framework, one needs to enter a particular scale factor or viable f (T ) in addition to a specific equation-of-state. In this model, we are interested to construct an f (T ) theory corresponding to the modified scale factor (6), where the equation-of-state is chosen to be linear p = ωρ. We apply the f (T ) field equations (26) to the vierbein (27), then the modified Friedmann equations read It is convenient to write the f (T ) in terms of time t. One easily can show that Substitute (28) and (31) into (29) and (30), then matter density and the matter pressure The continuity equation can be integrated to where the integration constant ρ 0 ≡ ρ(t 0 ) ≈ 1.88 × 10 −29 Ωh 2 g.cm −3 , the density parameter Ω, and the dimensionless hubble constant h are given by the observations. Combining (34) with (32), then solving for f (t), we get where the constant c 2 = − 4κ 2 ρ 0 a −3(1+ω) k 3α(1+ω) 2 ; α 0. Using the inverse relation of (28), one can rewrite the above result as f (T ) as usual. Thus the corresponding f (T ) theory which generates the scale factor (6) is given by where . Here, f (T ) + corresponds to the branchḢ > 0, and f (T ) − corresponds to the branchḢ < 0. One can show that the first term ∼ √ T in (36) has no contribution in the field equations, so we omit this term in the following without affecting the generality of the model. It is convenient to evaluate the evolution of the density and pressure of the matter, this can be achieved by substituting from (35) into (32) and (33); the density and pressure can be written as It is obvious that as t ≫ |α| the density and pressure of the standard FRW model is recovered. C. Effective equation of state It is convenient to transform from the matter frame we have been using to Einstein frame, which gives the Einstein's field equations form and additional degrees-of-freedom by f (T ) gravity. So we write the modified Friedmann equations in the case of f (T ) gravity, i.e., where the standard matter energy density ρ and pressure p have their torsion scalar counterpart ρ T and p T , are the torsion contributions to the energy density and pressure, respectively, which satisfy the continuitẏ One can show that ρ T and p T vanish where f (T ) = T and the standard Friedmann equations are recovered. We argue here that the quantities ρ T and p T can explain the early selfacceleration of the Universe. Then, by using Eqs. 
(40) and (41), we can define the effective torsion equation-of-state parameter as

ω_T ≡ p_T/ρ_T, (43)

which can be evaluated explicitly by using (28) and (35); it can be shown that ω_T = ω at t ≫ |α|. We plot the evolution of the equation-of-state parameter of the teleparallel torsion fluid in Fig. 4. Equation (43) is ill defined on the interval Σ = (t₊, t₋) ≈ (3.54 × 10⁻³³ s, 5.85 × 10⁻³³ s). So ω_T initially begins as a cosmological constant, ω_T → −1; it then goes to −∞ as t → t₊, while ω_T ≫ 1 for t₊ < t < t₋. This interval includes the bounce time t_B, which shows that the torsion equation of state is greater than unity during the contraction phase, as required for solving the anisotropy problem. After that, ω_T is negative again, crossing ω_T = −1 to connect to the observed expanding Universe [12]; it then crosses ω_T = −1/3, ending the early accelerated expansion at t ∼ t_end ≈ 10⁻³² s and entering a new phase of decelerated expansion. Finally, it approaches the radiation limit ω_T = ω = 1/3 at t ≫ |α|, as required to match the hot big bang consistently.

As discussed above, the evolution of the torsion equation-of-state fulfills the requirements of a successful bounce cosmology. In addition, it matches precisely the results of the phase space analysis of Sec. III B. This behavior supports our argument that the cosmic bounce is a manifestation of a higher-order teleparallel gravity; in other words, the vacuum f (T ) is a good candidate for describing bounce cosmology. We also define the effective (total) equation-of-state parameter

ω_eff ≡ (p + p_T)/(ρ + ρ_T), (44)

which can be evaluated by using (37), (40) and (41); it is obvious that ω_eff = ω at t ≫ |α|. We plot ω_eff in Fig. 4. Equation (44) shows that the Universe effectively starts as a cosmological constant, with ω_eff = −1; it then evolves to ω_eff → −∞ at the bounce time t_B = −3α/2 ∼ 4.14 × 10⁻³³ s, while it equals −1/3 when the accelerated expansion ends at t_end. Finally, it matches the radiation limit, i.e., ω_eff = ω = 1/3.

In conclusion, we find that the torsion and the effective equation-of-state parameters agree at all stages except at the bounce time: the latter goes to −∞, while the former is much greater than unity at that time. As is well known, the violation of the NEC is necessary to obtain a bouncing solution. In addition, violation of the strong energy condition (SEC) is necessary to obtain an accelerated expansion phase. The above results show clearly that the model effectively breaks these energy conditions in the early Universe, where ω_eff < −1. However, due to the limitations of GR, the violation of these energy conditions in the matter component is unavoidable there; even in effective field theory, a bounce Universe is usually achieved by introducing matter fields that violate the NEC. On the contrary, this picture can be altered if we use the f (T ) gravity, which violates the NEC effectively. We can always use this feature to produce a healthy bounce solution, in which the matter component remains consistent with the NEC. This will be discussed in detail in Sec. V D.

A. Reheating in bounce universe

As mentioned before, the key to the thermal history is to compare the rate of interactions Γ with the rate of expansion H. In the case Γ ≫ H, the time scale of the particle interactions is much smaller than the expansion time scale, t_c ≡ 1/Γ ≪ t_H ≡ 1/H; thus, a local thermal equilibrium can be reached before the effect of the expansion becomes relevant.
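As an illustration of the Γ ≫ H criterion, the following sketch reproduces the textbook estimate of weak-interaction decoupling in a radiation-dominated era; the rates used here (Γ ∼ G_F²Θ⁵ and H ∼ Θ²/M_p, in natural units) are standard order-of-magnitude forms and are not taken from the present model:

```python
import numpy as np

# Textbook decoupling estimate for the Gamma >> H criterion:
# weak interactions in the radiation era (natural units, GeV).
G_F  = 1.17e-5        # Fermi constant [GeV^-2]
M_pl = 1.22e19        # Planck mass   [GeV]

T = np.logspace(-4, 1, 500)           # temperature [GeV]
Gamma = G_F**2 * T**5                 # weak interaction rate ~ G_F^2 T^5
H     = T**2 / M_pl                   # radiation-era expansion rate

# Decoupling happens where the two rates cross (compare their ratio,
# not their difference, since both span many orders of magnitude):
T_dec = T[np.argmin(np.abs(np.log10(Gamma / H)))]
print(f"weak decoupling at T ~ {T_dec:.2e} GeV (~1 MeV, neutrino decoupling)")
```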
Afterwards, as the Universe cools down, Γ decreases faster than H, so that at t_c ∼ t_H the particles decouple from the thermal bath. Different particle species may have different interaction rates and so may decouple at different times. On the other hand, one of the essential ingredients of inflationary models is the reheating of the Universe by the end of inflation. In order to examine the capability of the model to predict a successful thermal evolution, we define the entropy S of all particles in thermal equilibrium at temperature Θ in a volume V. According to the first law of thermodynamics, in the expanding Universe we have dS = [d(ρV) + p dV]/Θ, with the integrability condition [106] that the energy density and pressure satisfy

dp/dΘ = (ρ + p)/Θ. (46)

Using (37) and (46), we evaluate the temperature Θ(t) in closed form [Eq. (47)], where Θ_0 ≡ Θ(t_0) is an arbitrary constant with dimension K.

FIG. 5. The temperature evolution (47) shows a reheating after inflation; the maximum effective temperature by the end of reheating is just below the GUT temperature, Θ_eff = Θ_rh,max ∼ 10²⁶ K. Then the effective temperature evolves similarly to the radiation temperature Θ_r of the hot big bang.

We choose a boundary condition such that the temperature is Θ ∼ 2.73 K at the present time t_0 ∼ 10¹⁷ s > t_eq, with an equation-of-state parameter ω = 0. This determines the value Θ_0 = 2.73 K. In standard cosmology we expect an extremely high temperature as the Universe goes back towards the initial singularity, i.e., Θ_i → ∞ as a_i → 0. However, in the present model, equation (47) indicates that the temperature is initially extremely small, i.e., Θ_i → 0 K as a_i → ∞, during the precontraction phase. The temperature then increases as a decreases during the contraction, reaching its maximal value at the bounce time t_B. From the temperature (47), it can be shown that the maximum temperature by the end of reheating, at t_B = −3α/2 ∼ 4.14 × 10⁻³³ s, is Θ_rh,max ∼ 4.8 × 10²⁶ K; see Fig. 5. Also, it is clear from (47) that the temperature evolves just as in standard cosmology at t ≫ |α|. In conclusion, the model predicts an initially low temperature, after which a reheating of the Universe occurs during the contraction phase. At the bounce time, the Universe reaches its maximum temperature, ∼ 10²⁶ K, which is just below the GUT temperature Θ_GUT ∼ 10²⁷ K; so the model is safe from reproducing an unacceptable amount of monopoles after inflation. This is followed by a very short period of accelerated expansion, cooling the Universe down to match exactly the thermal evolution of the standard model. So we gain the successes of the hot big bang scenario as well.

B. Unified inflaton-quintessence field

The above result may seem unfamiliar at first: one might expect the temperature to start at Θ_p at Planck's time, not at Θ ∼ 0 K. As a matter of fact, this model predicts a more physical scenario when dealing with a scalar field component; we discuss this point in detail in the following section. In order to investigate the scalar field induced by the theory at hand, we take the matter component to be a canonical scalar field with density ρ_φ and pressure p_φ defined as

ρ_φ = φ̇²/2 + V(φ), (48)
p_φ = φ̇²/2 − V(φ), (49)

where φ̇² represents the kinetic term of the scalar field and V(φ) is its potential. Combining (48), (49), (38) and (39), we write the kinetic term and the potential of the scalar field as

φ̇² = ρ + p = (1 + ω)ρ, (50)
V(φ) = (ρ − p)/2 = (1 − ω)ρ/2. (51)

In order to be consistent with the literature, we may use κ² = 1/M_p², with M_p = 1.22 × 10¹⁹ GeV.
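The kinetic-potential split (50)-(51) can also be integrated numerically, without the closed-form solution derived below. The sketch assumes the conserved-fluid density ρ = ρ_0 a^{−3(1+ω)} with an arbitrary normalization ρ_0; it is an illustration of the construction, not the paper's exact solution:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Numeric sketch of the induced scalar field: for a fluid with p = w*rho,
# Eqs. (50)-(51) give phidot^2 = (1+w)*rho and V = (1-w)*rho/2, with
# rho = rho_0 * a^{-3(1+w)} from the continuity equation.  rho_0 = 1 is
# an arbitrary normalization, not the paper's value.
a_k, alpha, w, rho_0 = 4.6e-12, -2.76e-33, 1/3, 1.0
n = 2/(3*(1+w))

t   = np.logspace(-34, -30, 4000)
a   = a_k * t**n * np.exp(-alpha/((1+w)*t))
rho = rho_0 * a**(-3*(1+w))

phidot = np.sqrt((1+w) * rho)          # kinetic part, Eq. (50)
V      = 0.5 * (1-w) * rho             # potential part, Eq. (51)
phi = cumulative_trapezoid(phidot, t, initial=0.0)   # phi(t), up to phi_0

print(f"phi runs over [{phi[0]:.3e}, {phi[-1]:.3e}]  (arbitrary units)")
print(f"V/phidot^2 = {(V/phidot**2)[-1]:.3f}")       # (1-w)/(2(1+w)) = 0.25
```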
Equations (50) and (51) are consistent with the background (Klein-Gordon) equation of a homogeneous scalar field in the expanding FRW Universe,

φ̈ + 3Hφ̇ + dV/dφ = 0. (52)

Inserting (28) and (35) into (50) and (51), we get

φ̇² = (1 + ω) ρ_0 a_k^{−3(1+ω)} t⁻² e^{3α/t}, (53)

which can be integrated exactly to

φ(t) = φ_0 + ξ Ei(1, −3α/(2t)), (54)

where φ_0 is a constant of integration, ξ ≡ √((1 + ω)ρ_0) a_k^{−3(1+ω)/2}, Ei(1, x) is the exponential integral, and γ is Euler's constant, approximately 0.5772.... Since α < 0, the Ei function in (54) is real, and consequently so is the scalar field. Integrating the Klein-Gordon equation (52), we have

V(t) = (1 − ω)ρ(t)/2 + V_0, (56)

where V_0 is the constant of integration. One can reobtain the above solution from (51) without the V_0 term; however, the presence of V_0 can be recovered by considering a cosmological constant in the f (T ). We shall discuss this issue more concretely later in this section. Using the inverse relation of (54), we can eliminate t in (56) and rewrite the potential as V(φ), as usual. Substituting from (53) and (56) into (48) and (49), we evaluate the scalar field energy density and pressure,

ρ_φ = ρ + V_0, (57)
p_φ = ωρ − V_0. (58)

Then the equation-of-state parameter ω_φ = p_φ/ρ_φ of the scalar field is

ω_φ = (ωρ − V_0)/(ρ + V_0). (59)

It is important to investigate a possible crossing of the equation of state through the phantom divide line ω_φ = −1 or the quintessence limit ω_φ = −1/3. Independently of the value of V_0, Eq. (59) indicates no crossing into the phantom phase, so the scalar field bounce model always connects to the observed expanding Universe. We determine the value of V_0 by requiring ω_φ = −1/3 at a time t_s chosen according to cosmological constraints; thus we have

V_0 = (1 + 3ω) ρ(t_s)/2. (60)

If we assume ω = 1/3 along with t_s = 10⁻³² s, at the graceful exit time, we have V_0 ∼ 5.29 × 10⁷⁸. If we choose t_s = 10¹⁷ s, at the late-time acceleration, we have V_0 ∼ 1.21 × 10⁻¹⁹. Moreover, inserting (60) into (59) implies that the crossing of the scalar field through the quintessence limit follows one of three patterns, according to the value of V_0:

a. Case V_0 = 0. For a vanishing value of V_0, we get ω_φ = ω, so the scalar field has a fixed equation of state.

b. Case V_0 ≠ 0. For nonvanishing values of V_0, we have two possible scenarios. (i) For large V_0, the Universe is trapped in an inflationary phase: it begins with ω_φ = −1, and ω_φ then moves to higher values. In order to obtain the acceleration-to-deceleration transition at the end of inflation, i.e., t_s = 10⁻³² s, we choose the large value V_0 ∼ 5.29 × 10⁷⁸. This pushes the equation-of-state parameter just above the quintessence limit, ω_φ ≳ −1/3, for a very short period, t ∈ (10⁻³³, 10⁻³²) s; it then goes back towards ω_φ → −1, i.e., an eternal de Sitter phase, and never matches the radiation limit. (ii) For small V_0, similarly to the previous case, the Universe begins with ω_φ = −1; in this case, however, it can end its early accelerated expansion phase and enter a decelerated one for a reasonably long period, with a later transition to a de Sitter Universe, just as in ΛCDM cosmology. We find that for smaller values of V_0 the later transition is towards de Sitter. In order to obtain the late transition at t_s = 10¹⁷ s, we choose V_0 ∼ 1.21 × 10⁻¹⁹; the early transition from acceleration to deceleration is then obtained at t ∼ 10⁻³⁵ s. Interestingly, the radiation limit in this case is allowed for t ∈ (∼ 10⁻³⁵ s, t_eq). So we find that the scalar field unifies the inflaton and quintessence fields in a single model. The three patterns of the scalar field equation-of-state parameter are shown in the plots of Fig. 6.
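The three patterns can be generated directly from the reconstructed Eq. (59). In the sketch below the V_0 values are illustrative (arbitrary units tied to the normalization ρ_0 = 1), chosen only to reproduce the qualitative regimes discussed above:

```python
import numpy as np

# Sketch of the three w_phi patterns of Eq. (59):
#   w_phi = (w*rho - V0) / (rho + V0),   rho = rho_0 * a^{-3(1+w)}.
# V0 values are illustrative, in the arbitrary units set by rho_0 = 1.
a_k, alpha, w = 4.6e-12, -2.76e-33, 1/3
n = 2/(3*(1+w))

def w_phi(t, V0, rho0=1.0):
    a = a_k * t**n * np.exp(-alpha/((1+w)*t))
    rho = rho0 * a**(-3*(1+w))
    return (w*rho - V0) / (rho + V0)

t = np.logspace(-35, -30, 7)
for V0 in (0.0, 1e95, 1e107):          # V0 = 0, "small", "large"
    print(f"V0={V0:.0e}:", np.round(w_phi(t, V0), 2))
# V0 = 0 gives w_phi = w identically; V0 > 0 keeps w_phi >= -1 (no phantom
# crossing) and drives w_phi -> -1 whenever rho << V0.  For the largest V0
# the field rises above -1/3 only briefly around the bounce, as in Fig. 6.
```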
In order to comprehend the results for the induced scalar field within the frame of the phase space analysis of Sec. III B, note that the V_0 term represents a ground state, or background, cosmological constant. In the large-V_0 regime, the Universe has a strong tendency to remain in a de Sitter state, while in the small-V_0 regime, the Universe stays outside de Sitter for a longer period before being pulled back to de Sitter as its final fate. This can be compared to adopting a large or a small value of the cosmological constant in the theory. The conclusion can easily be seen in the phase space diagram, where the presence of a positive cosmological constant shifts the phase portrait vertically upwards, such that the larger the value of Λ, the larger the shift of the portrait. In this model the portrait, Fig. 2(b), generally cuts the zero acceleration curve at two points. When the cosmological constant is large, the period between these two crossings is short, just as in the large-V_0 regime of the scalar field. On the other hand, when the cosmological constant is small, the phase portrait is allowed to remain in the decelerated FRW cosmology for a longer period. However, in both cases, the Universe evolves towards a de Sitter fixed point, instead of Minkowski, in an infinite time, which represents a scenario similar to that of a ΛCDM Universe.

C. Slow-roll validity

As discussed in Sec. V A, the cosmic temperature begins very low, Θ ∼ 0 K, as predicted by the model. This result is compatible with the slow-roll condition V(φ) ≫ φ̇², whereby the inflationary epoch is dominated by the scalar field potential only; accordingly, its equation-of-state parameter is ω_φ ≈ −1. The latter assumption is called the slow-roll condition. In fact, this condition cannot be justified unless the temperature during that episode is very small. According to Eq. (47), the slow-roll condition can be valid in the precontraction phase, as well as in the late Universe, when the temperature is low, as shown in Fig. 5. In order to examine the viability of the model at hand, we write the slow-roll parameters

ε_V = (M_p²/2)(V_φ/V)², η_V = M_p² (V_φφ/V), (61)

where V_φ = dV/dφ and V_φφ = d²V/dφ². Using (61), we evaluate the two observable parameters. For large V_0, the tensor-to-scalar ratio is r = 16ε_V ∼ 9.7 × 10⁻⁴ and the scalar tilt (spectral index) is n_s = 1 − 6ε_V + 2η_V ∼ 1.0004. Although the spectrum is nearly scale invariant, the spectral index in this case is slightly blue tilted, which is disfavored by the observations. However, for a vanishing or small V_0, we evaluate the two observables as r ∼ 1.56 × 10⁻² and n_s ∼ 0.997, which are in agreement with the recent observations by the Planck satellite and by BICEP2 and the Keck Array [107][108][109][110]. In the above calculations, we assumed ω = 1/3; for different choices of ω the corresponding values of V_0 will be different, but the qualitative behavior is the same. However, for V_0 ≠ 0 models with the choice ω = 1, we find that the scalar power spectrum is the scale-invariant Harrison-Zel'dovich spectrum, with r = 0 and n_s = 1, which is ruled out by the Planck 2015 results. We develop a novel technique to trace the matter equation of state in order to produce a nearly scale-invariant power spectrum at different times, from near the bounce to the end of inflation.
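That technique is developed next. As a quick consistency check of the numbers just quoted, the following sketch back-solves ε_V and η_V from the reported r and n_s using the standard slow-roll relations; the inputs are the quoted observables themselves, not a computation from the full potential:

```python
# Consistency check of the quoted slow-roll observables, using the
# standard relations r = 16*eps_V and n_s = 1 - 6*eps_V + 2*eta_V.
def observables(eps_V, eta_V):
    return 16*eps_V, 1 - 6*eps_V + 2*eta_V     # (r, n_s)

# small/vanishing V_0 case quoted above: r ~ 1.56e-2, n_s ~ 0.997
eps_V = 1.56e-2 / 16                    # = 9.75e-4
eta_V = (0.997 - 1 + 6*eps_V) / 2       # ~ 1.4e-3
r, n_s = observables(eps_V, eta_V)
print(f"eps_V={eps_V:.2e}, eta_V={eta_V:.2e} -> r={r:.3e}, n_s={n_s:.4f}")
```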
This can be done by substituting from (60) into (56), which can be rewritten as Then the slow roll parameters (61) read Now, we obtain the observable parameters r and n s as functions of ω and t, since all the constants appear in the above equations are known. By requiring the reasonable values of r and n s from observations, one can get an explicit relation between ω and t that produces the observed power spectrum. We give the results in the plots of Fig. (7), which is represented in the (ω − t) plane, where the small (large) V 0 is due to the choice t s = 10 −32 s (t = 10 17 s) just as previously identified. The intersections of the curves determine which choice of the matter equation-of-state at different times verify the desired values of r and n s . As shown by Fig. 7(b), the small V 0 model is more flexible with observations. In addition, it shows clearly that ω > 1 in the contraction phase is the natural choice allowing the contraction phase not only to dominate over the anisotropy evolution but also to produce a scale invariant power spectrum. In conclusion, we find that the slow-roll condition is natural in this model. In addition, by applying the slow-roll approximation, we find that the scalar field induced by f (T ) of this bounce model does not suffer from the problem of a large tensor-to-scalar ratio, which usually faces bouncing models. However, it produces a nearly scale invariant spectrum of the scalar field in a good agreement with observations. Moreover, it provides a unified field representing inflaton and quintessence in a single model. D. Energy conditions In the previous section, we have shown that the large tensor-to-scalar ratio in bouncing models is avoidable in our model. Here, we investigate another major problem that usually faces bouncing models, that is the violation of the NEC, which gives rise to the ghost instability problem. As we have shown in Sec. IV C, the f (T ) gravity breaks the NEC effectively at the early time as required to obtain a bounce solution. However, in this section we argue that this feature can help to induce a scalar field model free from ghosts. Next, we briefly present the necessary background of the energy conditions. In the context of a geometric field theory, e.g., the GR theory, it is enough to apply the Bianchi II identity to guarantee the matter conservation. This leaves us with a huge amount of arbitrariness of the choice of the matter source. This implies to impose a particular kind of matter, e.g., dust, radiation, scalar field, electromagnetism, . . .. However, in modified gravity theories, e.g., f (T ) gravity, the dark sector of the Universe arises as an effective gravity in the field equations. Energy conditions strategy can be used to limit the arbitrariness of the T µν for a variety of different sources. In order to describe the interaction between any two nearby bits of matter, we should remember the Raychaudhuri equation. This equation represents the fundamental lemma of the Penrose-Hawking singularity theorems. Raychaudhuri equation for a congruence of timelike (or null) geodesics, respectively, in spacetime can be written as The lhs of Raychaudhuri equation identifies the temporal evolution of the expansion of scalar ϑ, while the rhs contains two classifications: the first promotes a collapsing configuration due to a nonzero initial expansion scalar, shearing σ µν , and the second opposes the collapsing configuration due to a nonzero vorticity ω µν . 
However, the contribution of the last term, R_μν u^μ u^ν for the timelike congruence (with u^μ an arbitrary timelike vector) or R_μν k^μ k^ν for the null congruence (with k^μ an arbitrary null vector), is restricted by the energy conditions. There are four forms of energy conditions, namely: the weak energy condition (WEC), the NEC, the SEC and the dominant energy condition (DEC). As a result of the attraction of gravity, the focusing theorem states that dϑ/dτ < 0, which implies the positivity of the trace of the tidal tensor, i.e.,

R_μν u^μ u^ν ≥ 0 and R_μν k^μ k^ν ≥ 0,

which give precisely the SEC and the NEC, respectively. In terms of the stress-energy tensor and its trace T, these can be written as

(T_μν − ½ T g_μν) u^μ u^ν ≥ 0 and T_μν k^μ k^ν ≥ 0. (67)

In the case of a perfect fluid, the SEC and NEC inequalities (67) reduce to ρ + 3p ≥ 0 and ρ + p ≥ 0, while the WEC and DEC demand the constraints ρ ≥ 0 and ρ ± p ≥ 0. We summarize the energy conditions of a perfect fluid as

Weak: ρ ≥ 0, ρ + p ≥ 0; Null: ρ + p ≥ 0; Strong: ρ + 3p ≥ 0, ρ + p ≥ 0; Dominant: ρ ≥ |p|. (68)

The Raychaudhuri equation is a purely geometrical relation, so that using different geometries gives rise to different descriptions of the Raychaudhuri equation. Also, the idea of energy conditions can be generalized to the modified theories of gravity. All ordinary matter, and even the vacuum expectation value (vev) of a scalar field, obeys the DEC, while during inflation this condition should be relaxed. For the scalar matter (57) and (58), it is convenient to visualize the evolution of the four ingredients (ρ_φ, ρ_φ ± p_φ, and ρ_φ + 3p_φ) that are necessary for studying any of the energy conditions (68). This is shown in the plots of Fig. 8. (i) The case of a large value of V_0 is shown in Fig. 8(a); we find that only the SEC is violated, being verified just during the very short time interval 2.1 × 10⁻³³ ≲ t ≲ 10⁻³² s. This result is consistent with the previous result of Sec. V C, where the equation-of-state ω_φ is allowed to exceed the quintessence limit −1/3 only during this short period. This was explained before as a result of the large background potential V_0, which drags the Universe back to its de Sitter phase very shortly. Also, we can see that the other energy conditions are verified for the large-V_0 model. (ii) The case of a vanishing V_0 is shown in Fig. 8(b); we find that all the energy conditions are verified. This is also in agreement with the previous results, where ω_φ > −1/3 for this model. (iii) The case of a small value of V_0 is shown in Fig. 8(c); we find that the SEC is verified during the time interval 3.4 × 10⁻³⁵ < t < 10¹⁷ s, while it is violated elsewhere. This result can also be verified from Fig. 6, where ω_φ > −1/3 during this interval; at very early and very late times, however, −1 < ω_φ < −1/3, which explains those accelerated expansion phases. We summarize these results in Table I.

TABLE I. Energy conditions of the scalar field for the three V_0 patterns.

Condition | V_0 = 0 | small V_0 | large V_0
WEC: ρ_φ ≥ 0 | always | always | always
WEC: ρ_φ + p_φ ≥ 0 | always | always | always
NEC: ρ_φ + p_φ ≥ 0 | always | always | always
SEC: ρ_φ + p_φ ≥ 0 | always | always | always
SEC: ρ_φ + 3p_φ ≥ 0 | always | 10⁻³⁵ ≲ t ≲ 10¹⁷ s | 10⁻³³ ≲ t ≲ 10⁻³² s
DEC: ρ_φ ≥ |p_φ| | always | always | always

As is well known, bounce models require a temporary violation of the NEC about the bounce time. Interestingly, we find that the NEC is fulfilled for the matter field during the bounce phase, which makes the model free from a ghost instability. The NEC violation is instead due to the effective torsion field, not the matter field. This nice feature of f (T ) gravity could provide a better environment for obtaining a healthy bounce model.

VI. PRIMORDIAL FLUCTUATIONS IN f (T ) COSMOLOGY

In Sec.
VI. PRIMORDIAL FLUCTUATIONS IN f(T) COSMOLOGY

In Sec. IV, we discussed the cosmological bounce in f(T) cosmology at the background level. In this section, we extend our analysis to investigate the theory at the perturbation level. In order to identify the true perturbations, which do not change under gauge (coordinate) transformations, one has to fix the gauge freedom. We choose the longitudinal (conformal Newtonian) gauge, which fixes the gauge completely and involves only two scalar metric fluctuations, Φ and Ψ. By comparison with the weak-field limit of GR about Minkowski space, one can see that the metric fluctuation Φ plays the role of the gravitational potential. Assuming that the anisotropic stress vanishes, one obtains Ψ = Φ. The authors of [31] have shown that the gravitational potential Φ can be completely determined by the scalar field fluctuation δφ; therefore, there exists only a single degree of freedom in the scenario of f(T) gravity minimally coupled to a canonical scalar field. In order to understand the evolution of scalar-sector metric perturbations, we use the perturbed equation of motion for the gravitational potential Φ instead of the scalar field fluctuation δφ. The complete form of the equation of motion for one Fourier mode Φ_k with comoving wave number k is given in [31]. In terms of the canonically normalized variable v_k, governed by (79), its solutions can be matched in two regimes:

(i) At small scales, when c_s k ≫ aH (sub-Hubble): the modes oscillate, and the vacuum initial condition fixes the solution (80).

(ii) At large scales, when c_s k ≪ aH (super-Hubble): when the fluctuations exit the sound horizon, we have z_s''/z_s ≫ c_s²k² in (79). Consequently, its solution gives rise to

v_k = C_k z_s, (frozen fluctuations) (81)

where C_k is a constant, which can be determined by matching the solutions (80) and (81) at sound horizon exit (i.e., c_s k = aH). Consequently, we obtain C_k (Eq. (82)). Using (77) and (78), we obtain the power spectrum for the scalar perturbations in the framework of f(T) gravity, Eq. (83) [46], which should be evaluated at sound horizon exit, specified by c_s k = aH. This equation reduces to the standard result for slow-roll inflation if the sound speed equals the light speed (c_s = 1). The scalar spectral index is defined as n_s − 1 ≡ d ln P_s / d ln k. Since the Hubble parameter H and the sound speed c_s are almost constant during slow-roll inflation, using the relation c_s k = aH, valid at sound horizon exit, we can obtain the relation (85).

The tensor power spectrum

Now we turn to study the tensor perturbations in f(T) gravity. According to [31,113], the equation governing the tensor perturbation h_ij can be obtained as Eq. (86), where the parameter γ is defined by [46]

γ ≡ Ṫ f_TT / f_T. (87)

When the tensor perturbation h_ij is constrained to be symmetric (h_ij = h_ji), transverse (∂_i h_ij = 0), and traceless (h_ii = 0), it has only two degrees of freedom, corresponding to the two polarization modes of the gravitational waves. The Fourier transform of the tensor perturbation is taken in the standard way, with an index r identifying the polarization state. Each state of h_ij(t, x) can therefore be written as a scalar field h_r(t, x) multiplied by a polarization tensor ξ^r_ij, which is constant in space and time. Using the above in (86), we get the evolution equation (89). Changing to conformal time and rescaling the field by introducing the canonically normalized field variable u_k = z_t h_k, where z_t is defined in (91), the equation of motion of the tensor fluctuations (89) gives rise to (92). Following the same procedure used for the scalar perturbations, we can find the two solutions of (92) at sub-Hubble and super-Hubble scales.
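The sub-/super-horizon matching just described can be illustrated numerically. The sketch below integrates the Mukhanov-Sasaki-type equation v_k'' + (c_s²k² − z_s''/z_s)v_k = 0 in conformal time, with the near-de Sitter approximation z_s''/z_s ≈ 2/τ² standing in for the full f(T) background (an assumption made here for illustration). The freeze-out v_k → C_k z_s of Eq. (81) shows up as |v_k/a| approaching a constant after sound horizon exit.

```python
import numpy as np
from scipy.integrate import solve_ivp

H, CS, K = 1.0, 0.1, 100.0            # Hubble rate, sound speed, comoving k (toy units)

def rhs(tau, y):
    """v'' = -(c_s^2 k^2 - z''/z) v with z''/z = 2/tau^2 (near-de Sitter)."""
    v, dv = y
    return [dv, -(CS**2 * K**2 - 2.0 / tau**2) * v]

tau0 = -200.0                          # deep inside the sound horizon: c_s k |tau| >> 1
# Bunch-Davies-like initial data, v_k = e^{-i c_s k tau}/sqrt(2 c_s k) (real part).
v0 = np.cos(CS * K * tau0) / np.sqrt(2 * CS * K)
dv0 = -CS * K * np.sin(CS * K * tau0) / np.sqrt(2 * CS * K)

sol = solve_ivp(rhs, (tau0, -1e-3), [v0, dv0],
                rtol=1e-8, atol=1e-10, dense_output=True)

for tau in -np.logspace(np.log10(200.0), -3, 6):
    a = -1.0 / (H * tau)               # de Sitter scale factor in conformal time
    v = sol.sol(tau)[0]
    print(f"tau={tau:9.3f}  c_s k|tau|={CS*K*abs(tau):8.2f}  |v/a|={abs(v)/a:.4e}")
```

Sound horizon exit occurs at c_s k |τ| = 1; for later times |v/a| flattens, which is the "frozen fluctuations" behavior quoted above.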
Therefore, the tensor power spectrum can be obtained by matching the solutions at horizon exit, k = aH; it is the sum of the power spectra P_h of the two polarization modes of h_ij, Eq. (93).

Tensor-to-scalar ratio

One of the most important inflationary observables is the tensor-to-scalar ratio. It is tightly constrained by the Planck satellite and the BICEP2 and Keck Array experiments [107-110]; therefore, it can be used to exclude unviable inflationary models. This observable is defined by

r ≡ P_t / P_s. (94)

Another inflationary observable, though not yet accurately measured, is the tensor spectral index, defined as

n_t ≡ d ln P_t / d ln k. (95)

In the inflationary scenario, the Hubble parameter H is nearly constant, i.e., Ṫ ≃ 0, and from (87) we then have γ ≃ 0 as well. Consequently, Eq. (91) reduces to the standard solution z_t = a, and (93) reduces to the standard expression (96), which matches the tensor power spectrum in Einstein gravity. This relation must be calculated at the time of horizon crossing, for which k = aH. This time is not exactly the same as the time of sound horizon crossing, for which c_s k = aH, but to lowest order in the slow-roll parameters the difference is negligible. Therefore, using Eqs. (83), (94) and (96), the tensor-to-scalar ratio is obtained in the standard form with sound speed,

r = 16 c_s ε. (97)

Using (85), (95), and (96), we can obtain the tensor spectral index as

n_t = −2ε. (98)

From (97) and (98), we write the consistency relation as

r = −8 c_s n_t. (99)

We conclude that inflation in f(T) gravity differs from Einstein gravity by the introduction of the sound speed. On the other hand, the standard inflationary result of Einstein's gravity is recovered when c_s = 1, which is valid during the contraction phase of the bounce Universe models.
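A trivial numeric sanity check of the relations (97)-(99) as quoted above (assumed standard single-field forms with a sound speed; c_s = 1 recovers the Einstein-gravity case):

```python
def tensor_to_scalar(eps, c_s):
    """r = 16 * c_s * epsilon, Eq. (97)-style."""
    return 16.0 * c_s * eps

def tensor_tilt(eps):
    """n_t = -2 * epsilon, Eq. (98)-style."""
    return -2.0 * eps

for eps in (0.001, 0.005, 0.01):
    for c_s in (0.1, 0.5, 1.0):
        r, nt = tensor_to_scalar(eps, c_s), tensor_tilt(eps)
        assert abs(r - (-8.0 * c_s * nt)) < 1e-12    # consistency relation (99)
        print(f"eps={eps:.3f} c_s={c_s:.1f} -> r={r:.4f}, n_t={nt:+.4f}")
```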
B. Conservation of the comoving curvature perturbations in the precontraction phase

It is convenient to briefly present some key features of inflationary theory before discussing the alternative scenario of the bounce model at hand. In inflationary models, the Universe begins with an initial singularity, just as in standard cosmology. At the initial singularity, the Hubble radius² r_H = 1/(a(t)H(t)) is infinite, as a(t_i) = 0. Consequently, we expect all primordial quantum fluctuations to be subhorizon, with wavelength λ ≪ r_H or, equivalently, comoving wave number at subhorizon scales, k ≫ a(t)H(t) = 1/r_H. Since the scale factor grows exponentially and the Hubble parameter is almost constant during inflation, the horizon shrinks exponentially, allowing some modes to exit the horizon and become classical (freeze-out). By the end of inflation, the horizon expands again, allowing the frozen modes to reenter the horizon and propagate as particles. The conservation of the comoving primordial fluctuations is an essential feature of inflationary models; as a matter of fact, it enables us to relate the observable quantities at the superhorizon (low-energy) scale to the subhorizon (high-energy) scale.

In the present bounce model, we discuss the alternative scenario. Using (6) and (28), the Hubble radius is given by (100).

[² The Hubble radius is usually referred to as the horizon.]

FIG. 9. Schematic diagram showing the evolution of the Hubble radius (100). Immediately after the bounce, it shrinks, allowing the relevant physical modes to exit the horizon; it then expands, allowing these modes to reenter the horizon at a later time.

At the bounce point, we expect the Hubble radius (horizon) to be infinite, as H(t_B) = 0 (i.e., t = −(3/2)α). This means that all modes are sub-Hubble, λ ≪ r_H. Immediately after the bounce, the horizon suddenly shrinks to a minimal value (see Fig. 9); during this stage, when the initial quantum fluctuations cross the horizon, they transform into classical fluctuations. It is therefore worth investigating whether the comoving curvature fluctuations are conserved at super-Hubble scales; this feature is important for examining the validity of the bounce scenario. The Universe around the bounce point is governed by quantum gravity, which is beyond our reach so far. On the other hand, the perturbations during the contraction period suffer from producing a high tensor-to-scalar ratio, which is disfavored by observations [31]. As we have shown in Sec. V C, the slow-roll conditions are valid during the precontraction period, so we expect the primordial fluctuations produced in this phase to play an essential role in the present model. The precontraction phase can be identified by t ≪ |α|, so that the correction term in (6) dominates over the FRW evolution. The scale factor and the corresponding Hubble parameter in this phase are then given approximately by (101). We also reconstruct the f(T) that generates the scale factor (101) as a function of the cosmic time t; it can be rewritten in terms of T through the exponential integral function, f₊(T) = c₂ Ei(3, ·). We are interested in the precontraction phase before the bounce, so we take only the f(T) branch corresponding to Ḣ > 0. The speed of sound then follows from (73); in the precontraction phase, t ≪ t_B, we have c_s² ≃ 0. Consequently, the evolution equation (74) simplifies and can be solved to give (106), which describes the perturbations produced in the precontraction phase. These can cross the bounce time, since the horizon is infinite at that moment; they then exit the horizon and evolve as classical perturbations. Later on, they reenter the horizon to produce an almost scale-invariant, Harrison-Zeldovich spectrum, which may explain its origin. We therefore study the evolution of the comoving curvature perturbations between horizon exit and reentry. Since the perturbations could grow (producing an instability) or even vanish (which could not explain the presence of our Universe with its structure), this point should be investigated for the present model. In (106), the exponential function in the last term evolves to a constant value as time increases, while the 1/t² contribution drives the last term to zero. Consequently, Eq. (106) reads

Φ(t) ≃ (1 + ω)C₂ (1 + α/t).

We also see that all the fluctuations due to the term α/t decay as time increases. In this case we have Φ(t) → (1 + ω)C₂, a constant. We conclude that, in this bounce scenario, the comoving curvature fluctuations of the precontraction phase are conserved after the modes exit the horizon. This is an important feature, ensuring that the quantum fluctuations transform into classical fluctuations holding all of the information characterizing the Universe at the sub-Hubble energy scale. Finally, we note that the slow-roll conditions are valid in the precontraction phase, so the calculations of Sec. V C apply to this early stage; therefore, the power spectrum produced at this stage matches the observations.
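The horizon evolution sketched in Fig. 9 can be reproduced with a toy symmetric bounce. The scale factor below is purely illustrative (it is not the paper's Eq. (6)); it simply has H(t_B) = 0 at the bounce, so that r_H = 1/(aH) diverges there, reaches a minimum shortly after, and then grows again.

```python
import numpy as np

a_B, t0 = 1.0, 1.0                       # assumed toy parameters

def a(t):
    """Toy bounce scale factor: a(t) = a_B * (1 + (t/t0)^2)^(1/3)."""
    return a_B * (1.0 + (t / t0)**2) ** (1.0 / 3.0)

def H(t):
    """H = a'/a for the toy scale factor above."""
    return (2.0 / 3.0) * (t / t0**2) / (1.0 + (t / t0)**2)

t = np.linspace(0.01, 50.0, 500)         # just after the bounce at t = 0
r_H = 1.0 / (a(t) * np.abs(H(t)))        # Hubble radius, Eq. (100)-style

i_min = np.argmin(r_H)
print(f"r_H diverges as t -> 0 (H(t_B) = 0), reaches a minimum "
      f"{r_H[i_min]:.3f} at t = {t[i_min]:.2f}, then grows again:")
for j in (0, i_min, len(t) - 1):
    print(f"  t={t[j]:6.2f}  r_H={r_H[j]:10.3f}")
```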
VII. SUMMARY

In this work, we have shown that the applications of GR in cosmology are quite limited, whereas in f(T) theories we have more flexibility: one can start from a designed scale factor while keeping the well-behaved linear equation of state p = ωρ for the matter content, without breaking the conservation principle. We have modified the usual FRW scale factor as in (6) by introducing a new parameter α with units of time. The scale factor shows good agreement with the inflation-to-deceleration transition. The new parameter has been shown to control the time of this transition, and it can take the values α = 1.61 × 10⁻³² s or −2.76 × 10⁻³³ s. The positive value gives a graceful-exit inflation model, while the negative value gives a bounce model. We have studied the negative-parameter model, i.e., α = −2.76 × 10⁻³³ s, extensively. The model shows interesting features: it performs a bouncing behavior from contraction to expansion at the bouncing time t_B = −3α/2 ∼ 4.14 × 10⁻³³ s with a minimal scale factor a_B ≠ 0, so that the trans-Planckian problems of inflationary models are smoothed away. On the other hand, it performs an early inflation phase with a graceful exit into an FRW decelerated phase. We have also used the useful (Ḣ − H) phase-space analysis to examine the capability of the model to realize the above-mentioned behavior. Unlike the usual behavior in standard cosmology, the model can cross the phantom divide line (de Sitter universe) safely in a finite time; it then interpolates smoothly between de Sitter and Minkowski spaces in an infinite time, i.e., the Universe is free from a future singularity.

We have constructed an f(T) theory corresponding to the modified scale factor and, consequently, determined the density and pressure of the matter as functions of time. In addition, we have evaluated the temperature as a function of time; the model predicts that the temperature evolves as Θ ∝ a^(−3ω). We thus expect a very low initial temperature Θ ∼ 0 K, as implied by the bouncing behavior, where a → ∞ at t ≪ t_B. We have argued that this result, however unfamiliar it may seem, indeed provides a natural environment for the slow-roll inflation condition (i.e., V(φ) ≫ φ̇²). On the other hand, the temperature evolves to its maximum at the GUT energy scale, Θ ∼ 10²⁷ K, by the end of inflation, providing a graceful exit at t_end ∼ 10⁻³² s into an FRW decelerated phase, as required to initiate the standard hot big bang thermal scenario.

We have also reexpressed the Friedmann equations in the Einstein frame to identify the torsion gravity as a degree of freedom. The torsion equation of state suggests that the gravitational sector may provide a good candidate for describing the bounce behavior at early times. The torsion equation of state begins at ω_T = −1, then enters the phantom regime ω_T ≪ −1. After that, it evolves to ω_T ≫ 1 before the bounce time, which is required during contraction for solving the anisotropy problem. After crossing the bounce point, ω_T is in the phantom-energy phase again, then goes back to cross ω_T = −1 smoothly, connecting to the observed expanding Universe. It then crosses ω_T = −1/3, ending the early accelerated expansion at t ∼ t_end ≈ 10⁻³² s, to enter a new phase of decelerated expansion. Finally, it approaches the radiation limit ω_T = 1/3 as t ≫ α, as required to match the hot big bang consistently.

We have considered the case in which the matter component is a canonical scalar field φ. For a particular choice of the background potential V_0, the small-V_0 regime, the equation-of-state ω_φ of the induced scalar field begins with the pure vacuum energy ω_φ = −1 of a de Sitter universe, then crosses ω_φ = −1/3 at t ∼ 10⁻³⁵ s to match the radiation limit. However, it crosses ω_φ = −1/3 once more at a late time, t ∼ 10¹⁷ s, entering a late accelerated expansion with a de Sitter fate, just as in ΛCDM cosmology. In this sense, the model provides a unified field representing the inflaton and quintessence in a single model. In the slow-roll regime, we find that the scalar field induced by the f(T) of this bounce model does not suffer from the large tensor-to-scalar ratio that bouncing models usually face. In addition, we have shown that the NEC is not violated by the matter sector, so the model is free from ghost instabilities. We have developed a technique to trace the ordinary matter components that are consistent with the observed scale-invariant power spectrum. Finally, we have extended the investigation of the model to the perturbation level, studying the scalar and tensor primordial fluctuations of the early Universe in the Newtonian gauge. We have shown that the slow-roll conditions are fulfilled in the precontraction phase and that the comoving curvature fluctuations are conserved at the super-Hubble energy scale, which allows the fluctuations to match the observable Universe.
2016-10-12T08:18:26.000Z
2016-04-26T00:00:00.000
{ "year": 2016, "sha1": "4e2f3642a33d9371246335464c848c0e16a7d304", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1604.07604", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4155fb4457b381272ce421bd81635231f200826c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257755645
pes2o/s2orc
v3-fos-license
Dynamic Beam Steering and Focusing Graphene Metasurface Mirror Based on Fermi Energy Control Beam steering technology is crucial for radio frequency and infrared telecommunication signal processing. Microelectromechanical systems (MEMS) are typically used for beam steering in infrared optics-based fields but have slow operational speeds. An alternative solution is to use tunable metasurfaces. Since graphene has gate-tunable optical properties, it is widely used in electrically tunable optical devices due to ultrathin physical thickness. We propose a tunable metasurface structure using graphene in a metal gap structure that can exhibit a fast-operating speed through bias control. The proposed structure can change beam steering and can focus immediately by controlling the Fermi energy distribution on the metasurface, thus overcoming the limitations of MEMS. The operation is numerically demonstrated through finite element method simulations. Introduction Beam steering technology is one of the most important topics in the field of radio frequency (RF) signals [1,2] or 5G mobile communication signal processing [3,4] using gigahertz optics. The beam steering in an optical system can be easily implemented by changing the relative phase of a wave, which can be easily obtained by changing the refractive index or controlling diffraction grating [5][6][7][8]. Infrared (IR) frequencies from 300 GHz to 400 THz (wavelengths from 1 mm to 700 nm) have been applied in many industrial, scientific, military, and commercial fields that include imaging [9], sensing [10], and biomedical applications [11]. Therefore, the need for beam steering technology has also increased in these infrared optics-based fields, and recently, ultra-thin and fast optical devices have been in demand. For this purpose, the microelectromechanical system (MEMS), in which micromirrors are swiveled by electrostatic forces, is typically used for steering infrared-frequency beams [12,13]. However, an immediate response was not obtained due to the slow operational speed of MEMS [14,15]. As an alternative to structural tuning, metasurfaces with tunable properties are widely studied [16,17]. Two-dimensional (2D) materials, graphene, or transition metal dichalcogenides (TMDCs), such as molybdenum disulfide (MoS 2 ) and tungsten selenide (WSe 2 ), have atomic thicknesses and various mechanical, chemical, and optical properties [18,19]. In the case of graphene, a gate-dependent optical transition [20] could be generated under an electric field by simply controlling a current [20] to control the light transmitted from a source. Because of this gate-tunable property with a thin structure, graphene is widely used in electrically tunable optical devices [16,21,22]. Surface plasmon polaritons (SPPs) are quasi-particles generated by the interaction of light with free electrons on a metal surface. SPPs can propagate along the subwavelength scale metal surface [23]. They are actively used in nanocavity [24] and metasurface [25] research owing to their strong interactions with light. In addition, the strong interaction characteristics of SPPs allow light to interact with 2D materials that show limited optical interactions with light inherently due to their atomic thickness. Metamaterials are artificial materials that can optically control electromagnetic waves. They have been actively researched because they have permittivity and permeability that do not exist in natural materials [26][27][28]. 
The metasurface, as a 2-dimensional application of metamaterials, has unique artificial optical properties at deep-subwavelength thickness. It has also been reported that metasurfaces with all-metal or all-dielectric structures can be fabricated [29][30][31][32]. Metasurfaces have been proposed for various applications, including antenna sensors [33], electromagnetic filtering [34], environmental sensing [35], and gain enhancement [36]. Tunable terahertz filters/antenna-sensors have been demonstrated using graphene-based metamaterials [33]. Electromagnetic filtering has been observed in an all-metal wideband frequency-selective-surface bandpass filter for different polarizations [34]. Environmental sensing has also been studied with smart metasurfaces, aided by advances in antennas and artificial-intelligence approaches [35]. Gain enhancement has been reported in many antennas using different types of metasurfaces [36]. Various studies have combined graphene with the surface plasmons of metals by placing metal structures on top of graphene [37][38][39]. Accordingly, we propose a tunable metasurface structure that can exhibit a fast operating speed through voltage bias control, using graphene in a metal gap structure. The proposed structure operates as an electrically tunable metasurface mirror for dynamic beam steering and controlled focusing via a designed spatial distribution of the Fermi energy on the graphene, which is demonstrated by finite element method (FEM) simulations. It can react at high speed [40,41] and immediately change the beam steering and focus by controlling the graphene bias. We anticipate that the proposed structure can overcome the limited operational speed of MEMS.

Structure Design

The metasurface has been demonstrated to enable beam steering by using the concepts of phase-shifting surfaces [42] and leaky waves [43]. The structure we propose is a tunable metasurface mirror that controls beam steering through tuning of graphene's Fermi level, based on a phase-shifting surface. We propose a tunable metasurface consisting of 1D arrayed metal (gold) strips and graphene in the thin gap between the strips, as shown in Figure 1. The gold strips, with a thickness of 30 nm and a width of 1200 nm, are periodically placed along the x-axis with 25 nm gaps between the strips. Underneath the gold strips, graphene is placed on a dielectric spacer with a thickness of 1000 nm, and the backside of the spacer is covered by gold. The backside gold substrate functions as a light reflector as well as a common electrode. Each top gold strip is biased so that the electric field can be applied to the graphene below the strip. For mechanical support, an additional substrate structure is required under the gold substrate. However, if the gold substrate is thicker than hundreds of nanometers, the electric fields cannot penetrate the gold substrate, and the additional substrate does not affect the optical properties. Therefore, the additional substrate is neglected in estimating the optical performance of the proposed structure in the simulation. The top Au strips act as an optical resonant scatterer, in which a metal-insulator-metal (MIM) plasmonic resonance gap mode appears at the gap between two Au strips [44]. The electric field enhanced in the gap overlaps with the graphene layer.
The gap mode provides a strong interaction channel in which the incident light can be more interactive with the graphene layer. The dielectric function of the graphene layer can be expressed by ε = 1 + iσ/(ωd), where σ, ω, and d are the optical conductivity, angular frequency of light, and thickness of graphene, respectively. The optical conductivity σ of the graphene layer is calculated according to the local random phase approximation (RPA) method, with an assumed carrier mobility µ of 10,000 cm²/(V·s) [37,45]. According to the local RPA method, σ can be expressed by the standard formula in which τ is the Drude relaxation rate, E_F is the Fermi energy level, T is the temperature, k_B is the Boltzmann constant, and ħ is the Dirac constant. Because the Drude relaxation rate, τ = ev_F²/(µE_F), is E_F-dependent, σ can be tuned by the Fermi energy E_F at fixed ω [46,47]. In other words, the dielectric function of a graphene layer can be controlled by the Fermi energy. Additionally, the Fermi level in graphene can be expressed as E_F = ħv_F(πn_2D)^(1/2), with n_2D the carrier density of graphene, which is linearly proportional to the electric bias on the graphene [48]. Hence, in practical devices, the electric bias can adjust the Fermi energy of graphene and, thus, its dielectric function [49,50]. In this structure, the bias voltage on the graphene layer is applied by a gold strip, with the back gold substrate as a common ground [16]. The simulations were performed using a 2D finite element method (FEM) tool (COMSOL Multiphysics). We constructed a metasurface mirror consisting of infinite unit-cell arrays (Figure 1b) with periodic boundary conditions. On the unit cell of Figure 1b, a 4λ-thick air domain was placed for mode-profile observation. For beam steering and focusing, in Section 3.2, we used 20-unit-cell arrays for steering and 51-unit-cell arrays for focusing. Additionally, we included a 3000-nm-thick PML layer around the finite cell arrays. A linearly x-polarized plane wave was assumed as the incident light, normal to the surface of the proposed structure. When simulating a photonic device with a monolayer or a few layers of graphene, the macroscopic optical properties cannot be applied directly in the simulation because of the atomic-level thickness of graphene. We therefore modeled graphene using a surface current density, treating the graphene as a 2D layer without any thickness. Since the thickness of graphene, 0.34 nm, is much smaller than the wavelength, this model produces exactly the same optical behavior. The current density can be controlled by the bias voltage [51,52].
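A minimal sketch of the gate-tunable response described above. The conductivity combines the familiar local-RPA pieces (a finite-temperature intraband Drude term plus a zero-temperature interband step); this is an assumption standing in for the exact expression used in the paper, and ε₀ is inserted to make ε = 1 + iσ/(ε₀ωd) dimensionally consistent in SI units.

```python
import numpy as np

e, hbar, kB = 1.602e-19, 1.055e-34, 1.381e-23    # SI constants
vF, d, eps0 = 1.0e6, 0.34e-9, 8.854e-12          # Fermi velocity, thickness, vacuum permittivity

def sigma_graphene(omega, EF_eV, mu_cm2=1.0e4, T=300.0):
    """Graphene sheet conductivity (siemens), local-RPA intraband + interband."""
    EF = EF_eV * e
    mu = mu_cm2 * 1e-4                           # cm^2/(V s) -> m^2/(V s)
    tau = mu * EF / (e * vF**2)                  # Drude scattering time
    intra = (2j * e**2 * kB * T / (np.pi * hbar**2 * (omega + 1j / tau))
             * np.log(2.0 * np.cosh(EF / (2.0 * kB * T))))
    inter = (e**2 / (4.0 * hbar)) * (omega * hbar > 2.0 * EF)   # T -> 0 step
    return intra + inter

def eps_graphene(omega, EF_eV):
    """Effective thin-film permittivity, eps = 1 + i*sigma/(eps0*omega*d)."""
    return 1.0 + 1j * sigma_graphene(omega, EF_eV) / (eps0 * omega * d)

lam = 8.5e-6                                     # operating wavelength, m
w = 2.0 * np.pi * 3.0e8 / lam
for EF in (0.30, 0.40, 0.50):                    # eV, the range used in the paper
    print(f"E_F={EF:.2f} eV: eps = {complex(eps_graphene(w, EF)):.1f}")
```

At mid-IR frequencies and these Fermi levels the interband step is switched off (ħω < 2E_F), and the large negative real part of ε reflects the metallic, strongly E_F-dependent response that the metasurface exploits.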
Results

First, we investigated the reflectance and phase shift of the reflected light in the proposed infinitely periodic metasurface as a function of the wavelength and of the Fermi level at infrared wavelengths. Next, we showed the operation of beam steering in the metasurface by applying different bias voltage sets on each strip, as suggested by the spatial phase change. In addition, the metasurface could focus the reflected light at an on-demand focal point.

Reflectance and Phase Shifting in a Unit Cell of the Proposed Metasurface

When an Ex-linearly polarized light was normally incident on the metasurface, a strong resonance could be observed at the gap between the gold strips (Figure 2a), called the MIM plasmonic gap mode [23]. The gap mode shows a broad dip near 16.8 µm in the reflectance spectrum (black curve) shown in Figure 2b, because the resonance increases the metallic absorption of the incident light. Although the atomic thickness of graphene limits its interaction with light, because of the MIM gap mode the graphene placed at the gap can change the resonant wavelength, linewidth, and reflectance of the mode. Moreover, the proposed metasurface structure, consisting of the gold strip, graphene film, and gold substrate, can be considered an effective ultrathin film with deep-subwavelength thickness. Meanwhile, a deep-subwavelength, ultrathin, highly lossy film can cause a loss-induced large phase shift [53]. In the proposed structure, graphene provides strong absorption at the target wavelength by combining with the metal gap structure; thus, a strong absorption-induced phase shift can be exhibited. When the Fermi level of graphene was 300 meV, the resonant dip (red curve) could be observed at 9.3 µm, and the lowest reflectance was 0.80. The reflectance dips blue-shifted as the Fermi energy (E_F) increased.
To confirm the properties of the reflected light, we investigated the phase change of the reflected light as a function of the Fermi energy for three wavelengths, 8 µm (red), 8.5 µm (black), and 9 µm (blue), corresponding to the dips of the MIM plasmonic mode with the biased graphene, as shown in Figure 2c. As E_F changed from 300 meV to 500 meV, the phase change of the incident light with a wavelength of 8.5 µm covered the full range from 0 to 360°. For the incident wavelengths of 8 µm and 9 µm, phase changes of 320 degrees were estimated as the Fermi energy was controlled. Since the phase of the 8.5 µm light could be controlled fully over 360 degrees by manipulating the Fermi energy, the required reflected phase change ϕ could be obtained by setting a specific Fermi energy on the graphene. Figure 2d shows that the electric-field antinode of the standing wave between the incident and reflected light moved away from the surface of the proposed metasurface as the Fermi energy increased from 300 meV to 450 meV, because the phase change of the reflected light increases with the Fermi energy (Figure 2c). According to the mode profiles, the phase of the reflected light changed continuously with the Fermi energy of graphene, which can be controlled by applying a proper external bias to the graphene.

Constructing a Metasurface Mirror with a Finite Unit Cell Array

By applying a certain Fermi energy to the graphene in the unit cell of the proposed metasurface, the phase change of the reflected light can be controlled over 360 degrees as required. Here, we fixed the target wavelength to 8.5 µm; the target wavelength that can cover the full 360-degree phase change can be adjusted by tuning the proposed metasurface. Accordingly, we constructed an electrically tunable metasurface mirror consisting of a finite number of unit-cell arrays, as in Figure 3a, which is functionally similar to an optical phased-array mirror. By applying different biases on each unit cell and, thereby, different Fermi energies, the reflection angle becomes electrically tunable. Figure 3b-d shows the relation of the shifted phase and Fermi level by position, as well as the propagating electric-field profiles of the steered beam after reflection, for steering angles of −60, 30, and 10 degrees, respectively. To control the reflected beam direction, the reflected phase of each unit cell should be calculated. For beam-steering modulation, all beams reflected from the cells should propagate at the same reflection angle, θ, measured from the incident light direction. In a continuous structure, the steered light phase ϕ(x) can thus be expressed by the following equation [54]:

ϕ(x) = (2π/λ) x sin θ. (2)

In this case, however, the unit cells act not as direct beam reflectors but as reflection phase shifters. Since the light reflected by a unit cell was assumed to propagate in the normal direction, we constructed beam steering through an array of shifted phases. Meanwhile, Figure 2c shows that the reflected-light phase shift corresponds to a specific graphene bias; from Figure 2c, ϕ_shift at 405 meV is ϕ_shift = 360°, which becomes the starting point of the whole beam-steering structure. From this point on, the following cells are assigned the specific Fermi levels for the steering angle θ_steer and shifted phase ϕ_shift given by the corresponding relation, Equation (3).
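The per-cell phase assignments can be generated as in the sketch below, from Eq. (2) for steering and from the parabolic-like profile used for focusing in the next subsection. The pitch and cell counts follow the stated geometry (1200 nm strips, 25 nm gaps, 20 cells for steering, 51 for focusing), while the linear phase-to-Fermi-level map is a hypothetical placeholder for the simulated curve of Figure 2c.

```python
import numpy as np

lam = 8.5e-6                                  # target wavelength, m
pitch = 1.225e-6                              # unit-cell pitch: 1200 nm strip + 25 nm gap

def steering_phases(theta_deg, n_cell=20):
    """Eq. (2): phi(x) = (2*pi/lam) * x * sin(theta), wrapped into [0, 360)."""
    x = np.arange(n_cell) * pitch
    phi = np.degrees(2.0 * np.pi / lam * x * np.sin(np.radians(theta_deg)))
    return np.mod(phi, 360.0)

def focusing_phases(f, n_cell=51):
    """Phase profile so all cells' reflections meet at focal length f
    (the parabolic-like lens profile discussed in the next subsection)."""
    x = (np.arange(n_cell) - n_cell // 2) * pitch
    phi = np.degrees(2.0 * np.pi / lam * (np.hypot(x, f) - f))
    return np.mod(phi, 360.0)

def phase_to_fermi(phi_deg):
    """Hypothetical linear stand-in for Fig. 2c: 0..360 deg over 300..500 meV."""
    return 300.0 + 200.0 * phi_deg / 360.0    # meV

for theta in (-60.0, 10.0, 30.0):
    phi = steering_phases(theta)
    print(f"theta={theta:+5.1f} deg: phi={np.round(phi[:4], 1)} deg, "
          f"E_F={np.round(phase_to_fermi(phi[:4]), 1)} meV")
print("focus at f=3*lam, center cells:",
      np.round(focusing_phases(3 * lam)[24:28], 1), "deg")
```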
By constructing a successive, tilted phase profile across the arrayed cells, beam steering could be implemented. Therefore, the reflected beam can be propagated as intended (see Figure 3b-d) by applying the bias corresponding to ϕ(x). In order to maintain the plane-like shape of the wavefront of the reflected light, the maximum steering range is expected to be ±60°, a range large enough for the device to be applied as a tunable concave mirror, as in Figure 4. Given the structural limitation of an optical phased array, a missing phase range, which causes interference between nearby unit cells, deteriorates the beam-steering ability by creating side lobes [55]. However, each unit cell in the proposed structure is nearly λ/7 the size of the target wavelength, much smaller than λ/2, which is small enough to operate as a continuous phase modulator based on Fraunhofer diffraction [56,57]. For beam-focusing modulation, all the beams reflected from each unit cell must propagate to the focal point (focal length: f). Therefore, the reflected light phase ϕ(x) can be expressed as follows [54]:

ϕ(x) = (2π/λ)(√(x² + f²) − f). (4)

To modulate beam focusing as expected, the shifted phase should likewise be arranged parabolically; hence, ϕ_shift can be expressed by the corresponding relation, Equation (5). Furthermore, by applying different Fermi energies according to Equation (5), beam focusing can be achieved, as shown in Figure 4, for the different focal lengths 3λ and λ.

Conclusions

In this paper, we propose a graphene-based electrically tunable metasurface mirror for dynamic beam steering and focusing. In this approach, the Fermi energy of the graphene in each unit cell of the metasurface was designed to realize beam steering and focusing. For the incident light, two gold strips, placed on a low-index dielectric with nanogaps of 25 nm, confined the electric field due to the MIM plasmonic gap resonance, resulting in an absorption dip in the reflectance spectrum at infrared wavelengths. The graphene between the gold strips experiences a strong field, such that a change in the Fermi energy induces a resonant wavelength shift. For an incident light with a wavelength of 8.5 µm, the phase of the reflected light could be tuned from 0 to 360° by controlling the Fermi energy from 300 meV to 500 meV. Further, the reflectance was larger than 80% at the operating wavelength.
Based on the phase-change property of the reflected light on the proposed metasurface unit cell, the reflection of a normally incident light can be steered as required when the spatial distribution of the Fermi energy over the whole metasurface is designed to induce the phase shift of the angled reflected light. Beam steering of −60°, +10°, and +30° was demonstrated using this proposed approach. In addition, the reflected light could be focused at one point, and the focal length could be tuned from one wavelength, λ, to three wavelengths, 3λ. The fabrication of metal-strip/dielectric/metal structures with graphene has been reported by various researchers. According to Guo, Xuguang et al. (2021) [39], graphene was attached under a metal grating using polymethyl methacrylate (PMMA), and benzocyclobutene (BCB) was used as a spacer. Although there is a difference in scale, the fabrication process for our structure is expected to be sufficiently feasible. In addition, Gahoi, Amit et al. (2016) [38] showed that it is possible to place graphene on a silica spacer and deposit metal on top of it by using PMMA as a transfer medium, with the same structure as a back gate on a silicon substrate. Additionally, Naresh K. Emani et al. (2012) [37] showed that it is possible to realize metamaterials by placing graphene on a dielectric spacer and placing a metal scatterer of hundreds of micrometers on top of it. Therefore, the proposed structure is also fully realizable by growing a dielectric spacer on a metal mirror, transferring graphene through PMMA, and then depositing the metal strips. The 25 nm or smaller gap can be fabricated by various methods, such as focused ion beam milling (FIB) [58,59], e-beam lithography (EBL) [60,61], and atomic layer deposition (ALD) [62,63],[64,65]. In the proposed structure, the overall sizes of the metasurface for beam steering and for focusing were 24 µm and 64 µm, respectively. Hence, we expect that our proposed structure can be fabricated. Although graphene is widely used for its electrical tunability, thermal-expansion damage is likely to occur during the deposition process on the substrate; therefore, devices must be manufactured with the difference in thermal expansion taken into account. Beam steering of the metasurface can be measured using a free-space Michelson interferometer [66]. Table 1 compares our structure with other tunable metasurfaces. The proposed structure was numerically confirmed to be applicable to beam steering of ±60 degrees through electrical control of graphene's Fermi level. In conclusion, we performed relatively straightforward numerical simulations and demonstrated that dynamic beam steering and focusing can be implemented by applying different spatial distributions of the Fermi energy of graphene in the proposed metasurface. The proposed electrically tunable metasurface mirror can be applied to mid-IR optics and photonics for astronomy, IR imaging, and chemical sensing, owing to its fast response and high reflectance [67,68].
2023-03-26T15:20:25.527Z
2023-03-23T00:00:00.000
{ "year": 2023, "sha1": "dc54fa353c695990df282c5e21cf6e1f58e76e9e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/14/4/715/pdf?version=1679571512", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3a58715c8e8d169a48ec5085296d1d6919c5d2f1", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
23876539
pes2o/s2orc
v3-fos-license
Economic burden of pneumococcal infections in children under 5 years of age ABSTRACT The present study aimed to determine the cost of childhood pneumococcal infections under 5 years of age and to provide further data for future health economy studies. Electronic medical records of children diagnosed with meningitis caused by S. pneumoniae and all-cause pneumonia, and acute otitis media (AOM) between January 2013-April 2014 were retrospectively evaluated. Direct costs for the treatments of hospitalized patients (pneumonia and pneumococcal meningitis) including costs of healthcare services consisted of costs of hospital bed, examination, laboratory analyses, scanning methods, consultation, vascular access procedures, and infusion and intravenous treatments. Direct costs for patients (AOM) treated in outpatient setting included constant price paid for the examination and cost of prescribed antibiotics. Indirect costs included cost of work loss of parents and their transportation expenses. Data of 130 children with pneumococcal meningitis (n = 10), pneumonia (n = 53), and AOM (n = 67) were analyzed. The total median cost was €4,060.38 (direct cost: €3,346.38 and indirect cost: €829.18) for meningitis, €835.91 (direct cost: €480.66 and indirect cost: €330.09) for pneumonia, and €117.32 (direct cost: €17.59 and indirect cost: €99.73) for AOM. The medication cost (p = 0.047), indirect cost (p = 0.032), and total cost (p = 0.011) were significantly higher in pneumonia patients aged ≥36 months than those aged <36 months; however, direct cost of AOM were significantly higher in the patients aged <36 months (p = 0.049). Results of the present study revealed that the treatment cost was significantly enhanced for hospitalization and for advanced disease. Thus, preventive actions, mainly vaccination, should be conducted regularly. Introduction Pneumonia, meningitis, and acute otitis media (AOM) are substantial pediatric public health problems worldwide. 1 In this group of diseases, Streptococcus pneumoniae (pneumococcus) is the major global etiological cause of pneumonia and accounts for 36% of overall childhood pneumonia. 2 As invasive Haemophilus influenzae type b infections decrease, pneumococcus has become the leading cause of bacterial meningitis among children aged 5 years or below and is also isolated in 28%-55% of middle ear aspirates from AOM cases. 3 According to the World Health Organization (WHO) estimates, approximately 1.6 million people die of pneumococcal diseases each year 4 and 0.7-1 million of these deaths occur in children under the age of 5 years. 2 Although case fatality rate due to pneumococcal infections is substantially high in developing countries, it is also considerable in developed countries. While case fatality rate of pneumococcal meningitis was reported as 48% in hospitalized children in Gambia, 5 this rate has been reported as high as 20% in developed countries. 6 According to the results of a study evaluating the global disease burden of pneumonia in children under the age of 5 years, pneumococcal diseases accounted for approximately 11% of overall deaths. 1 Further analyses of the same study revealed that the mortality rate was 119 (87-130) per 100,000 and the case-fatality rate was 5% (4%-9%) for pneumococcal pneumonia; however, for pneumococcal meningitis, the case-fatality rate was as high as 59% (27%-80%) despite the mortality rate of 10 (4-13) per 100,000. 
Within the European region (including Turkey) identified by WHO, the mortality rate was 25 (18-28) per 100,000 and the case-fatality rate was 5% (4%-9%) for pneumococcal pneumonia, whereas these rates were 3 (2-4) per 100,000 and 38% (32%-58%), respectively, for pneumococcal meningitis. Differences in the quality and accessibility of healthcare services may create differences between countries. According to the global epidemiological data, more than 90% of pneumonia-related deaths in children under the age of 5 years occur in 40 countries, with the highest mortality rates in India, Pakistan, Bangladesh, and Afghanistan. 7 When the intercontinental distribution of pneumonia-related death is evaluated in this age group, sub-Saharan Africa and South Asia show a similar distribution. 8 In Turkey, although detailed data about respiratory tract infections are limited, the Turkey Burden of Disease Study conducted between 2002 and 2004 by the Ministry of Health reported that the ratio of disability-adjusted life years (DALY) caused by respiratory tract infections was 13.4% in the 0-4 year age group and 6.5% in the 5-14 year age group. 9 Additionally, the estimated annual number of cases was 250 for meningitis, 250,000 for pneumonia, and 2.5 million for AOM in Turkey, 10 although the contribution of S. pneumoniae to overall pediatric respiratory tract infections is not fully known due to the lack of surveillance data. The present study aimed to determine the cost of childhood pneumococcal infections, which are major public health problems in Turkey as well as globally, under the age of 5 years, and to supply a source for further research on this issue.

Results

The study data comprised 10 hospitalized patients diagnosed with pneumococcal meningitis, 53 hospitalized patients diagnosed with pneumonia, 67 patients diagnosed with AOM treated in an outpatient setting, 2 patients who received ventilation tube insertion, and 1 patient who developed hearing loss due to AOM. Of the patients diagnosed with pneumococcal meningitis, 60% were boys and the median (Q1-Q3) age was 28.87 months. For the two cases who received ventilation tube insertion due to AOM, costs were calculated separately. Accordingly, the median (Q1-Q3) values for direct and indirect costs were €351.33 (€197.24-€505.41) and €441.31 (€406.89-€475.73), respectively, and the median (Q1-Q3) value of the total cost was €792.63 (€672.97-€912.30). For the one patient who developed hearing loss as a complication of AOM, the direct and indirect costs were €32.41 and €310,411.89, respectively, and the total cost was €310,444.30. The resulting national economic burden is approximately 1 million Euros (direct cost: approximately 0.8 million Euros) for pneumococcal meningitis, approximately 209 million Euros (direct cost: approximately 120 million Euros) for pneumonia, and approximately 293 million Euros for AOM. Comparison between age groups revealed that medication cost (p = 0.047), indirect cost (p = 0.032), and total cost (p = 0.011) were significantly higher in pneumonia patients aged ≥36 months than in those aged <36 months (Table 2). The direct cost of AOM was significantly higher in the patients aged <36 months than in those aged ≥36 months (p = 0.049). On the other hand, comparison of costs between genders revealed no difference between the groups (Table 3).

Discussion

Childhood respiratory tract infections are associated with high morbidity and mortality.
According to the 2014 Health Statistics released by the Turkish Ministry of Health, 11 among major health problems occurring during the last 6 months of 2014, the rate of upper and lower (pneumonia, etc.) respiratory tract infections in the 0-6 year age group was 41.9% and 10.1%, respectively. However, there is no national data with regard to the economic burden associated with these diseases, which are seen in more than half of children in this age group. According to the 2004 National Burden of Disease and Cost Effectiveness Project, 12 pneumococcal meningitis ranks fifth among the first 20 diseases causing death by 2.7% in 0-14 year age group in Turkey. The same report also stated that pneumonia is the second leading cause of mortality by 14% among lower respiratory tract infections. In addition to direct and indirect costs arising from the treatments of these diseases, years of life lost due to mortality also lead to a substantial burden on health-related economic outcomes. Although treatment of these diseases prevents an important portion of resulting mortality, the ratio of DALY caused by these diseases to overall DALY is 12.6 for 0-14 year age group. 12 In the light of these data, it is obvious that pneumococcal diseases lead to a substantial burden on health and economic systems of Turkey in terms of economic cost and morbidity as well as mortalityrelated costs. In the present study, among pneumococcal infections, the highest total cost was estimated for the treatment of pneumococcal meningitis by €4,060.38 per patient followed by pneumonia by €835.91 per patient and AOM by €117.32 per patient. There is a gap of economic burden studies in this area. In a study which evaluated the economic burden of childhood pneumococcal diseases in the Gambia, the median (interquartile range) total societal cost of inpatient pneumonia was US$87.0 (US$63.0-US$129.7) per patient. 13 In Korea, a nationally representative cross-sectional study evaluated the economic burden of otitis media in 2012 reported the total cost of otitis media in 0-9 year age group as 257.87 million Dollars in the outpatient setting. 14 Since there are differences in health policies as well as in national economies of the countries, these results are not comparable. In Turkey, national implementation of the 7-valent PCV infant vaccination program followed by the implementation of 13-valent PCV in 2011 is associated with substantial improvements in the prevention of pneumococcal infections. The introduction of PCVs has been associated with a significant reduction both in invasive pneumococcal disease (IPD) and mucosal pneumococcal diseases in many countries. Following the introduction of PCV13 in the USA, IPD incidence decreased by 91% in children below 2 years of age due to the additional 6 serotypes. 15 In the UK, IPD due to PCV13-only serotypes decreased by 89% after the PCV13 National Immunization Program in children below 2 years of age as compared with the pre-PCV13 periods. 16 In Southern Israel, sequential implementation of PCV7 and PCV13 was associated with decreases in otitis media caused by PCV7 serotypes plus serotype 6A and then by 5 additional PCV13 serotypes (5VT: 1, 3, 5, 7F, 19A) by 96% and 85% in the respective periods. 17 The present study has also some limitations. Firstly, being a single center study is one of the limitations. Secondly, the pneumonia cases included in this study were not proven to be caused by S. 
pneumoniae; thus, we aimed to include lobar pneumonia cases, as they best represent pneumococcal pneumonia. Third, only hospitalized patients with pneumonia were included; had non-hospitalized pneumonia patients also been included, the study could have contributed further to assessing both the clinical and the economic impact of vaccination. Despite these limitations, this study provides useful insights into the economic burden of pneumococcal infections in children (0-5 years old), considering that there are very limited data in this area, especially in Turkey. Therefore, further cost studies are needed, particularly in children, to set forth the economic burden of pneumococcal infections.

Conclusions

The present study brings new information and perspectives to the limited local data on the economic burden of childhood pneumococcal infections in Turkey. This study revealed that the treatment cost was significantly higher in hospitalized patients. The cumulative burden of pneumococcal disease on health care systems elucidates the necessity of effective prevention strategies.

Materials and methods

Electronic data of children under 5 years of age who were treated for meningitis due to S. pneumoniae, pneumonia, and AOM in Hacettepe University Hospital from January 2013 through April 2014 were retrospectively evaluated. Children who were diagnosed with pneumonia after 48 hours of hospitalization and who had been hospitalized for 2 days or more within the last 90 days were excluded. The study was approved by the Non-Interventional Clinical Research Ethics Board of Hacettepe University (Approval number: GO 14/290-22). Direct costs for hospitalized patients with meningitis due to S. pneumoniae and with pneumonia included the costs of healthcare services, medications, and materials. Costs of healthcare services consisted of costs of hospital bed, examination, laboratory analyses, scanning methods, consultation, vascular access procedures, and infusion and intravenous treatments. Data regarding costs of healthcare services were obtained from Hacettepe University Hospital, which uses the unit prices indicated in the National Health Practices Statement (HPS) Appendix-2B (UPDATED Statement of Changes 2013 HPS, dated August 30, 2014). 18 Medication costs were determined based on the prices stated on the hospital invoice. Costs of materials consisted of consumables used over the course of the hospital stay. Since the etiology of pneumonia cannot be accurately estimated, as this is technically difficult, 19 the costs for pneumococcal pneumonia were calculated using overall pneumonia cases; among these, lobar pneumonia cases were selected for cost calculation as they best represented pneumococcal pneumonia. Since the etiology of the AOM cases was likewise not known, direct costs for overall AOM treated in the outpatient setting were calculated according to package pricing (the price paid to the Pediatric Infection and Ear, Nose and Throat outpatient clinics by the Social Security Institution), which included costs of laboratory analyses, drugs used during examination, and the examination fee. The costs for patients requiring ventilation tube insertion due to AOM were calculated separately. Likewise, the costs for patients who developed hearing loss as a complication of AOM were also calculated separately. Indirect costs for each disease were calculated using the same method.
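As a worked illustration of this indirect-cost method, the sketch below implements the work-loss and transport assumptions using the figures detailed in the following paragraph (€774.14 gross monthly earnings, employment-adjusted rates of 90% for fathers and 41% for mothers, €4.59 per round trip); the hospital-stay lengths are hypothetical.

```python
GROSS_MONTHLY = 774.14                 # EUR, TurkStat Labor Cost Survey figure
FATHER_DAILY = GROSS_MONTHLY * 0.90 / 30   # ~23.23 EUR per day
MOTHER_DAILY = GROSS_MONTHLY * 0.41 / 30   # ~10.58 EUR per day
TRIP_COST = 4.59                       # EUR per round trip to hospital

def indirect_cost(stay_days, control_visits=2):
    """Work-loss plus transport for one hospitalized child, following the
    stated assumptions: mother off work for the whole stay, father one day
    per week, and both parents losing a day for each control visit."""
    mother_loss = MOTHER_DAILY * (stay_days + control_visits)
    father_loss = FATHER_DAILY * (stay_days / 7 + control_visits)
    # Trips: father daily during the stay, plus 5 more round trips
    # (mother's admission/discharge travel and 2 per control visit for both).
    trips = stay_days + 1 + 2 * control_visits
    return mother_loss + father_loss + TRIP_COST * trips

for days in (5, 10, 21):               # hypothetical lengths of stay
    print(f"{days:2d}-day stay -> indirect cost ~ {indirect_cost(days):7.2f} EUR")
```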
The average monthly gross earnings (€774.14), obtained from the Labor Cost Survey 2012 conducted by the Turkish Statistical Institute (TurkStat), were used as the basis of the calculation. 20 According to the data from TurkStat, the labor force participation rate in February 2014 was 70.0% among males and 28.7% among females. 21 The rate of employment was adjusted to 90% in fathers and 41% in mothers. 22 Daily earnings were determined to be €23.23 [(€774.14 × 0.90)/30 days] for fathers and €10.58 [(€774.14 × 0.41)/30 days] for mothers. According to assumptions based on expert opinion, mothers were assumed to be the primary caretakers and were therefore considered to accrue work loss over the course of a child's hospital stay. The fathers were assumed to have one day of work loss per week over the course of a child's hospital stay. The parents were also assumed to come to the control visits together, and thus one day of work loss was assumed for each control visit. Workforce losses of the mothers and fathers were calculated by multiplying the aforementioned earnings by the number of days of work loss. Expenses for round trips to the hospital were calculated by multiplying the number of round trips by €4.59. 23 It was assumed that the fathers made a round trip to the hospital every day over the course of their child's hospital stay, and that 5 more round trips arose from mothers making 1 trip to the hospital on the first day of hospitalization and 1 trip from the hospital on the day of discharge, and from both mothers and fathers making 4 round trips to the hospital for the two control visits after discharge. All costs were expressed in Euros according to the 2014 exchange rate of the Turkish Lira (€1.00 = 2.9 Turkish Liras, without inflation adjustment). A probabilistic sensitivity analysis was performed to assess the robustness of the indirect cost estimates, including costs for total round trips and workforce losses of the mothers and fathers, for each disease; the sensitivity analysis range was ±25% of the base value.

Statistical analysis

Data were analyzed using the Predictive Analytics Software (PASW) version 18.0 (SPSS Inc., Chicago, IL, USA). Descriptive statistics were expressed as number and percentage for categorical variables and as mean, standard deviation, and median (interquartile range) for numerical variables. In comparing two independent groups, the Mann-Whitney U test was used for non-normally distributed numerical variables. A p value <0.05 was considered statistically significant. The human capital approach was used in the productivity loss calculation.

Disclosure of potential conflicts of interest

Mehmet Ceyhan, MD, Prof., received non-financial support from Pfizer, Turkey. Yasemin Ozsurekci, MD, received non-financial support from Pfizer, Turkey. Kubra Aykac, MD, received non-financial support from Pfizer, Turkey. Basak Hacibedel, MSc, is an employee of Pfizer, Turkey. Egemen Ozbilgili, MD, is an employee of Pfizer, Turkey.

Funding

This work was supported by Pfizer Pharmaceuticals.
2018-04-03T01:28:42.193Z
2017-11-07T00:00:00.000
{ "year": 2017, "sha1": "aef9406b4054bc208dbb8735e5803fd25caa3eba", "oa_license": "CCBYNCND", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21645515.2017.1371378?needAccess=true", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "aef9406b4054bc208dbb8735e5803fd25caa3eba", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9220235
pes2o/s2orc
v3-fos-license
Sensing of Immature Particles Produced by Dengue Virus Infected Cells Induces an Antiviral Response by Plasmacytoid Dendritic Cells Dengue virus (DENV) is the leading cause of mosquito-borne viral illness and death in humans. Like many viruses, DENV has evolved potent mechanisms that abolish the antiviral response within infected cells. Nevertheless, several in vivo studies have demonstrated a key role of the innate immune response in controlling DENV infection and disease progression. Here, we report that sensing of DENV infected cells by plasmacytoid dendritic cells (pDCs) triggers a robust TLR7-dependent production of IFNα, concomitant with additional antiviral responses, including inflammatory cytokine secretion and pDC maturation. We demonstrate that unlike the efficient cell-free transmission of viral infectivity, pDC activation depends on cell-to-cell contact, a feature observed for various cell types and primary cells infected by DENV, as well as West Nile virus, another member of the Flavivirus genus. We show that the sensing of DENV infected cells by pDCs requires viral envelope protein-dependent secretion and transmission of viral RNA. Consistent with the cell-to-cell sensing-dependent pDC activation, we found that DENV structural components are clustered at the interface between pDCs and infected cells. The actin cytoskeleton is pivotal for both this clustering at the contacts and pDC activation, suggesting that this structural network likely contributes to the transmission of viral components to the pDCs. Due to an evolutionarily conserved suboptimal cleavage of the precursor membrane protein (prM), DENV infected cells release uncleaved prM-containing immature particles, which are deficient for membrane fusion function. We demonstrate that cells releasing immature particles trigger the pDC IFN response more potently than cells producing fusion-competent mature virus. Altogether, our results imply that immature particles, as a carrier to the endolysosome-localized TLR7 sensor, may contribute to regulating the progression of dengue disease by eliciting a strong innate response.

Introduction

The innate immune system acts as the first line of defense in the sensing of viral infection. This involves rapid recognition of pathogen-associated molecular patterns (PAMPs), including viral nucleic acids, by pattern recognition receptors (PRRs). This recognition results in an antiviral response characterized by the production of type I interferons (IFNs) and expression of IFN-stimulated genes (ISGs). This response suppresses viral spread by blocking the viral life cycle at multiple levels and also mediates immunomodulatory effects in surrounding tissues that promote the onset of the adaptive immune response [1]. PRRs can be cytoplasmic, e.g., retinoic acid-inducible gene-I (RIG-I)-like receptors (RLRs) and NOD-like receptors (NLRs), or endosomal, e.g., Toll-like receptors (TLRs) [1]. Thus, depending on their intracellular localization, virus-induced innate immune signaling typically occurs within cells that are either productively infected or that have internalized viral particles [1,2]. Recent studies have illustrated the existence of alternative host sensing strategies by bystander plasmacytoid dendritic cells (pDCs), which recognize infected cells [3,4,5,6,7]. pDCs are immune cells known to function as sentinels of viral infection and are a major type I IFN-producing cell type in vivo [8,9].
Using hepatitis C virus (HCV) as a model, we recently demonstrated that HCV infected cells can selectively package immunostimulatory viral RNA within exosomes that deliver their RNA cargo to pDCs, which, in turn, produce IFNα [3]. Exosomes also permit transfer to pDCs of distinct immunostimulatory viral RNAs, such as those of the negative-strand lymphocytic choriomeningitis virus (LCMV) [4]. This sensing pathway is thought to assure recognition of infected cells and hence protects the host against viruses that defeat the pathogen-sensing machinery within the cells they infect. Virtually all viruses have evolved strategies that preclude antiviral signaling in the cells they infect [10]. For example, dengue virus (DENV) has evolved several evasion strategies that prevent IFN and ISG expression within infected cells [11]. Notably, the DENV NS2B-3 protease complex inhibits type I IFN production in DENV infected cells by cleaving and degrading an adapter of cytoplasmic sensor-mediated signaling (STING, also called MITA) and by preventing phosphorylation and nuclear translocation of the downstream transcription factor, IFN regulatory factor 3 (IRF3) [12,13,14,15]. Despite these potent inhibitory mechanisms, expression of antiviral and inflammatory molecules is readily detected in DENV infected humans [16,17]. Their levels play a pivotal role in DENV infection clearance and pathogenicity [16,18,19], thus highlighting the importance of elucidating the host sensing mechanisms leading to the IFN response during DENV infection.

Here, we showed that pDCs are robust IFNα producer cells in response to DENV infected cells. In addition, we demonstrated that cell-to-cell contact- and TLR7-dependent pDC responsiveness leads to an antiviral state, inflammatory cytokine production as well as expression of co-stimulatory molecules by pDCs. Newly formed DENV particles, like those of many viruses, undergo maturation by cleavage of the viral envelope protein, premembrane (prM), in the secretory pathway, which renders the virus infectious [20]. Yet, the prM cleavage site is suboptimal, leading to the secretion of about 30-40% immature, prM-bearing particles [21,22,23,24,25,26]. This evolutionarily conserved suboptimal site may be critical for the export of infectious viral particles and/or may also positively contribute to viral infection by usurping the humoral immune response, because anti-prM antibodies facilitate efficient binding and cell entry of prM-containing immature particles into Fc-receptor-expressing cells, a process called antibody-dependent enhancement (ADE) [21,22,23,27,28]. Here, we report a previously unsuspected function of immature particles in innate immunity. Although the immature particles are not infectious, they are fully competent to trigger a robust type I IFN response upon contacting non-permissive pDCs. Our results highlight the trade-off between efficient secretion of infectious viral particles and the production of a large amount of IFN-inducing immature particles.

IFNα is robustly produced by pDCs in contact with DENV infected cells

To investigate the mechanisms regulating the IFN response against DENV infection, primary human peripheral blood mononuclear cells (PBMCs) from healthy donors were exposed to supernatants containing DENV virions or to DENV infected cells. We found that PBMCs specifically responded to co-cultivation with DENV infected cells, but not to uninfected Huh7.5.1 cells, by a robust secretion of IFNα (Figure 1A).
In sharp contrast, supernatants from the DENV infected cells failed to trigger IFNα secretion by PBMCs (Figure 1A). Plasmacytoid dendritic cells (pDCs), which represent a rare PBMC population, i.e., 0.41% of PBMCs (Figure 1B, upper panel), are known to produce IFNα [9]. Antibody-mediated pDC depletion from PBMCs (Figure 1B, middle panel) abolished IFNα secretion in response to co-culture with DENV infected cells (Figure 1A). Similar results were also obtained using DENV infected BHK-21 cells (Figures S1A and S1B). To rule out potential non-specific effects of the depletion procedure on innate cell responsiveness, we verified that IL-6 production triggered by lipopolysaccharide (LPS) exposure was maintained after pDC depletion (Figures 1C and S1C). Consistent with the depletion results, the isolated pDC population (Figure 1B, lower panel) potently produced IFNα in response to co-culture with DENV infected cells, but not in the presence of their supernatants (Figure 1A). A very limited number of pDCs (i.e., 2,000 pDCs) was sufficient to produce a robust secretion of IFNα (Figure 1A). Similar levels of IFNα production were detected after co-culture of infected cells with isolated pDCs as compared to total PBMCs containing a similar number of pDCs (Figure 1D), further suggesting that pDCs are the main IFNα producer cells among PBMCs. We showed that the cells productively infected with DENV did not produce IFNα themselves (Figure S2A). The pDC IFNα response increased as the duration of infection, and thus the replication level, prior to co-culture increased (Figure S2A). Remarkably, similar levels of IFNα secretion were reproducibly obtained with pDCs isolated from the blood of a cohort of 20 healthy donors (Figure 1E). Together these results suggest that pDCs represent the main cell type in PBMC populations that produces IFN in response to co-cultivation with DENV infected cells and that this response was not induced by the addition of cell-free supernatants containing virus.

Figure 1. pDCs robustly produce IFNα in response to DENV infected cells. (A) Quantification of IFNα in the supernatants of PBMCs, pDC-depleted PBMCs and isolated pDCs (Responders) co-cultured with DENV infected Huh7.5.1 cells (DENV cells) or treated with 100 µl of supernatants from the latter (DENV SN), as indicated (Inducers). Viral titers of DENV SN ~2.5x10^6 foci forming units (ffu)/ml. pDC depletion/enrichment was performed using an anti-BDCA-4 antibody. Cell #; number of co-cultured responder cells. Cont cells; uninfected Huh-7.5.1 cells. Arrows indicate results below the detection threshold of the IFNα ELISA (i.e., 12.5 pg/ml). Results are representative of 4 independent experiments. Error bars represent the means ± SD. (B) Representative FACS analysis of pDC depletion and isolation from PBMCs using the pDC selective markers, CD123 and BDCA-2. (C) Quantification of IL-6 in the supernatants of PBMCs and pDC-depleted PBMCs triggered by LPS (10 µg/mL for 20 hours). Results are expressed as percentages relative to LPS-treated PBMCs. 3 independent experiments in triplicate (error bars, means ± SD), paired Student's t test, ***p<0.0001, NS p>0.1, Δ indicates a separated group by ANOVA. (D) Total PBMC populations containing a number of pDCs equivalent to the purified pDCs, as determined by FACS analysis as described in (B), were co-cultured with DENV infected cells. IFNα production was thus compared for equal numbers of pDCs. PBMCs and pDC-depleted PBMCs were compared using equal cell numbers. The results are expressed as IFNα levels relative to the co-culture with PBMCs, set at 100; 3 independent experiments in triplicate, paired Student's t test, ***p<0.0001, NS p>0.05, Δ indicates a separated group by ANOVA (error bars, means ± SD). (E) Quantification of IFNα secretion by pDCs isolated from the blood of a healthy donor cohort (n = 20) co-cultured with infected cells or treated with their supernatant. IFNα levels in the supernatants of pDCs co-cultured with uninfected (cont) cells or with DENV SN were all below the detection threshold (i.e., 12.5 pg/ml). doi:10.1371/journal.ppat.1004434.g001

Author Summary

Viral recognition by the host often triggers an antiviral state, which suppresses viral spread and imparts adaptive immunity. Like many viruses, dengue virus (DENV) defeats the host-sensing pathway within infected cells. However, in vivo studies have demonstrated a key role of innate immunity in controlling DENV infection. Here we report that sensing of DENV-infected cells by non-permissive innate immune cells, the plasmacytoid dendritic cells (pDCs), triggers a cell-contact- and TLR7-dependent activation of a strong antiviral IFN response. This cell-to-cell sensing involves transmission of viral elements that are clustered at the interface between pDCs and infected cells and is regulated by the actin network. Importantly, we revealed that uncleaved prM surface protein-containing immature particles play a key function in stimulating the innate immune response. These non-infectious immature particles are released by infected cells as a consequence of a suboptimal cleavage site, which is an evolutionarily conserved viral feature that likely favors the export of infectious virus by prevention of premature membrane fusion in the secretory pathway. Therefore our results highlight a conceptually novel trade-off between efficient infectious virus release and the production of IFN-inducing particles. This concept may have broad importance for the many viruses that, like DENV, can disable the pathogen-sensing machinery within infected cells and can release uncleaved glycoprotein-containing non-infectious particles.

To exclude the possibility that pDCs respond transiently to supernatants containing DENV, we quantified IFNα secretion in time course experiments. IFNα secretion was already detectable as early as 4 hours after co-cultivation of pDCs and DENV infected cells (Figure 2A). IFNα levels concurrently increased over the time course of co-culture of DENV infected cells with either pDCs or PBMCs, and reached levels around 100 ng/mL after 16 hours of co-culture (Figure 2A). In contrast, cell-free supernatants containing DENV did not trigger detectable IFNα production by pDCs or by PBMCs at any of the time points analyzed (Figure 2A). IFNα producer cells were markedly enriched in pDCs, characterized as the CD123-positive population, as compared to the CD123-negative population (Figure 2B). For example, 12 hours after co-culture of DENV infected cells with PBMCs, ~0.05% and ~25-30% of CD123-negative and -positive cells, respectively, were IFNα positive (Figure 2B). Consistently, the frequencies of IFNα producer cells (i.e., about 30%) among pDCs (i.e., CD123-positive populations) were comparable in co-cultures of DENV infected cells with PBMCs vs. isolated pDCs (Figure 2B). Together these results suggested that IFNα is robustly produced only by pDCs that are co-cultured with DENV infected cells.

Sensing of cells infected by different members of the Flavivirus genus by pDCs is not cell-type restricted

Next, we showed that co-cultivation of DENV infected primary cells, i.e., monocyte-derived macrophages (mo-M) and monocyte-derived dendritic cells (mo-DC), with pDCs (isolated from the same donor) potently triggered pDC IFNα secretion (Figures 3A and 3B). In stark contrast, the corresponding cell-free supernatants containing virus and the parental uninfected cells did not, or only very weakly, induce pDC IFNα production (Figures 3A and 3B). Consistent with the previously reported inhibition of type I IFN production by the DENV NS2B-3 protease in infected cells [12,13,14,15], DENV infected primary cells did not produce detectable levels of IFNα (Figures 3A and 3B). Additionally, we determined whether the production of IFNα by pDCs could be reproduced in response to co-culture with various cell types infected by DENV. Robust secretion of IFNα was triggered when pDCs were co-cultivated with DENV infected cell lines of different origins (i.e., human Huh7.5.1, Hela and 293T cells or non-human BHK-21 and Vero cells), but not by the corresponding supernatants containing virus or by the uninfected cells (Figure 3C). DENV infected Vero cells were weaker IFNα inducers (Figure 3C), consistent with the lower levels of intracellular DENV RNA (Figure 3D) and infectious viral particles (Figure 3E) produced by these cells, suggesting that pDC IFNα induction is proportional, to some degree, to the level of viral replication. Remarkably, 293T cells infected by another member of the Flavivirus genus, West Nile virus (WNV), but not the corresponding cell-free supernatants containing virus, also triggered robust IFNα production when co-cultured with pDCs (Figure 3C). Similar to the results obtained using co-cultures with DENV infected cells, the pDC IFNα response increased as the number of WNV infected cells increased (Figures S2B and S2C). Together, these results demonstrated that the production of IFNα by pDCs in response to co-culture with DENV infected cells is not cell type specific and that pDCs similarly respond to another member of the Flavivirus genus.

Short-range sensing of infected cells by pDCs triggers the IFNα response

Cell-free supernatants containing virus from various infected cell types failed to trigger pDC IFNα production, even when added as crude, non-filtered supernatants at concentrations as high as 20 infectious units per pDC (Figure S3), indicating that transmission of the immunostimulatory signal to pDCs likely requires cell-to-cell contact. To determine whether contact with DENV infected cells favors pDC sensing, we assessed IFNα production by pDCs cultured in transwell chambers with infected cells. Transwell cultures containing DENV infected monocyte-derived dendritic cells (mo-DCs) and pDCs separated by a 0.4 µm permeable membrane did not result in detectable levels of IFNα production by the pDCs (Figure 4A). Similar results were obtained using DENV infected Huh7.5.1, BHK-21, Hela and Vero cells as well as WNV infected cells (Figure 4B), confirming that this feature is not cell type specific or restricted to DENV. Similarly to IFNα, pDCs robustly produced IFNβ when in contact with DENV infected cells, but not when the cells were physically separated by a transwell membrane (Figure 4D). Consistent with these results, IFNβ production by pDCs was not triggered by supernatants from DENV infected cells, and DENV infected cells did not themselves release detectable levels of IFNβ (Figure 4D).
In control experiments using identical transwell culture settings, an agonist of TLR7, a viral RNA immune sensor [9], triggered the production of both IFNα and IFNβ by the pDCs at levels similar to those obtained in the co-culture setting (Figures 4C and 4E), thus ruling out potential non-specific effects of the experimental setting on pDC responsiveness. In agreement with previous reports [29,30], vesicular stomatitis virus (VSV)- or influenza virus (FluAV)-containing supernatants robustly triggered IFNα production by pDCs (Figure 4F). Consistent with these results, VSV and FluAV infected cells in contact with pDCs (Figure 4F, cocult) or separated by a transwell membrane (Figure 4F, TW) triggered IFNα production at similar levels. This suggested that contact with virus infected cells is not a universally employed mechanism to promote pDC activation by RNA viruses. Next, viral transmission across the transwell membrane was assessed by quantifying infectious DENV (Figure 4G) and WNV (Figure 4H) on both sides of the membrane that separated infected cells from recipient cells. To evaluate the possible interference of recipient cells with the detection of extracellular infectivity, we compared two types of recipient cells, i.e., IFNα response-competent pDCs, which are non-permissive to infection (Figure S4), and permissive cells (Figures 4G and 4H, Naïve recipient cells). As expected from their size, infectious viral particles readily flowed across the 0.4 µm membrane (Figures 4G and 4H), thereby permitting viral transmission from infected cells to naïve cells in the absence of direct contact (Figures 4I and 4J). In sharp contrast, type I IFN production by the pDCs was induced exclusively under conditions where cell-to-cell contact between infected cells and pDCs was possible (Figures 4A, 4B and 4D). Collectively, these results demonstrated that exposure of pDCs to the DENV or WNV infected cell milieu, either at defined time points (Figures 3A-C) or continuously (Figures 4A, 4B and 4D), failed to trigger a robust IFN response; rather, pDCs responded to infected cells in a cell-to-cell contact-dependent and/or short-range manner.

DENV infected cells transmit viral RNA to pDCs and induce a TLR7-mediated antiviral state and pDC maturation

pDCs typically respond to viral infection via endolysosome-localized TLR7 or TLR9 sensors that recognize RNA or DNA viral genomes, respectively [9]. Accordingly, we examined the transmission of DENV RNAs to co-cultured pDCs. The presence of DENV RNA in infected cells and co-cultured pDCs (selectively labeled with DiI, a fluorescent membrane dye) was assessed using a highly sensitive DENV RNA-specific fluorescence in situ hybridization (FISH) assay (Figure 5A, upper panels). The analyses were performed after 5 hours of co-culture with DENV infected cells, at which time pDCs already produced IFNα (Figure 2A). DENV RNA (green) was detected as discrete dots inside pDCs (Figure 5A, lower panels). Inspection of consecutive Z-axis sections of co-cultures stained by combined DENV RNA FISH and anti-IFNα immuno-detection revealed that the frequency of DENV RNA-positive pDCs was elevated among both IFNα-positive (i.e., 85%) and IFNα-negative pDCs (i.e., 74.5%) (Figure 5A, summary table). The specificity of these examinations was validated by the absence of DENV RNA-positive pDCs when co-cultured with uninfected cells and when the FISH procedure was performed in the absence of the DENV RNA-specific probe (Figure 5A, summary table, and Figure S5).
The presence of DENV RNA in IFNα-negative pDCs may reflect the time required for DENV RNA to trigger pDCs to produce enough IFNα to be detectable, which may not have occurred by 5 hours of co-cultivation. Alternatively, differential DENV RNA localization in intracellular compartments may modulate its recognition by innate sensors, and/or potential subsets of pDCs may be differentially responsive to the DENV RNA stimulus, in accordance with the maximal detection of about 30% IFNα-positive pDCs at plateau (Figure 2B). Only a few DENV RNA dots were detected inside pDCs, suggesting that transmission is a rare event but sufficient to trigger pDC IFN production. Together, these results indicated that DENV RNA was readily transmitted from DENV infected cells to co-cultured pDCs, supporting the notion that DENV RNA might be recognized by pDC TLR7. Accordingly, a TLR7 antagonist significantly inhibited pDC IFNα production induced by DENV infected cells (Figure 5B). The specificity of this TLR7 antagonist was demonstrated by its inhibition of IFNα production induced by a TLR7 agonist (R848) but not by a TLR9 agonist (ODN2216) (Figure 5B). Collectively, these results suggested that DENV infected cells transfer viral RNA to co-cultured pDCs and trigger TLR7-dependent IFNα production. Next, to further define the nature of the pDC-mediated antiviral state induced by contact with DENV infected cells, we examined the secretion of the inflammatory cytokines IL-6 and tumor necrosis factor (TNF)-α, triggered by activation of the transcription factor NF-κB, which is known to transduce antiviral signaling downstream of TLR7 [1]. TNF-α is known to play a pivotal role in the vascular leakage syndrome, a hallmark of dengue hemorrhagic fever [18]. Sensing of DENV infected cells, but not of their supernatants, specifically triggered pDCs to produce IL-6 and TNF-α at levels comparable to those induced by treatment with a TLR7 agonist (Figure 5C). In addition, ISGs (i.e., MxA and ISG56) were specifically up-regulated in co-cultures of DENV infected cells with pDCs or PBMCs (Figure S6), thus indicating the establishment of an antiviral state. Finally, we determined whether DENV infected cells trigger pDC maturation, as assessed by the up-regulation of the CD83 and CD86 markers at the cell surface. DENV infected cells, but not their supernatants, triggered a rapid increase in the surface expression of CD83 on co-cultivated pDCs (i.e., in CD123 marker-gated cells) (Figure 5D, left panel), accompanied by a slightly delayed augmentation of CD86 cell surface expression (Figure 5D, left panel) and by a concomitant increase in IFNα secretion (Figure 5D, right panel). Collectively, these results demonstrated that sensing of DENV infected cells by TLR7, a sensor of single-stranded RNA, triggers IFNα production by pDCs, along with induction of an inflammatory response, an antiviral state and pDC maturation.

The IFNα response by pDCs is modulated by DENV glycoproteins

To define how pDCs sense DENV infected cells, we analyzed the ability of cells harboring recombinant DENV genomes containing mutations specifying deficiencies in various viral functions to trigger IFNα production by co-cultured pDCs. First, we tested cells containing DENV genomes encoding lethal mutations in the methyltransferase domain of the viral NS5 polymerase (i.e., Rep−/−) [31].
As expected [31], the triple mutation significantly reduced the intracellular level of DENV RNA at 48 hours post-transfection as compared to the wild type (WT) genome (Figure 6A), reflecting a failure to amplify viral RNA (Figure S7A). Consistently, this mutant did not express detectable levels of viral proteins (Figure S7B). Despite comparable intracellular viral RNA levels between the DENV WT and Rep−/− mutant genomes at the onset of co-culture, i.e., 24 hours post-transfection, likely reflecting the input transfected RNA (Figure 6A), cells harboring the Rep−/− DENV mutant genome did not trigger IFNα production by co-cultured pDCs (Figure 6D). Similarly, cells harboring DENV genomes encoding a four amino acid deletion in the capsid (i.e., amino acids V51-to-L54), which significantly compromised both viral RNA replication (Figures 6A and S7A) and viral protein expression (Figure S7B), failed to induce IFNα production by co-cultured pDCs (Figure 6D). Together these results indicated that the pDC IFNα response requires active viral replication in neighboring DENV infected cells. Next, to address the requirement of viral genome release for pDC activation, we tested the effects of co-culture with cells harboring DENV genomes encoding point mutations in the envelope (E) glycoprotein, i.e., the substitutions D215A, H244A or P217A, known to inhibit infectious virus production [32,33]. Consistent with previous reports [32,33], the E glycoprotein mutations did not impair intracellular levels of either viral RNAs or proteins (Figures 6A, S7A and S7B), but they all greatly compromised the production of infectious particles (Figure 6B). Both the D215A and H244A mutations abrogated the release of viral RNA and structural proteins as well as the pDC IFNα response (Figures 6C-D and S7C). Conversely, cells harboring DENV genomes encoding the P217A mutation triggered the IFNα response by pDCs (i.e., ~36% relative to WT) (Figure 6D) at various inducer cell concentrations (Figure S7D), in proportion to the release of extracellular DENV RNA (i.e., ~60% and 26% relative to WT at 24 and 48 hours post-transfection, respectively) (Figure 6C) and viral structural proteins (Figure S7C). Remarkably, the production of infectious virus (Figure 6B) was severely and disproportionately inhibited by the P217A mutation (i.e., 40- to 1,000-fold reduction at 24-to-48 hours post-transfection) as compared to the modest inhibition of the IFNα response by pDCs (i.e., ~2.5-fold reduction over the same time period) (Figures 6D and S7D). These results suggested that infectious virus production is not required and/or is not rate-limiting for pDC activation. Consistently, pDCs were not permissive to DENV infection (Figure S4A); this latter observation is in line with previous demonstrations that pDCs are refractory to infection by other viruses [30,34,35]. Altogether, these results suggested that glycoprotein-dependent release of non-infectious viral components by DENV infected cells might trigger the IFNα response by contacting pDCs.

DENV envelope protein-dependent transfer and activation of the IFNα response by pDCs

To determine whether DENV surface proteins mediate the transmission of viral components to pDCs, we first assessed whether, similarly to DENV RNA (Figure 5A), the DENV envelope proteins are transmitted into pDCs, by inspection of consecutive Z-axis sections of DiD-labeled pDCs in co-culture with cells harboring the WT genome or DENV genomes encoding E protein mutations (Figure S8).
Similar to the DENV genome, we observed E glycoproteins (E GP) in dot-like structures inside the pDCs. The frequencies of E GP dot-positive pDCs were elevated in co-cultures with cells harboring either the WT genome (i.e., around 90%) or the P217A mutation (i.e., above 65%) (Figures 6E and S8), in proportion to the release of extracellular DENV RNA (i.e., 60% relative to WT at 24 hours post-transfection) (Figure 6C). In contrast, cells harboring the DENV genome encoding the H244A mutation in E, which do not release viral particles and fail to trigger the IFN response by pDCs (Figures 6B, 6C, 6D, S7C and S7D), demonstrated little to no transmission of the E GP into the pDCs (Figures 6E and S8). Because the intracellular levels of viral components (i.e., viral RNA, E and capsid proteins) were equivalent in cells harboring the DENV genome encoding the H244A point mutation and in cells harboring the WT genome (Figures 6A and S7B), these results suggested that pDC IFNα production is activated by the glycoprotein-mediated transmission of viral components from DENV infected cells into contacting pDCs. Next, we tested the impact of expressing the DENV surface proteins alone (Figure S9A) on pDC IFN induction. Expression of the envelope proteins alone is known to result, in the absence of nucleocapsid, in the release of viral envelope-containing membrane vesicles, the sub-viral particles (SVPs) (Figure S9B) [36]. Although the glycoproteins were readily transmitted from cells expressing only the DENV surface proteins to the co-cultured pDCs (Figures S9D-F), IFNα production was not triggered (Figure S9C). These observations are in agreement with the transmission of DENV RNA to pDCs and activation via the TLR7 RNA sensor (Figures 5A and 5B). To corroborate these results, we determined whether pDC activation by contact with DENV infected cells requires an internalization-dependent mechanism by testing inhibitors of dynamin (Dynasore) [37], of clathrin-mediated endocytosis (chlorpromazine [38]) and of macropinocytosis (Gö6983, a PKC inhibitor [39,40]). Inhibitors of both dynamin and clathrin-mediated endocytosis, but not of macropinocytosis, abrogated pDC IFNα production triggered by DENV infected cells (Figure S10A), without any effect on ongoing DENV replication and virus production (Figures S10B-C). In addition, these inhibitors did not markedly impair pDC IFNα production induced by a TLR7 agonist (Figure S10A), a cell-permeable imidazoquinoline that passively diffuses into pDCs [41], thus ruling out potential side-effects downstream of TLR7 recognition. These results, in agreement with the requirement for the endolysosome-localized sensor TLR7 (Figure 5B), suggested that pDC IFNα production triggered by DENV infected cells requires glycoprotein-mediated secretion of non-infectious viral components, which are subsequently internalized by the co-cultured pDCs. They also demonstrated that pDC activation triggered by DENV infected cells is distinct from that induced by cells infected by other viruses, such as HCV, LCMV and classical swine fever virus (CSFV), which does not require viral structural protein expression [4,5,7].

Disruption of cell contacts and DENV surface protein clustering by cytoskeleton inhibitors abrogates the pDC IFNα response

Next, we sought to study the regulation by cell contacts of the DENV surface protein-dependent transfer and activation of pDCs.
First, the cytoskeleton organization at the interface between pDCs and DENV infected cells was examined by confocal microscopy. We observed an accumulation of the actin network at the cell contacts (Figures 7A-E), while the microtubule network was not markedly modified at this location (Figure S11A, left panel). In agreement with the importance of secreted structural components for pDC activation (Figure 6), specific immunostaining of non-permeabilized cells revealed that the envelope proteins (i.e., E GP and prM) were both present as clusters at the interface between pDCs and infected cells (Figures 7F-Q and S12). These observations prompted us to define the impact of the cytoskeleton network on cell contact-dependent pDC IFNα production. We showed that two inhibitors of the cytoskeleton network, Latrunculin B and Nocodazole (actin- and microtubule-depolymerizing drugs, respectively), disrupted the actin network in pDC/DENV infected cell co-cultures (Figure 8A), consistent with previous reports [42,43]. As expected, the microtubule network was only perturbed by Nocodazole treatment (Figure S11A) [44]. By imaging flow cytometry analysis of GFP-expressing DENV infected cells co-cultured with pDCs (stained with the pDC marker CD123) (Figure S13), we showed that the frequency of conjugates between pDCs and DENV infected cells was greatly decreased by the inhibitors of the cytoskeleton network (Figures 8B and S13). Both inhibitors, in conjunction with the loss of actin accumulation at the contacts (Figure 8A), impaired E glycoprotein clustering (Figure 8C). Indeed, quantifications performed in a "double-blind" set-up revealed that, while E GP clustering was readily observed at the cell interface in untreated co-cultures (i.e., ~60% of pDCs in close proximity to DENV infected cells displayed E GP clustering), these frequencies were reduced to 15% in co-cultures treated with either inhibitor (Figure 8D). Importantly, the same treatments inhibited IFNα production by the pDCs (Figure 8E). Neither compound inhibited DENV RNA replication in the infected cells or infectious virus production (Figures 8F-G), nor did they impair the internalization ability of pDCs, as assessed by membrane dye uptake (Figure S11B). In addition, they did not inhibit pDC IFNα production triggered by a TLR7 agonist (Figure 8E), thus ruling out potential non-specific effects of these compounds on pDC responsiveness. Altogether, these results suggested that the cytoskeleton-dependent regulation of cell contacts and apposed GP clustering likely favors the subsequent activation of IFNα production by the pDCs.

Cells producing immature DENV particles trigger pDC IFNα production more potently than cells releasing mature virus

The phenotypic analysis of a virus production-defective mutant (i.e., P217A) (Figure 6) revealed that infectious virus production is not required and/or is not rate-limiting for pDC activation. Like many viruses, DENV infected cells release immature, non-infectious particles harboring uncleaved precursor membrane proteins (prM) that are generated by inefficient cleavage of prM by the resident trans-Golgi protease furin [21,22,23,24,25,26]. To determine whether immature particles can serve as vehicles from DENV infected cells to contacting pDCs, we first determined the presence of prM protein dots inside co-cultured pDCs by using an antibody recognizing the pr peptide [45] and by examining consecutive Z-sections by confocal microscopy.
Dots of prM were observed inside pDCs co-cultured with DENV infected cells (Figures S14A and S14C), with very little background staining in pDCs co-cultured with uninfected control cells (Figures S14B and S14C), suggesting that prM (and/or the pr peptide), along with the E GP (Figures 6E, S8 and S9), is readily transferred to the pDCs. Next, to determine the ability of immature particles to convey immunostimulatory RNAs to pDCs, we tested the effects of DENV genomes encoding mutations in the furin cleavage site of the prM protein (i.e., the substitutions R88A, K90A and R91A), which, as expected from previous reports on single mutations [26], failed to produce infectious virus (Figure 9C). By contrast, RNA replication, intracellular viral protein expression (Figures 9A, S7A and S7B), release of viral components (Figures 9B and S7C), and transmission of viral components to the pDCs (Figures 9D and S8D) were maintained at levels comparable to the WT counterparts. Remarkably, the pDCs produced similar levels of IFNα when contacting cells producing non-infectious immature virions vs. WT DENV (Figure 9E). Similar results were obtained when using various concentrations of cells harboring the WT or mutant DENV genomes (Figure S7D). Therefore, these results suggested that cells producing immature particles potently trigger IFNα production by contacting pDCs. Next, to define the specific function of uncleaved prM-containing particles in pDC activation by DENV infected cells, we designed experiments aiming at modulating the levels of prM maturation. Firstly, we assessed the impact of an inhibitor of furin. As expected, this inhibitor markedly decreased the maturation of DENV particles, as shown by an increased prM:E ratio measured by ELISA (Figure 9F). The production of extracellular infectious DENV was also reduced in a dose-dependent manner upon furin inhibitor treatment (Figure 9H), while the levels of intracellular DENV RNA were unchanged (Figure 9G). Remarkably, inhibition of prM cleavage enhanced IFNα production by co-cultured pDCs in a dose-dependent manner (Figure 9I). Increased pDC activation was observed despite a reduction in the release of physical particles, as shown by extracellular DENV RNA measurements (Figure 9H). Altogether these results suggested that the activation of pDCs triggered by contacting infected cells inversely correlates with the level of prM maturation. To further confirm these results, we studied the impact of furin up-regulation. As expected, cells overexpressing furin produced viral particles containing reduced prM:E ratios (i.e., ~10-fold reduction) (Figure 9J). The specific infectivity of DENV particles was increased upon furin overexpression (i.e., ~3-fold increase in the ratio of infectivity to extracellular DENV RNA, comparing furin-overexpressing cells to counterpart control cells). Thus, cells overexpressing furin were compared to counterpart cells that produced either similar levels of intracellular and/or extracellular DENV RNA or, alternatively, similar amounts of infectious virus, by using different MOIs (Figures 9K-L). Our results indicated that cells producing more mature particles were clearly impaired in triggering IFNα production by co-cultured pDCs (Figure 9M). Altogether these results demonstrated that cells producing immature DENV particles are very potent inducers of IFNα production by pDCs, as compared to cells releasing mature virions.
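To make the logic of the R88A/K90A/R91A design concrete, here is a minimal sketch (in Python) of the motif check it implies: furin favors an R-X-[K/R]-R stretch immediately upstream of the scissile bond, and the triple-alanine substitution destroys that consensus. The residue strings are hypothetical placeholders rather than the exact DENV-2 prM junction, and real furin substrate preferences extend beyond this four-residue rule.

    import re

    # Furin consensus at positions P4-P3-P2-P1, cleaving after P1: R-X-[K/R]-R
    FURIN_MOTIF = re.compile(r"^R.[KR]R$")

    def has_furin_site(p4_to_p1):
        # True if the four residues upstream of the scissile bond match the consensus
        return bool(FURIN_MOTIF.match(p4_to_p1))

    wild_type = "REKR"   # hypothetical P4-P1 residues of a cleavable prM junction
    triple_mut = "AEAA"  # R88A/K90A/R91A-style substitutions abolish the motif

    print(has_furin_site(wild_type))   # True  -> prM cleaved, mature particles
    print(has_furin_site(triple_mut))  # False -> uncleaved prM, immature particles

Under this reading, the mutant genome still replicates and releases particles, but every released particle carries uncleaved prM, which is consistent with the loss of infectivity and the preserved pDC stimulation reported above.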
Discussion

DENV has rapidly emerged in recent years as the most significant arboviral disease of humans, with greater than half of the world population at risk of infection [46]. Despite many years of research, the virus-host interactions that determine dengue pathogenesis are still incompletely understood [47]. Nonetheless, the self-limiting febrile symptoms observed in most DENV-contracted cases and the short course of illness suggest a key role for innate immune defenses in controlling DENV infection at early stages [18]. Accordingly, in vivo studies have demonstrated a critical role for type I IFNs in the host defense against DENV [16,18,19]. Furthermore, the activation of pDCs strongly correlates with the disease outcome of DENV infected patients [48]. Importantly, a study of children with DENV infections across a broad range of illness severities suggested that a blunted blood pDC response to systemic infection was associated with higher viremia levels and was a key step in the pathogenic cascade toward severe disease [49]. Although the activation mechanism and exact function are still elusive, altogether these findings highlight the critical roles played by pDCs and the IFN response in disease progression in DENV infected individuals. Here, we revealed that DENV infected cells potently trigger IFNα secretion by non-permissive pDCs, a host response that bypasses the viral evasion of the innate response within infected cells. Furthermore, we demonstrated that TLR7-dependent IFNα production by pDCs in response to infected cells is concurrent with other hallmarks of innate immunity, such as inflammatory cytokine secretion, ISG up-regulation and pDC maturation. In agreement with our results, Rodriguez et al. showed that DENV-containing supernatants failed to trigger pDC IFNα production [50], while other reports suggested that they triggered pDC activation [48,51]. This discordance may be explained by the preparation and concentration of the supernatants and the large numbers of pDCs used in the latter reports. Remarkably, the results of our study demonstrated that, despite continuous exposure to the infected cell milieu, physical separation from infected cells precludes the IFN response by pDCs. Consistently, strong pDC IFNα secretion was induced by co-cultured DENV infected cells (i.e., up to 0.5 µg/ml), indicating that cell-to-cell contact is a key feature of pDC activation. Interestingly, cell-to-cell transmission of immunostimulatory signals appears to be a common characteristic of pDC induction, as shown in this report for two members of the Flavivirus genus, DENV and WNV, and as previously reported for other viruses, i.e., HCV, HIV, LCMV and CSFV [3,4,5,6,7]. Specifically, our previous results obtained in the context of HCV indicated that pDC stimulation occurs via viral RNA-containing exosomes. In this context, we suggested that the concentration of immunostimulatory exosomes in the supernatants was below an activating threshold for pDC stimulation, while this threshold might be reached in the intercellular space when cells are in contact [3]. Importantly, we showed here that viral structural components are detected in clusters at the interface between pDCs and infected cells. This finding suggests that cellular surface molecules and/or structures might concentrate the PAMP carrier at the cell contacts, thereby enhancing transmission to pDCs.
We further revealed that the actin network is pivotal both for this clustering of viral components at the pDC-infected cell interface, likely by regulating cell-to-cell contacts, and for pDC activation. Based on this observation, it is conceivable that the cytoskeleton structure serves as a platform contributing to the cell-to-cell transmission of viral components to the pDCs. Additional experiments will be required to test these hypotheses and to determine whether, for the various viruses that trigger the pDC IFN response in a cell-to-cell contact-dependent manner, the mechanism of activation involves common or distinct cellular factors and/or structures at the contacts. The mechanism we have identified is distinct from the conventional induction of the innate response, which typically occurs by the recognition of viral nucleic acids within infected cells [1,2]. Moreover, in contrast with the previously characterized induction of pDC IFNα production through contact with infected cells [3,4,5,7], here we have defined a sensing pathway that requires E glycoprotein-dependent secretion of viral components, notably viral RNA, to trigger the pDCs. As such, it differs from the mechanism of induction by cells infected by other viruses, which does not require viral structural proteins [3,4,5,7]. Indeed, our results illustrate the crucial role of the DENV envelope proteins in the induction of the innate response by neighboring IFN-producing pDCs that are not permissive to infection. Importantly, our results revealed that cells producing uncleaved prM-containing immature particles triggered IFNα production by pDCs more potently than cells efficiently producing fully mature virions.

Figure 6 (legend fragment): ... pDCs in co-cultures for 8 hours. Results are expressed as the percentages of DiI-stained pDCs that contain E GP dot(s) detected by immunostaining and analyzed by confocal microscopy. Representative pictures are displayed in Figure S8. Similar results were obtained in 3 independent experiments and ~20 pDCs surrounded by E GP-positive/DiI-negative cells were observed per experimental condition. doi:10.1371/journal.ppat.1004434.g006

These immature particles are known to be deficient for the membrane fusion step, which occurs in the endo-lysosomal compartment during cell entry [52,53,54]. Interestingly, recognition of viral RNA by the TLR7 sensor also takes place in this cellular compartment [1,55]. Therefore, based on these findings, we suggest a working model in which an extended retention of fusion-deficient immature particles within the endosomal compartment may favor the exposure of their viral genome for TLR7 recognition. In contrast, mature virions, which are fusion-competent, could escape from this compartment by membrane fusion. Additional experiments will be required to firmly validate and generalize this new concept. Several reports have demonstrated that a large proportion of uncleaved prM-containing immature particles is released from DENV infected cells, i.e., 30-to-40% of viral particles [21,22,23,24,25,26], with variable prM content on a per-particle basis [56,57]. Consistently, we showed that furin overexpression reduced the levels of immature virus otherwise produced by DENV infected cells, concomitant with a reduced pDC IFN response. Although direct proof is still required, current evidence supports the in vivo existence of uncleaved prM-containing virus.
Previous studies have demonstrated that a proportion of B cells isolated from DENV infected individuals produces monoclonal antibodies against prM [58,59]. In addition, the characterization of these anti-prM antibodies indicated that they are a major component of the serological response to DENV infection, leading to increased replication in Fc receptor-bearing cells via antibody-dependent enhancement (ADE) [56,58,60]. Importantly, our results illustrate a previously unsuspected function of these immature particles in innate immunity, in mediating an IFN response by non-permissive bystander pDCs. Indeed, the results of our study imply that the suboptimal furin-cleavage sequence, likely evolutionarily conserved to favor efficient export of infectious virus by preventing premature membrane fusion in the secretory pathway and cell entry of immature virus into Fc-receptor-expressing cells by ADE [21,22,23,27,28], might also, by producing an IFN inducer, contribute to regulating dengue pathogenesis. It is possible that pDC activation by infected cells elicits a strong local innate response that may lead to suppression of viral replication or, alternatively, to the subsequent recruitment of DENV-permissive cells and systemic viral spread. It is also conceivable that the interplay between pDCs and other cells regulating the innate response, in turn, modulates this newly identified innate sensing mechanism of infected cells and/or the homing of pDCs to the infection site. Productive infection of cells with a wide range of enveloped viruses depends critically on the processing of the viral surface glycoproteins by cellular proteases [20]. Yet, depending on the viral variant/strain, such cleavage might be limited by a differential requirement for certain host proteases, as their expression can be tissue-restricted. These selective requirements may contribute to virulence, as proposed for influenza virus [61]. Additionally, suboptimal cleavage sites are evolutionarily maintained by sequence features such as the presence of acidic residues or glycosylation sites adjacent to the cleavage site [23,62]. These events lead to the release of viral particles with uncleaved glycoproteins, as shown for viruses such as measles virus [63], influenza virus [61,64], DENV and WNV [53,56]. Therefore, our results, by uncovering a functional role of immature viral particles in innate immunity, may have broad implications for our understanding of the host-virus relationship.

Materials and Methods

Reagents

The antibodies used for immunoblotting were mouse anti-E glycoprotein (4G2 and 3H5), kindly provided by P. Despres (Pasteur Institut, Paris, France), and mouse anti-capsid (6F3).

Introduction of mutations into the genomic-length DENV-2 strain NGC cDNA clone pDVWS601

The introduction of mutations into the genomic-length DENV-2 strain NGC cDNA clone pDVWS601, encoding amino acid substitutions in the E glycoprotein (i.e., H244A, D215A, P217A) and NS5 (i.e., Rep−/−, containing the multiple amino acid substitutions G81A, G83A and G85A), has been described previously [31,32]. Mutations encoding amino acid substitutions in prM (amino acids R88A/K90A/R91A) and an in-frame four amino acid deletion in the capsid (amino acids V51-to-L54) were first introduced into DENV-2 subgenomic cDNA fragments by overlap-PCR (OL-PCR) using mutagenic primers. The sequences of the primers are described in Table S1.
The OL-PCR fragments were purified, cleaved with BsrGI and SphI, and then transferred into the pDVWS601 plasmid that had been cleaved with the corresponding restriction enzymes. The presence of the mutations and the sequence of the PCR-derived regions were confirmed by sequencing. In vitro RNA transcripts were prepared from the parental and mutated pDVWS601 plasmids as described above and transfected into Huh7.5.1 cells using the Lipofectamine 2000 transfection reagent (Life Technologies), according to the manufacturer's instructions. One µg of RNA was used to transfect a 60% confluent cell monolayer contained in a single well of a 6-well plate following the manufacturer's protocol. Six hours post-transfection, the cells were either harvested for the quantification of viral RNA (6 hour time point) or washed 3 times with PBS, and fresh culture medium was added to the cells for additional incubation times. At 24 and 48 hours post-transfection, the cells were harvested for the determination of RNA and protein levels, and their supernatants were collected for the quantification of viral RNA and infectious titers or concentrated by ultracentrifugation for the determination of protein levels by Western blot. At 24 hours post-transfection, the cells were harvested and co-cultured with isolated pDCs for 18-20 hours.

Co-culture experiments

Forty-eight hours prior to co-culture, cells were infected at a MOI of 3 using a viral stock of WNV or DENV. Unless otherwise indicated, 2x10^4 pDCs were co-cultured with 10^5 infected cells, transfected cells or uninfected parental cells, or treated with 100 µl of supernatant from the latter cells, in a 200 µl final volume in 96-well round-bottom plates incubated at 37°C/5% CO2. Eighteen to twenty hours later, cell-culture supernatants were collected and the levels of IFNα, IFNβ, TNFα and IL-6 were measured using commercially available ELISA kits specific for IFNα and IFNβ (PBL Interferon Source) and for TNFα and IL-6 (Affymetrix), following the manufacturers' instructions. When indicated, 10^5 infected cells or uninfected cells were co-cultured with 3x10^4 pDCs or with 10^5 naïve recipient cells, as indicated, in 96-well format transwell chambers separated by a 0.4 µm membrane (Corning).

Figure 9 (legend fragment): ... with extracellular prM:E ratios (J), intracellular DENV RNA (K), extracellular DENV RNA and infectious virus production (L), and quantification of IFNα secretion by co-cultured pDCs (M). Results are expressed relative to 293T cells infected at a MOI of 1 in the absence of furin overexpression (indicated as 293T-1), set to 100 (error bars represent the means ± SD, 3 independent experiments in triplicate). Arrows indicate results below the detection limit of the assay, as described in the legend of Figure 6. NA; Not Applicable. doi:10.1371/journal.ppat.1004434.g009

Immunostaining and FACS analysis

At the indicated times, cells were harvested and resuspended using a 0.48 mM EDTA-PBS solution (Life Technologies). After incubation with Fc receptor blocking reagent (MACS Miltenyi Biotec) for 10 minutes at 4°C, surface staining of the pDC markers CD123 and BDCA-2 and/or the cell differentiation markers CD83 and CD86 was performed by a 40 minute incubation at 4°C with 5 µg/mL of the indicated combinations of PE-conjugated mouse anti-CD123, APC-conjugated anti-BDCA-2, PerCP-conjugated anti-CD83 and APC-conjugated anti-CD86, diluted in staining buffer (PBS without calcium and magnesium, with 2% FBS), followed by PBS washes.
Cells were then fixed by incubation for 20 minutes at room temperature with 4% paraformaldehyde, followed by a 20 minute incubation with 0.1 M glycine-PBS at room temperature and two PBS washes. For intracellular immunostaining of IFNα, co-cultivated cells were treated with 1 µl/ml GolgiPlug solution (BD Bioscience) before collection. After the fixation and CD123-staining steps, IFNα was detected by a 40 minute incubation with APC-conjugated mouse anti-IFNα (MACS Miltenyi Biotec) diluted at 1:10 in permeabilization buffer (BD Bioscience). Cells were then washed twice with permeabilization buffer and resuspended in staining buffer. Flow cytometric analysis was performed using a Digital LSR II, and the data were analyzed with FlowJo software (Tree Star). The corresponding control isotypes served to define the specific signal.

Detection of DENV RNA in pDC-infected cell co-cultures by fluorescent in situ hybridization (FISH) analysis

After isolation, 5x10^4 pDCs were stained using 0.5 µM Vybrant cell-labeling solution (CM-DiI, Life Technologies) by successive incubations for 10 and 15 minutes at 37°C and 4°C, respectively. Labeled pDCs were washed twice with PBS and then co-cultured with pre-plated DENV infected cells for 5 hours at 37°C in glass-bottom 96-well plates (Fisher Scientific) pretreated with poly-L-lysine at 8 µg/mL. After 4% PFA fixation at room temperature and PBS washing, DENV plus-strand RNA was detected using a probe set that targets a region between nucleotide positions 8437-to-9685 of the DENV-2 NGC genome (Panomics/Affymetrix) according to the manufacturer's instructions. For IFNα immunostaining, the cells were permeabilized by incubation for 7 minutes in PBS containing 0.3% (v/v) Triton X-100 and 3% (w/v) BSA, then incubated with mouse anti-IFNα antibody (MACS Miltenyi Biotec) at 2 µg/ml in PBS containing 3% BSA for 40 minutes at room temperature, followed by an incubation with Alexa 647-conjugated anti-mouse antibody (Life Technologies) and Hoechst dye for 40 minutes at room temperature. As controls, FISH detection of DENV RNA was performed in co-cultures of pDCs with non-infected cells and in co-cultures of pDCs with DENV infected cells by omitting the DENV-specific probe, following the same hybridization and immunostaining procedure. Images were acquired with a Zeiss LSM 710 laser scanning confocal microscope and analyzed with Image J (http://rsb.info.nih.gov/ij) and IMARIS (Bitplane Inc.) software packages.

Immuno-detection of DENV E and prM surface proteins and confocal analysis

After immuno-isolation, pDCs were stained with 0.5 µM Vybrant cell-labeling solution (CM-DiI) as described above. 4-to-5x10^4 DiI-labeled pDCs (DiI-pDCs) were co-cultured with 4-to-5x10^4 DENV infected Huh7.5.1 cells for 8 hours at 37°C. For analysis of DENV E and prM transfer into pDCs and of cell contacts, co-cultures were performed in a LabTek II Chamber Slide System (Nunc). After 4% PFA fixation and three PBS washes, cells were permeabilized for 7 min with 0.1% Triton X-100 in PBS prior to immunostaining. For analysis of DENV surface protein clustering at the cell contacts, co-cultures were incubated in 96-well optical-bottom plates. After 4% PFA fixation and three PBS washes, immunostainings were performed without a permeabilization step, as previously described [71].
After a blocking step (PBS, 3% BSA), actin filaments were stained with CF488A-conjugated phalloidin (Biotium) at 1.25 U/mL, α-tubulin was stained with mouse anti-α-tubulin (DM1A clone, Sigma) at a 1:2,000 dilution, DENV E glycoproteins were detected using an anti-E antibody (3H5 clone) at a 1:500 dilution and an anti-prM antibody (DM-1 clone, Abcam) at a 1:50 dilution, and IFNα was detected by a mouse anti-IFNα (Miltenyi) at a 1:20 dilution. Antibodies were diluted in 3% BSA-PBS and added to the cells for a 1 hour incubation at room temperature. After three washes with PBS, cells were incubated with an Alexa 647-conjugated anti-mouse antibody (for detection of the anti-α-tubulin and anti-E antibodies) or an Alexa 488-conjugated anti-mouse antibody (for detection of the anti-prM antibody) at a 1:1,000 dilution in 3% BSA-PBS, added to the cells along with Hoechst diluted at 1:500 (Molecular Probes), for a 1 hour incubation at room temperature. After three washes with PBS, cells in 96-well plates were directly observed and cells in LabTek chambers were mounted with Mowiol prior to observation. Images were acquired with a Zeiss LSM 710 laser scanning confocal microscope and analyzed with Image J (http://rsb.info.nih.gov/ij) and IMARIS (Bitplane Inc.) software packages.

Analysis of intracellular and extracellular RNA levels

RNAs were isolated from cells or supernatants harvested in guanidinium thiocyanate citrate buffer (GTC) by a phenol/chloroform extraction procedure, as previously described [3]. The efficiency of RNA extraction and reverse transcription-real-time quantitative PCR (RT-qPCR) was controlled by the addition of carrier RNAs encoding Xef1a (Xenopus transcription factor 1a) in vitro transcripts to supernatants diluted in GTC buffer. DENV RNA as well as Xef1a and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA levels were determined by RT-qPCR using an iScript RT kit (Biorad) and a One-Step PCR Master Mix kit for qPCR, and analyzed using the StepOnePlus Real-Time PCR system (Life Technologies). The sequences of the primers used for RT-qPCR are described in Table S1. Extracellular and intracellular DENV RNA levels were normalized to Xef1a and GAPDH RNA levels, respectively.

Analysis of extracellular infectivity

Infectivity titers in supernatants were determined by end-point dilution using Huh7.5.1 cells. Focus forming units (ffu) were detected 72 hours after infection by GFP expression for WNV and by anti-E glycoprotein-specific immunofluorescence for DENV. Briefly, Huh7.5.1 cells were fixed with 4% PFA and permeabilized by incubation for 7 minutes in PBS containing 0.1% Triton X-100. Cells were then blocked in PBS containing 3% BSA for 15 minutes and incubated for 1 hour with mouse anti-E glycoprotein (clone 3H5) hybridoma supernatant diluted at 1:200 in PBS containing 1% BSA. After 3 washes with PBS, cells were incubated for 1 hour with a secondary Alexa 555-conjugated anti-mouse antibody (1:1,000 dilution) and Hoechst dye (1:1,000 dilution) in PBS containing 1% BSA. The percentages of E-positive cells and GFP-expressing cells were determined using a Zeiss Axiovert 135 microscope.
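As a concrete companion to the titration procedure above, the short Python sketch below turns focus counts from an end-point dilution series into a titer in ffu/ml. The inoculum volume and the countable-well range are assumptions chosen for illustration, not values stated in this protocol, and the counts are invented.

    def titer_ffu_per_ml(focus_counts, dilution_factors, inoculum_ml=0.1):
        # Average ffu/ml over wells with countable foci (assumed 5-100 per well);
        # denser or empty wells are excluded from the estimate.
        titers = [n / inoculum_ml * d
                  for n, d in zip(focus_counts, dilution_factors)
                  if 5 <= n <= 100]
        if not titers:
            raise ValueError("no countable wells")
        return sum(titers) / len(titers)

    # foci counted 72 hours post-infection at serial 10-fold dilutions (illustrative)
    counts = [400, 38, 4, 0]
    dilutions = [1e2, 1e3, 1e4, 1e5]

    print("%.2e ffu/ml" % titer_ffu_per_ml(counts, dilutions))  # 3.80e+05 ffu/ml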
Analysis of concentrated viral supernatants and cell lysates by western blot

Viral supernatants were filtered through a 0.45 µm filter (Corning) and concentrated prior to Western blot analysis by ultracentrifugation at 110,000 x g for 2 hours at 4°C using a SW41 rotor. The pellets were re-suspended in PBS. Viral pellets and cell lysates were extracted using lysis buffer (150 mM NaCl, 50 mM Tris-HCl pH 8, 1% NP40, 0.5% deoxycholate, 0.1% sodium dodecyl sulfate) and analyzed by Western blotting using hybridoma supernatants containing anti-E (4G2) and anti-capsid (6F3) at a dilution of 1:500 and anti-actin at 1 µg/ml, followed by secondary horseradish peroxidase-coupled antibodies and chemiluminescence.

Imaging combined with flow cytometry analysis of pDC/DENV cell conjugates

Huh7.5.1 cells were transduced with a retroviral-based vector pseudotyped with the VSV glycoprotein to stably express GFP, as previously described [72]. Forty-eight hours prior to co-culture with pDCs, GFP-expressing Huh7.5.1 cells were infected at a MOI of 3 using a viral stock of DENV. 10^5 GFP-expressing DENV infected cells were co-cultured with 3x10^4 pDCs in low-adherence microplates designed for cell harvesting by temperature reduction (Nunc UpCell 96F Microwell Plate, Thermo Scientific) for 5 hours at 37°C in the presence, or not, of Latrunculin B and Nocodazole (1 µM), as indicated. After 4% PFA fixation, co-cultured cells were harvested by equivalent multi-pipetting at room temperature and washed three times with staining buffer (PBS without calcium and magnesium, with 2% FBS). After incubation with Fc receptor blocking reagent (MACS Miltenyi Biotec) for 10 minutes at 4°C, surface staining of the pDC marker CD123 was performed by a 40 minute incubation at 4°C with 5 µg/mL of APC-conjugated mouse anti-CD123 diluted in staining buffer, followed by washes with staining buffer. Co-cultured cells were analyzed by ImageStream X technology (Amnis) at x60 magnification using IDEAS software. The cell population defined as pDC/DENV cell conjugates comprises conjugates of at least one CD123+ cell and at least one solely GFP+ cell among the total of APC+ cells, GFP+ cells and conjugates. The cell populations were sorted by using masks (IDEAS software) to eliminate (i) non-specific signals, i.e., double-positive single cells, and (ii) cells with background levels of APC signal. Post-cell sorting, the accuracy of the gated cell populations with regard to the defined criteria was controlled by visual inspection of the individual pictures in the gated cell populations (i.e., assessment of 90 randomly picked pictures of the population defined as conjugates). The percentages of gated single cells or conjugates with an accurate phenotype according to the defined criteria among the total of examined pictures per category of cell population were: 97% for the GFP+ gated population, 99% for the APC-CD123+ gated population and 89% for conjugates.

Modulation of prM maturation and detection by ELISA

293T cells stably expressing furin were generated by transfection using polyethylenimine and selection using hygromycin (at 5 mg/ml). The decRRVKR-CMK inhibitor (Calbiochem) was used to inhibit furin activity in Huh7.5.1 cells co-cultured with the pDCs, at the indicated concentrations. The levels of prM maturation were analyzed by detection of E and prM by ELISA, as previously described [58]. Briefly, serial dilutions of viral supernatants were incubated on anti-E (4G2) antibody-coated 96-well plates. E and prM were then detected using a humanized version of the 3H5 mAb (hu3H5) and anti-prM, respectively. The prM:E ratios were calculated using the viral supernatant dilution with E detection in the linear range.
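The prM:E calculation described above reduces to a simple rule: pick the supernatant dilution whose E signal falls within the linear range of the assay and take the prM/E signal ratio at that dilution. A minimal Python sketch follows; the dilution series, optical densities and linear-range bounds are illustrative assumptions, not measured data from this study.

    def prm_to_e_ratio(dilutions, e_signal, prm_signal, linear=(0.2, 1.5)):
        # Return (dilution, prM:E) at the first dilution whose E reading lies
        # within the assumed linear range of the ELISA.
        lo, hi = linear
        for d, e, p in zip(dilutions, e_signal, prm_signal):
            if lo <= e <= hi:
                return d, p / e
        raise ValueError("no dilution with E signal in the linear range")

    dilutions = [10, 40, 160, 640]   # fold-dilutions of viral supernatant
    e_od = [2.8, 1.1, 0.35, 0.09]    # anti-E (hu3H5) readings; saturated at 1:10
    prm_od = [1.9, 0.7, 0.22, 0.05]  # anti-prM readings at the same dilutions

    d, ratio = prm_to_e_ratio(dilutions, e_od, prm_od)
    print("1:%d dilution, prM:E = %.2f" % (d, ratio))  # higher ratio = more immature

On this scheme, a furin inhibitor would shift the prM:E ratio upward (more immature particles), while furin overexpression would shift it downward, matching the readouts reported in Figures 9F and 9J.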
Expression of DENV glycoproteins
DENV-2 NGC prM and E genes were amplified from pSVprME [73] using primers ADVprME_Fwd (GATCCCCGGGACCGCCACCATGGTGAA) and ADVprME_REV (GATCCCCGGGAGCTTGATATCAGGCCTGC) and cloned into the SmaI site of the adenovirus shuttle vector pDC104, under the control of the CMV promoter, to produce pAdvprME. pAdvprME was transfected into cells using the X-tremeGENE HP DNA Transfection Reagent, following the manufacturer's instructions. Six hours post-transfection, the cells were washed with PBS and fresh culture medium was added for additional incubation. At 48 hours post-transfection, the cells were harvested and co-cultured with isolated pDCs for 18-20 hours. In parallel, protein levels were determined by Western blot in the harvested cells and in their supernatants, which were concentrated using a Vivaspin concentrator (100-kDa cut-off, Sartorius) by centrifugation at 3,000 × g for 30 minutes.

Statistical analysis
Paired Student's t-test was used to analyze the data, with p-values less than 0.05 considered significant. Data were also analyzed using a two-way non-parametric analysis of variance (ANOVA), followed by comparison with the Levene test, using XLSTAT software. Triangles indicate experimental conditions that belong to a separate group statistically different from the others.

Figure S10. Impact of internalization inhibitors on IFNα production by pDCs co-cultured with DENV-infected cells. Impact of inhibitors of clathrin-mediated endocytosis (chlorpromazine, CPZ, at 14 µM), of dynamin-dependent internalization (dynasore, at 100 µM) and of macropinocytosis (the PKC inhibitor Gö6983, GO, at 5 µM) on pDC IFNα production triggered by DENV-infected cells. (A) Quantification of IFNα in the supernatants of pDCs co-cultured with DENV-infected Huh7.5.1 cells (DENV cells) or, as a control, stimulated by the TLR7 agonist R848 (50 ng/mL), an imidazoquinoline known as a cell-permeable weak base that passively diffuses into pDCs. Results are expressed relative to the IFNα produced in the absence of inhibitor, set to 100 (means ± SD, n = 4). Arrows indicate results below the limit of detection of the IFNα ELISA (i.e., 12.5 pg/ml). Quantification of the levels of infectious virus production (B) and intracellular DENV genome equivalents (GE) (C) by Huh7.5.1 cells infected by DENV at MOI 3 for 48 hours (as for the co-culture in (A)) and then incubated, or not, with the inhibitors exactly as in (A) (i.e., same incubation time and concentration). Results are expressed as percentages relative to untreated DENV cells (means ± SD, n = 4). (TIF)

Related to Figure 7F-K and 7L-Q, respectively. Left panels: confocal analysis of the DENV envelope proteins E GP (purple) and prM (green), DiI-stained pDCs (red), actin detected by Alexa 488-conjugated phalloidin (green) when indicated, and nuclei (blue). Middle panels: confocal microscopy analysis of DENV envelope proteins and nuclei (blue) projected onto the phase-contrast image. Right panels: confocal microscopy analysis of DENV envelope proteins and nuclei (blue). Star mark (*): pDCs; hash mark (#): Huh7.5.1 cells. Similar results were obtained in 3 independent experiments. (TIF)

Figure S13. Analysis of the conjugates between pDCs and DENV-infected cells by imaging flow cytometry, related to Figure 8B. Imaging flow cytometry analysis (ImageStream) of DENV-infected Huh7.5.1 cells, stably expressing GFP and co-cultured with pDCs for 8 hours, as described in Figure 8B.
Lack of correlation between non-labile iron parameters, total carbonyl and malondialdehyde in major thalassemia

Thalassemia patients are at high risk of iron-induced toxicity and oxidative stress consequences. The present cross-sectional study was conducted to determine whether or not lipid peroxidation or protein oxidation is correlated with iron parameters in patients with thalassemia major. To test this hypothesis, malondialdehyde and total carbonyl were correlated with the degree of excess iron concentration in the patients. A total of 118 Arabic Iraqi patients and 30 healthy children participated in the present study. Results showed a significant increase (p<0.05) in serum total carbonyls, malondialdehyde and the iron indices of patients as compared with the control group. Total iron binding capacity and transferrin concentrations decreased significantly (p<0.05) in patients with thalassemia compared with the control group. The results also showed a lack of significant correlation between each of serum malondialdehyde and total carbonyl and each component of iron status. In conclusion, total carbonyls and malondialdehyde were increased in thalassemia patients, indicating the vulnerability of these patients to tissue injury caused by oxidative stress. The formation of total carbonyl and malondialdehyde is independent of excess non-labile iron concentration, indicating that different mechanisms are involved in injury caused by the labile iron and in the formation of oxidation end products.

Introduction
Thalassemia major can result in severe complications and death because of a deficiency in, or lack of, synthesized hemoglobin A; patients with this disease are dependent on blood transfusion. (1) β-Thalassemia is an important health problem in different Iraqi governorates because this disorder displays a high genetic carrier rate and a high frequency of consanguineous marriages. (2,3) However, genetic carriers of β-thalassemia account for a higher percentage than patients manifesting β-thalassemia and comprise a significant percentage of the total population. (2) Iron metabolism disorders, including iron deficiency anemia and excessive iron storage, are common in humans. Iron is essential for oxidation-reduction catalysis and bioenergetics; however, this element may pose health risks because of the formation of toxic oxygen radicals that can attack biological molecules if such radicals are not eliminated properly. This condition is possible in the presence of excess iron concentrations, as in transfusion-dependent patients with thalassemia. Hence, specialized molecules for the acquisition, transport (transferrin), and storage (ferritin) of iron in a soluble, non-toxic form have evolved. (4) Oxidative stress is important in the pathophysiology of thalassemia and other congenital and acquired hemolytic anemias. Reactive oxygen species (ROS) degrade polyunsaturated lipids and form malondialdehyde (MDA), which is mainly present in the enol form. (5) MDA is one of many reactive electrophilic species that cause toxic stress in cells and form stable covalent protein adducts that are referred to as advanced lipoxidation end products. (6) These modifications by MDA can cause both structural and functional changes in the oxidized proteins. This naturally occurring product of ROS activity is used as a biomarker to determine the level of oxidative stress in organisms. (7) Organisms are constantly exposed to various ROS that induce protein oxidation.
Protein carbonyls are efficient biomarkers of oxidative stress because of the relatively early formation and relative stability of carbonylated proteins. (8) However, the nature of the relationships among high levels of protein carbonyls, oxidative stress, and disease remains uncertain. Reactive carbonyl compounds, such as aldehydes and dicarbonyls, exhibit many biological properties. Aldehydes react with proteins to form adducts that induce protein dysfunction and alter cellular responses. (9) The present study was conducted to examine the possible dependence of MDA and total carbonyl formation on non-labile iron status parameters in blood transfusion-dependent patients with thalassemia.

Subjects and Methods
Patients. A total of 118 Arabic Iraqi male patients (aged 4 to 12 years) with β-thalassemia major participated in the present study. These patients were registered as patients with β-thalassemia major in the Thalassemia Unit at Al-Zahra'a Teaching Hospital in Najaf City, Iraq. The condition was diagnosed by observing clinical symptoms and conducting hematological and hemoglobin HPLC analyses. Hemoglobin HPLC analysis was conducted using an HPLC instrument (VARIANT™ β-Thalassemia Short Program). The patients received approximately 15 ml of packed red blood cells/kg of body weight at each transfusion (2-6 week intervals) to maintain hemoglobin levels above 9.5 g/dl. Patients were under chelation therapy with desferrioxamine B (Desferal), given as a subcutaneous infusion at least four times a week, at doses ranging from 30 to 60 mg/kg body weight/day. The median duration of thalassemia was 6.2 years, with a range of 1.8 to 9.3 years. The duration of the treatment was 3.1 ± 8.7 years. None of the participating patients had undergone splenectomy. Endocrinologic, hepatologic and cardiac evaluations were performed regularly by the physicians. Blood samples from patients were collected 7-10 days after the last transfusion and just before the next transfusion. Serum C-reactive protein (CRP) was negative in all of the samples (CRP < 6 mg/L); a normal CRP can be used to exclude increased ferritin concentrations caused by acute-phase reactions. The present study also excluded patients with apparent diabetes mellitus, infection and inflammation, or heart diseases, as well as patients from non-Arabic ethnic groups. Written consents were obtained from the patients' parents according to the Kufa University ethical rules.

Controls. Thirty healthy male children in a similar age range to the patients were included in the control group. None of the healthy subjects was anemic or exhibited an evident systemic disease.

Methods. Blood samples were collected from individuals in the morning before breakfast and then placed in plain tubes. Serum was separated by centrifugation after clotting. Serum iron levels were estimated using the Ferrozine colorimetric method, (10) and total iron-binding capacity (TIBC) was estimated colorimetrically by the following procedure. (11) Excess iron was added to the serum to saturate transferrin, the unbound iron was precipitated with basic magnesium carbonate, and the iron in the supernatant was then determined. The unsaturated iron-binding capacity (UIBC), which reflects the amount of protein (apotransferrin) still available to bind iron, can be estimated from the formula UIBC = TIBC - serum iron. A ferritin quantitative kit based on a solid-phase enzyme-linked immunosorbent assay (ELISA) was supplied by Monobind Inc. (Lake Forest, CA).
The assay system utilized a rabbit anti-ferritin antibody immobilized on the solid phase (microtitre wells) and a mouse monoclonal anti-ferritin antibody in the antibody-horseradish peroxidase (HRP) conjugate solution. Estimated total iron body stores (ETIBS) were calculated using the following equation: (12)

ETIBS (in μmol) = (serum ferritin in μg/L) × 143

The transferrin saturation percentage (TS%) was calculated from the following equation: (13)

TS% = (serum iron/TIBC) × 100%

The transferrin concentration was calculated using the following equation: (14)

Transferrin conc. (g/L) = (serum iron (μmol/L)/TS%) × 3.98

This equation is based on the maximal binding of 2 mol Fe3+/mol of transferrin and a molecular weight of 79,570 g/mol for transferrin. (14) Total carbonyl concentrations were determined using the CellBiolabs Protein Carbonyl ELISA kit. Briefly, bovine serum albumin (BSA) standards or protein samples were adsorbed onto a 96-well plate for 2 h at 37°C. The protein carbonyls present in the sample or standard were derivatized with dinitrophenylhydrazine to dinitrophenyl (DNP)-hydrazone and then probed with an anti-DNP antibody and an HRP-conjugated secondary antibody. The protein carbonyl content of an unknown sample was determined by comparison with a standard curve prepared from predetermined reduced and oxidized BSA standards. The CellBiolabs MDA Adduct ELISA kit, an enzyme immunoassay used to detect and quantify MDA-protein adducts, was used to determine MDA concentrations. The quantity of MDA adduct in a protein sample is determined by comparing its absorbance with that of a known MDA-BSA standard curve. BSA standards or protein samples were adsorbed onto a 96-well plate for 2 h at 37°C. The MDA-protein adducts present in the sample or standard were probed with an anti-MDA antibody and then with an HRP-conjugated secondary antibody. The MDA-protein adducts in an unknown sample were determined by comparison with a standard curve prepared from a predetermined MDA-BSA standard.

Statistical Analysis. The distributions of the variables were examined using the Kolmogorov-Smirnov test. This analysis divided the variables into two types according to their statistical distribution: normally distributed variables and non-normally distributed (nonparametric) variables. For the normally distributed variables, results are expressed as mean ± SD; a pooled t-test was used for comparisons between the patient and control groups, and Pearson's correlation coefficients (r) were calculated to estimate the correlations between parameters. For the nonparametric variables, which are not normally distributed, results are expressed as medians in addition to mean ± SD; the Mann-Whitney U test was used for comparisons between the patient and control groups, and Spearman's correlation coefficients (ρ, rho) were calculated to estimate the correlations between parameters. All statistical analyses were performed using the SPSS Statistics ver. 19.0.1 multilingual program (2010; IBM, Armonk, New York). The forecasting study was performed using the "Regression Forecasting Model" software purchased from Business Spreadsheets, USA.
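The iron-status formulas above reduce to simple arithmetic. As a minimal sketch, the snippet below evaluates the four derived indices for one hypothetical set of laboratory values (the input numbers are illustrative only, not patient data):

```python
# Minimal sketch of the iron-status calculations given in the Methods.
# Input values are hypothetical illustration numbers.

serum_iron = 40.0   # serum iron, umol/L
tibc = 50.0         # total iron-binding capacity, umol/L
ferritin = 3000.0   # serum ferritin, ug/L

uibc = tibc - serum_iron                  # unsaturated iron-binding capacity, umol/L
ts_pct = serum_iron / tibc * 100.0        # transferrin saturation, %
etibs = ferritin * 143.0                  # estimated total iron body stores, umol
transferrin = serum_iron / ts_pct * 3.98  # transferrin concentration, g/L

print(f"UIBC={uibc:.1f} umol/L  TS={ts_pct:.1f}%  "
      f"ETIBS={etibs:.0f} umol  transferrin={transferrin:.2f} g/L")
# -> UIBC=10.0 umol/L  TS=80.0%  ETIBS=429000 umol  transferrin=1.99 g/L
```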
Results and Discussion
The iron indices in patients with thalassemia and the control group are presented in Table 1. A significant increase (p<0.05) was observed in all iron indices in patients with thalassemia compared with the healthy control group, except for TIBC, UIBC, and transferrin concentrations, which decreased in patients compared with the control group. Total carbonyl and MDA concentrations increased significantly (p<0.05) in patients with thalassemia compared with the healthy control group.

The Spearman correlation coefficients (ρ) between the iron indices and each of MDA and total carbonyl are presented in Table 2. The results showed that MDA and total carbonyl were only slightly dependent on the iron index parameters in patients with thalassemia, as indicated by the low coefficients, and this dependence was not significant. These results differ from those of Livrea et al. (15) and Naithani et al., (16) who reported that serum iron levels correlated with oxidative stress markers in patients with β-thalassemia major. Neither study excluded patients with a positive CRP test, as was done in the present work; the correlations and large changes in the measured parameters reported there may therefore have been due to inflammation rather than to the thalassemia disorder itself. Furthermore, in the study of Livrea et al., (15) the patients were mostly adults (mean age 21 ± 10 years) of both sexes, more than half were positive for hepatitis C virus, and many had other diseases, so the correlations and other results may reflect differences in patient criteria. Another important factor is the low number of patients involved in the studies of Livrea et al., (15) Naithani et al. (16) and Dasgupta et al. (17) compared with the larger number of patients in the present research; together with the inclusion of patients with positive CRP tests and differences in ethnicity, this may have led to the different results and correlations.

The iron indices in patients with thalassemia indicated excess iron concentrations. In this condition, iron initially stored as ferritin is deposited in organs as hemosiderin, a toxic substance affecting tissues at least partially by inducing oxidative stress. (18) Iron induces toxicity when the body fails to deal safely with this toxic element. In thalassemia major, excess iron concentration is the outcome of multiple blood transfusions and an inappropriate increase in iron absorption associated with ineffective erythropoiesis. The outpouring of catabolic iron exceeds the iron-carrying capacity of transferrin, resulting in the presence of non-transferrin-bound iron that catalyzes the formation of free radicals; this condition causes oxidative stress and damage to mitochondria, lysosomes, lipid membranes, proteins, and DNA. (19) Hemolytic anemia in thalassemia is also caused by unstable hemoglobin variants, (20) and heme iron released from hemolysis has been consistently associated with an increased risk of coronary heart disease and cardiovascular mortality. (21) Another potential cause of hyperferritinemia is anoxia, a consequence of the low hemoglobin levels in the blood of patients with thalassemia; ferritin concentration increases in response to stresses such as anoxia. (22)

MDA content in patients with thalassemia showed a significant 1.3-fold increase (p<0.05) compared with that of the control group (Table 1). This is in agreement with the results of Patrick et al., (23) in which MDA content was significantly increased, by 1.8-fold, in patients with thalassemia relative to the control group. This result indicates an increase in lipid peroxidation because of increased oxidative stress in patients with thalassemia. MDA, a product of lipid peroxidation, and protein carbonyls, which represent the oxidation of circulating proteins, are both increased in patients with thalassemia. (24)
Free extracellular iron and the intracellular iron species that have been identified in thalassemic blood cells are responsible for oxidative stress: by reacting with hydrogen peroxide, they form deleterious hydroxyl radicals that damage cellular macromolecules, catalyzing oxygen radical formation and thereby straining the antioxidant capacity of cells. (25) Accordingly, iron chelation, by eliminating free-iron species, functions as an antioxidant strategy. In addition, antioxidants such as vitamin E and polyphenols, which are also capable of ameliorating increased oxidative stress parameters, may, along with iron chelators, substantially improve the pathophysiological characteristics of hemolytic anemia, particularly thalassemia. (26) Nevertheless, these findings require further investigation and discussion on the basis of the principles of oxidative stress and the mechanism of iron toxicity in patients with thalassemia.

The correlation coefficients (ρ) in Table 2 showed no statistically significant correlation between iron status parameters and MDA or total carbonyl. A similar co-occurrence of oxidative stress with excess iron concentration in thalassemia has been described in many studies; however, the direct correlation between iron status parameters and MDA or total carbonyl has not been well examined, and in the present study this correlation was statistically analyzed. Some studies have reported that excess iron concentrations can stimulate lipid peroxidation, (27) which is a well-defined mechanism of cellular damage in animals. Increased levels of erythrocyte free reactive iron and lipid peroxidation end products are associated with low erythrocyte glutathione levels, indicating non-heme iron-mediated cellular damage in β-thalassemia. (28)

Many possible factors may explain the lack of correlation between the iron parameters and MDA and total carbonyl. Continuous blood transfusion in patients with thalassemia affects the concentrations of blood components, thereby leading to variable findings in MDA and total carbonyl contents. Considering the duration of chronic transfusion, Kadiiska et al. (29) showed that increased MDA is probably a real-time marker of oxidative injury and correlates more significantly with cumulative tissue injury. (30) Increased plasma MDA levels in thalassemia may result from several mechanisms. First, plasma MDA can be elevated in patients with thalassemia because it may depend on the amount of circulating erythroid precursors and peripheral blood erythrocytes that contain a high density of unpaired α-hemoglobin chains. (31) Furthermore, plasma MDA may be increased in thalassemia because of peroxidation in tissues, as a result of which MDA leaks into the plasma. Similar to alanine aminotransferase, plasma MDA may increase partly as a result of possible liver lipid peroxidation and leakage into the plasma. MDA may also leak from the liver, as indicated by the strong correlation between MDA and liver iron concentration in multivariate analysis. (23) Second, many oxidative stress reactions occur intracellularly, and some of the molecular products of oxidation may enter the blood and be transported into other tissues, where these products participate in more harmful events or become degraded. Hence, the quantity of oxidative stress-inducing compounds in tissues is a more accurate measure than that in the blood. Increased serum iron content affects the concentration of oxidative stress-inducing compounds in the body. (27)
However, studies have indicated no direct association between serum iron parameters and MDA or total carbonyl concentrations. In one study, the strongest predictor of increased MDA in patients with thalassemia was liver iron concentration; (23) hence, estimated liver iron is more relevant than serum iron when investigating the effect of increased iron status on MDA levels. Iron can potentially promote cellular damage by causing the formation of highly reactive hydroxyl radicals and inducing unsaturated lipid peroxidation. (32) Iron imbalance and accumulation have been implicated in the oxidative injury associated with many diseases, including β-thalassemia, via multiple mechanisms. (33) Oxidative stress is associated with an increase in the production of oxidizing species and with a significant decrease in antioxidant-defense molecules, such as glutathione. (34) The effects of oxidative stress depend on the magnitude of these changes, considering that a cell can overcome small perturbations and regain normal status.

Conclusions
This study found a lack of correlation between oxidative stress, represented by increased serum total carbonyl and MDA, and non-labile iron status parameters in patients with thalassemia major. This result indicates that different mechanisms are involved in the injuries caused by labile iron and in the formation of oxidation end products.
Patterns and Features of the Formation of New Oil and Gas Provinces in the Russian Arctic (on the Example of the Nenets Autonomous Okrug)

The article presents results from the development of the Strategy for the socioeconomic development of the Nenets Autonomous Okrug until 2030, in which the authors took part. The fundamental difference between the latest oil industry development of the Nenets Autonomous Okrug and the Soviet model is substantiated, as is the closeness of its development algorithms, schemes and actors to the NAO's foreign analogues. The most important results of the transition to the new corporate-state development scheme were a new, more economical transport logistics of cargo delivery and oil export; the transition from a stationary scheme to a rotational (shift-based) method of organizing work; and, at the same time, a wasteful duplication of infrastructure efforts by each large company. A conclusion is drawn on the fundamental role of innovation, including in the field of telecommunications, for the success of the new development model of the Nenets Autonomous Okrug.

Introduction
In the 1990s, the Nenets Autonomous Okrug became a unique platform for pioneering, from-scratch oil development on new, market principles and by new, non-state actors [1-5]. The efforts of subsoil users who came to the oil production assets of the Nenets Autonomous Okrug were constrained by the total absence of "export" infrastructure; this is why risky experiments with various import-export schemes were carried out here: export of products over winter roads, helicopter delivery of material and technical cargo, and so on. In the second half of the 1990s, Lukoil came to the Nenets Autonomous Okrug and gradually became the largest actor in local subsoil use. Strengthening its influence and economic role in the subsoil use of the roadless Nenets Autonomous Okrug was possible only by developing effective transport routes for the export of oil. Lukoil proposed a systematic solution to this problem, the "northern route", relying on the new Varandey marine terminal and the new ice-class tanker fleet that it then began to build. The implementation of the new northern oil transportation route became a key factor in Lukoil's continued dominance (about 45%) in the oil production of the Nenets Autonomous Okrug. The Kharyaginskoye field, located in the south, relies on the traditional southern export scheme, with access to the Transneft pipeline network in the Komi Republic.

The weakness of state regulation in the 1990s, on the one hand, and the objectively exceptional diversity of fields in the Nenets Autonomous Okrug, on the other, led to fragmented solutions to the problem of transporting the produced hydrocarbons to foreign markets, and partly to duplication of the pipeline network. If the development of the region's oil resources had begun earlier, in Soviet times, the transport scheme for exporting hydrocarbons from the region would have been more centralized, unified and monopolistic, and all local subsoil users would have been connected to it. At the present, state-corporate stage of development, which began with the arrival of the state corporations Rosneft and Gazpromneft (which had taken shape by the 2000s), competition for access to the hydrocarbon export system, and the struggle to reduce export costs, were added to the traditional competition for new license areas.
The hydrocarbon export problem could follow one of two scenarios: either an agreement would be reached on prices for the use of the infrastructure created by Lukoil, or each large company would create its own alternative infrastructure. Events unfolded according to the second scenario. A "drama" developed between the halted expansion of Lukoil, which had managed to create an integrated and innovative production and transportation system here, on the one hand, and, on the other, the activities of the state-owned corporations Rosneft and Gazpromneft, which were clearly not satisfied with their role as latecomers to the division of the Nenets natural assets and therefore put forward initiatives for new infrastructure superprojects based on the investment resources of the federal budget, such as the pipeline to the port of Indiga or the Barentskomur railway. The new players have had obvious successes here: Gazpromneft was able to implement the very difficult project of developing the Prirazlomnoe offshore field, while Rosneft, as a result of acquiring the medium and small companies that worked here, was able to form a highly concentrated corporate production and transport cluster in the south of the okrug; its Vala Gamburtseva fields are connected by an independent pipeline to the Transneft infrastructure.

Relevance and scientific significance of the issue
Precisely because the Nenets Autonomous Okrug began to be actively developed industrially in the new era of market reforms, it can be called the most "foreign-like" part of the Russian Arctic. The overall configuration, the main territorial structures and the drivers of the new oil industry development here were formed according to an algorithm similar to that of the Canadian, American, and Northern European Arctic. Under these conditions, the search for foreign analogues to define the features of the new development is very constructive. The closest analogous regions abroad are the territories of Nunavut, the Northwest Territories and Yukon (Canada), the county of Finnmark (Norway), the province of Lapland (Finland), Greenland (Denmark) and the state of Alaska (USA) (table 1) [6-9]. Oil and gas production plays a huge role in the economy of the Nenets Autonomous Okrug, and hydrocarbons are similarly important in the Northwest Territories and Alaska. The extraction of mineral raw materials likewise makes a significant contribution to the economic life of Nunavut and the Yukon, although in their case the leading industry is mining (mainly of precious and other metals). However, the share of the extractive industry in GRP is largest in the Nenets Autonomous Okrug (table 2), and industry also prevails in the structure of the region's gross product. Among the foreign regions, Nunavut, like the Nenets Autonomous Okrug, has the highest share of mining in GRP. The settlement system of the Nenets Autonomous Okrug paradoxically turns out to be typical of the foreign Arctic regions with the most severe natural conditions: the northern territories of Canada, Greenland, and the Faroe Islands.
For example, the administrative centers of two of the three northern territories of Canada likewise concentrate more than half of the population, contrasting with extremely sparsely populated spaces with occasional indigenous villages, shift camps or military bases across the rest of each administrative unit, and with an extremely poorly developed transport network in the sparsely populated areas (see table 3); in Canada, moreover, a population of the same size as that of the Nenets Autonomous Okrug is dispersed over a much larger area. The local history of the pioneering oil industry development of the Nenets Autonomous Okrug in the 1990s-2010s thus fits into the general world picture of the development of similar oil and gas provinces, with a similar algorithm for the formation of new shift camps, duplication of the infrastructure efforts of resource corporations, and numerous problems in the interaction between resource companies and the territories in which they operate.

Theoretical (problem) part
The new model of oil development, which was first tested in the Russian Arctic in the Nenets Autonomous Okrug, is distinguished by:
- the "external" registration of most companies operating in the okrug (hence an institutional remoteness that reduces the possibility of effective communication for coordinating the interests of the okrug and the companies, as well as a system of distribution of tax revenues that is disadvantageous for the okrug);
- competition among oil and gas companies (not only for license areas, but also for the possibility of efficient export of the produced hydrocarbons);
- a lack of coordination among the main subjects of the resource economy.

The multi-actor character of the economic development of the Nenets Okrug leads to duplication of the functions of individual infrastructure facilities, especially in the field of hydrocarbon production and transportation. For example, oil is pumped almost in parallel, but in opposite directions, through the Kharyaga-Varandey oil pipeline (Lukoil, from south to north) and from the Vala Gamburtseva fields (Khasyreyskoye field) to the Bagansky field (Rosneft, from north to south), breaking the established structure of oil export: from the northern fields of the okrug through the Varandey terminal, and from the south through the Transneft pipeline network. Therefore, the problem of improving the efficiency of managing the territory of the Nenets Autonomous Okrug essentially boils down to establishing coordination of economic activities and the sharing of infrastructure between the authorities of the Nenets Autonomous Okrug, on the one hand, and the resource companies, on the other, as well as among individual companies with the mediation of the government of the Nenets Autonomous Okrug. A further feature is the contradiction between the corporate structure of the local economy and the vast undeveloped areas of the okrug through which each company needs to "break through" to the sales markets alone. From the point of view of the interests of the territory and of the possibilities for regulatory influence of the state on companies in the okrug, a transport scheme in which many corporations duplicate each other's efforts is unfavorable. Not only does it fail to strengthen the integrity of the economic space of the okrug; on the contrary, it fragments it even further, cuts the corporate sector out of the local economy, and turns that sector into a pure shift-work enclave, completely devoid of any social roots.
Another major contradiction, also characteristic of the foreign analogues, is that, even though all oil-producing territories are forced to address the problem of economic diversification, the mono-industry profile of the Nenets Autonomous Okrug has not decreased in recent years; on the contrary, it has increased significantly. Meanwhile, the dependence of the main economic and budgetary parameters of the Autonomous Okrug on oil production is already unprecedentedly high even for a resource region. Oil dependence determines the main problem of the Nenets Autonomous Okrug: the vulnerability of its welfare (gross product, income, investment, tax revenue) to changes in the oil market. The negative consequences of a price shock are exacerbated by the small size of the Nenets Autonomous Okrug, its small population and, therefore, its small domestic market, which deprives the okrug of the possibility of a rapid diversification maneuver.

Practical relevance, suggestions and results
A drop in oil and gas prices could make new projects unprofitable and lead to the curtailment of production at the fields with the highest production costs. The imposition of sanctions against Russian oil and gas companies affects their technological equipment. Other external factors for the okrug are the development of more active shipping along the Northern Sea Route (with increased demand for cargo transshipment and services), as well as possible global warming (fraught with a shortened service life of winter roads and the transformation of reindeer pasture landscapes) [11,12]. The policies of the oil companies themselves can also complicate the activities of subsoil users in the okrug. For example, a dispute arose between PJSC Lukoil and PJSC NK Rosneft over the price of oil transshipment through the Varandey terminal, which is owned by PJSC Lukoil. In the absence of an alternative way of transporting oil from the R. Trebs and A. Titov fields, PJSC NK Rosneft is forced to accept high transshipment prices, which raises the cost of the raw materials [13] and, accordingly, makes production less profitable for PJSC NK Rosneft. Such a conflict can lead to a decrease in production, a reduction in tax revenues to the NAO budget and, consequently, a decline in the economic welfare of the okrug. The most significant internal factors in the long-term socio-economic development of the region are the state of the resource base; the low degree of development of the territory and of the transport infrastructure; problems of logistic efficiency, informatization and energy efficiency; the demographic situation and insufficient human resources; the problems of indigenous peoples; environmental degradation, including the condition of reindeer pastures; a relatively high tourism potential; and a low level of food security. At the moment, hydrocarbon production in the Nenets Autonomous Okrug is growing, and reserves in the okrug are expected to last for another half-century [14]; this factor secures the okrug's development in the long term. However, the NAO faces an acute problem of underdeveloped transport infrastructure. The Nenets Autonomous Okrug is among the regions of the Russian Federation with the lowest density of paved roads: 1.3 km of roads per 1,000 km² as of January 1, 2017 (lower only in the Chukotka Autonomous Okrug, with 1 km per 1,000 km²) [15].
The eastern part of the okrug is practically "empty" from the point of view of official statistics; in fact, it holds a rather dense network of shift camps, winter roads, all-terrain (tractor) roads, and energy infrastructure. The almost complete lack of information about this corporate "NAO-2", and even more so the lack of coordination between the development planning processes of individual companies and of the okrug as a whole, significantly complicates the management of the region's socio-economic development. In the Arctic, it would be advisable not to duplicate certain infrastructure objects (roads, energy and other life-support facilities, medical services, etc.) but to bring them into joint use by corporate employees and the okrug's population through public-private partnership (PPP) and other agreements. The practice of "parallel worlds" of corporate and okrug networks of territorial development (not to mention the practice of mismatched and duplicated infrastructure built by different companies) cannot be effective; optimal planning of the territorial development of the okrug should include the "corporate" segment.

Conclusions
The main challenges facing the Nenets Autonomous Okrug are to realize the opportunities, and eliminate the costs, created by the prevailing external and internal factors in the development of the region. Large hydrocarbon reserves, together with the volatility of oil and gas prices and the complexity and high cost of northern fuel delivery in an undeveloped transport system, are significant factors favoring the development of processing industries (primarily gas chemistry). An important role in improving the efficiency of the okrug's economy as a whole will be played by the coordination of companies and the optimization of the hydrocarbon transportation system. In many respects, the success of the okrug's future economic development will also depend on its ability to build a working cluster in Naryan-Mar for the information and technological support of the further development of the Arctic, primarily of offshore projects, and to establish the provision of oil and gas services and the training of specialized personnel. One of the key challenges will be the ability of the okrug and the key subjects of its development to absorb new technologies that reduce costs and ensure efficient management in the Arctic: providing the entire territory of the okrug with high-quality cellular communications and Internet access (which raises the problem of making satellite communications cheaper, since the use of fiber-optic networks is not always effective for remote settlements), and introducing energy-efficient housing technologies and autonomous energy supply technologies. The main threats to long-term development are the negative factors already mentioned: the instability of oil prices and falling demand for Russian oil; a decrease in the investment attractiveness of the region due to incomplete legislation, underdeveloped infrastructure and conflicts of interest among companies; and the depletion of raw materials (in the very long term). With the growth of hydrocarbon production in the western part of the Arctic shelf, the importance of the Nenets Autonomous Okrug and of the state of its infrastructure will only increase. It is therefore necessary to coordinate the development of port and other transport infrastructure. Given the lack of trunk pipelines, the creation of marine transport infrastructure facilities is the most promising avenue for the development of the regional economy.
Zooplankton diversity analysis through single-gene sequencing of a community sample

Background: Oceans cover more than 70% of the earth's surface and are critical for the homeostasis of the environment. Among the components of the ocean ecosystem, zooplankton play vital roles in energy and matter transfer through the system. Despite their importance, understanding of zooplankton biodiversity is limited because of their fragile nature, small body size, and the large number of species from various taxonomic phyla. Here we present the results of single-gene zooplankton community analysis using a method that determines a large number of mitochondrial COI gene sequences from a bulk zooplankton sample. This approach will enable us to estimate the species richness of almost the entire zooplankton community.

Results: A sample was collected from a depth of 721 m to the surface in the western equatorial Pacific off Pohnpei Island, Micronesia, with a plankton net equipped with a 2-m² mouth opening. A total of 1,336 mitochondrial COI gene sequences were determined from the cDNA library made from the sample. From the determined sequences, the occurrence of 189 species of zooplankton was estimated. BLASTN search results showed high degrees of similarity (>98%) between the query and database for 10 species, including holozooplankton and merozooplankton.

Conclusion: In conjunction with the Census of Marine Zooplankton and Barcode of Life projects, single-gene zooplankton community analysis will be a powerful tool for estimating the species richness of zooplankton communities.

Background
The fauna of the world's oceans is dominated in terms of abundance and biomass by drifting organisms collectively referred to as plankton. Plankton occur in all marine waters, throughout all depths, and, for many species, across widespread biogeographical regions. Zooplankton (planktonic animals) support many major fisheries and mediate fluxes of nutrients and chemical elements essential to life on earth. Despite more than a century of sampling the oceans, a comprehensive understanding of zooplankton biodiversity has eluded oceanographers because of the fragile nature and small body size of these organisms, as well as the large number of species from various taxonomic phyla [1,2]. For many zooplankton groups, there are longstanding and unresolved questions of species identification, systematic relationships, genetic diversity, and biogeography. In light of this, we are working toward a taxonomically comprehensive assessment of zooplankton biodiversity throughout the world's oceans through the international project Census of Marine Zooplankton [3].

Results and Discussion
A zooplankton sample was collected off Pohnpei Island, Micronesia (6°16'N, 162°09'E). A cDNA mitochondrial COI (cytochrome c oxidase subunit I) gene library was constructed from the sample, and 1,336 inserts containing the mitochondrial COI gene were randomly sequenced [DDBJ: AB332438-AB333773]. A cDNA rather than a gDNA library was constructed to remove pseudogene sequences from the analysis [4]. The mismatch distribution of these 1,336 sequences revealed a high frequency of sequence pairs with very small (<0.03) genetic distances (Figure 1). These sequence pairs with very small genetic distances were assumed to have originated from the same species (discussed below). A second peak was observed around a distance of about 0.14 (from 0.13 to 0.16), and most of these counts were comparisons between two phylogroups in the Copepoda clade (Figure 2, Clades 1 and 2).
The frequencies between these peaks were very low, with the minimum frequency (106 counts) observed in the range between 0.12 and 0.13. Based on this observation, we set the criterion that if the genetic distance between two sequences was greater than 0.12, the sequences were derived from different species, whereas if the genetic distance was less than 0.12, the sequences were considered to be derived from the same species. Genetic distances of the mitochondrial COI gene have been reported for various animal taxa (mainly Vertebrata and Arthropoda), and the general ranges of intra- and interspecies distances are 0.0001-0.05 and 0.04-0.21, respectively (Kimura two-parameter model) [5]. Although it is not a conclusive value for animal species definition, we tentatively took a genetic distance of 0.12 as the boundary between intra- and interspecies distances in this study; this value lies within the range of interspecific genetic distances reported previously [5].

The rarefaction curve was estimated using the criterion of a genetic distance of 0.12 (Figure 3) using DOTUR [6]. Although the number of observed OTUs was still growing at 1,300 sequenced colonies, the rate of increase of the curve declined gradually after around 100 sequenced colonies. Figure 4 shows the relationships between species richness estimated by Chao1, rarefaction, and the percentage of sequence differences used for the estimation. The figure shows gradual changes of Chao1 and rarefaction around a sequence difference of 0.12. As the distance of 0.12 is not a conclusive value for species definition, caution is required in the further use of this value.

We conducted BLASTN searches against the GenBank non-redundant database using all sequences derived from the analysis as queries. Among the sequences, those that fulfilled the criteria (BLAST score and similarity greater than 100 and 83%, respectively) were assigned to 11 taxonomic groups (Figure 2). Several of the assigned sequences showed very high degrees of nucleotide similarity to known species, including Copepoda (Candacia; Table 1). The very high degrees of similarity indicated that these species were actually collected in our sampling. Among them, one vertebrate species, Coryphaena hippurus, and two benthic gastropod species, Strombus mutabilis and Strombus wilsoni, were sampled as non-holozooplanktonic animals in the dispersal phase of their life history, as pelagic larvae. These observations indicated that application of this analysis enables the estimation of larval dispersal, which is difficult to achieve based on morphological observations.

Figure 1. Mismatch distributions of the pairwise genetic distances for the 1,336 mitochondrial COI sequences.
Figure 2. Unrooted neighbour-joining tree of the 1,336 mitochondrial COI gene sequences. Numbers beside internal branches indicate bootstrap values (>90%) obtained for 1,000 replicates (indicated for major branches only). Each dot represents a single mitochondrial COI gene sequence. The colour of each dot represents the higher taxonomic group denoted in the figure, with the criterion of a score and similarity of more than 100 and 83%, respectively, in the BLAST results.

Figure 2 shows the unrooted neighbour-joining tree of the 1,336 zooplankton COI sequences. Overall, each taxonomic group formed a single cluster, including Gastropoda, Chaetognatha, Euphausiacea, Decapoda, Vertebrata, Copepoda, and Cephalopoda.
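The species-delimitation rule above (two sequences are treated as the same putative species when their pairwise distance is below 0.12) can be sketched as a simple clustering pass. The sketch below uses single-linkage grouping, which is our illustrative choice; the paper's own OTU counts come from DOTUR rather than from this code, and the 4 × 4 distance matrix is invented:

```python
# Minimal sketch: group sequences into putative species (OTUs) by
# single-linkage clustering at the 0.12 pairwise-distance cutoff.
# The distance matrix is a toy example, not study data.

import itertools

dist = [
    [0.00, 0.02, 0.15, 0.14],
    [0.02, 0.00, 0.16, 0.15],
    [0.15, 0.16, 0.00, 0.03],
    [0.14, 0.15, 0.03, 0.00],
]
CUTOFF = 0.12

parent = list(range(len(dist)))          # union-find over sequence indices

def find(i: int) -> int:
    while parent[i] != i:
        parent[i] = parent[parent[i]]    # path halving
        i = parent[i]
    return i

for i, j in itertools.combinations(range(len(dist)), 2):
    if dist[i][j] < CUTOFF:              # below cutoff: same putative species
        parent[find(i)] = find(j)

otus = {find(i) for i in range(len(dist))}
print(f"{len(otus)} putative species at cutoff {CUTOFF}")  # -> 2
```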
There were also two cases in which the taxonomic assignment did not work well. The first was the occurrence of Hexapoda in various clusters; hexapods rarely occur in the ocean environment, except for pleustonic insects of the genus Halobates. The second was the difficulty of assigning taxonomic groups because of low BLAST scores and similarities (coloured grey in Figure 2). The most plausible reason for these ambiguities is the paucity of mitochondrial COI sequences for some taxa in the DNA database. In general, the mitochondrial COI gene sequences in the DNA database are biased among taxa, and this bias was assumed to be the main reason for the occurrence of Hexapoda in our analysis. The most efficient solution for these problems will be the expansion of the zooplankton DNA barcode library, and it is hoped that the progress of the Barcode of Life project [7], in collaboration with the Census of Marine Zooplankton, will fill these gaps.

To our knowledge, the Discovery SOND cruise [8] is the only other attempt to date to estimate the species richness of a whole zooplankton community collected at a single site. In that series of studies [9-19], a total of 618 species of zooplankton were identified and counted in samples collected around the Canary Islands (Table 2). The extrapolated species richness (Chao1 [20]) of the present study was estimated as 188.90 (95% confidence interval, 156.79-255.60) using DOTUR [6]. Our results cannot be directly compared with the SOND cruise data because of differences in sampling effort between the two studies. In the SOND cruise, two primary types of sampling equipment were used: the Isaacs-Kidd midwater trawl and the N113H. Furthermore, about 76 vertically stratified zooplankton samples collected above 1,000 m were combined to estimate the occurrence of species [19]. In contrast, the present study was based on a single sample collected from a depth of 721 m to the surface. These differences in sampling effort may account for the differences in species richness between the SOND cruise and the present study. In addition, the lower species richness in the present study may be due to our experimental design. In the present study, after construction of the cDNA library from mRNA, the mitochondrial COI genes were amplified with "universal" (LCO1490 [21]) and polyT primers, so the mitochondrial COI gene sequences of some species may not have been amplified because of primer mismatch. Although the single-gene zooplankton community analysis approach is an efficient means of collecting sequence information, given the technical difficulties due to primer mismatch, further studies and the development of novel methodologies are required to gain a complete understanding of zooplankton diversity.

Figure 3. Rarefaction analysis of the 1,336 mitochondrial COI gene sequences.
Figure 4. Relationship between species richness estimated by Chao1, rarefaction, and the percentage sequence difference used for these estimations.

Conclusion
Although the estimation of species richness and community composition is among the most important outputs of single-gene zooplankton community analysis, these sequence data will be further utilised through the construction of a dedicated database.
We expect that the accumulation of additional marine animal mitochondrial COI gene sequence data in the barcode project will aid in further clarifying sequences from unknown species. Furthermore, the process of sequence assignment to particular species through database analysis indicated the occurrence of these species at the sampling site of the present study. We have now constructed a publicly accessible zooplankton community analysis database that can be searched using BLASTN [22]. With regard to the future of zooplankton community genetic analysis, the adoption of next-generation sequencing technology should enable researchers to read libraries deeply enough to estimate species richness without extrapolation [23,24]. We are currently expanding our sampling effort to all oceans to further understand zooplankton biodiversity.

Zooplankton sampling
The sample was collected off Pohnpei Island, Micronesia (6°16'N, 162°09'E). Collection was performed with a plankton net (ORI net [25]) with a 2-m² mouth opening and a 0.69-mm mesh aperture. After removal of large animals (more than about 4 cm at their largest measurement), the sample was split into two fractions: one was preserved in ethanol for barcode analysis, and the other was homogenised with TRIZol (Invitrogen) and kept at -80°C. A total wet volume of about 30 mL of zooplankton was collected and homogenised with 270 mL of TRIZol in this step.

Total RNA extraction and mRNA purification
In the laboratory, total RNA was extracted from the sample following the TRIZol protocol, followed by mRNA purification using Poly(A)Purist MAG (Ambion). A total of 9.6 mL of total RNA (aqueous phase) was further purified for mRNA in this step.

Mitochondrial COI gene library construction and sequence analysis
The purified mRNA was used as the template for the Creator SMART cDNA Library Construction Kit (BD Biosciences). Using this constructed cDNA library, we amplified mitochondrial COI genes using the COI universal primer (LCO1490) [21] and a polyT primer carrying restriction sites, which were further used to construct a mitochondrial COI gene library with the same kit. We then randomly analysed colonies obtained on agar plates.

BLASTN search and taxonomic assignment
The lengths of all obtained sequences were adjusted to 500 base pairs, and BLASTN [26] searches were performed. A BLASTN search against the NCBI non-redundant dataset was used to infer the species or higher taxonomic groups of the mitochondrial COI gene sequences determined in the present study. In the BLASTN result list, the species with the highest score was assigned to each sequence according to the following criteria. If the BLASTN score was 100 or more and the BLASTN similarity was 98% or more, the name of the resulting species was assigned to the sequence and listed in Table 1. If the BLASTN score was 100 or more and the BLASTN similarity was 83-98%, the name of the higher taxonomic group to which the resulting species belongs was assigned to the sequence, as shown in Figure 2. If the BLASTN score and similarity did not reach these criteria, 'unknown' was assigned to the sequence, coloured grey in Figure 2.
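The two-tier assignment rule just described is easy to state as code. A minimal sketch follows; the hit tuples are hypothetical examples, not actual BLAST output from this study:

```python
# Minimal sketch of the two-tier BLASTN assignment rule described above.

def assign(score: float, similarity: float, species: str, higher_taxon: str) -> str:
    if score >= 100 and similarity >= 98.0:
        return species                   # species-level assignment (Table 1)
    if score >= 100 and similarity >= 83.0:
        return higher_taxon              # higher-taxon assignment (Figure 2)
    return "unknown"                     # shown in grey in Figure 2

hits = [  # (score, % similarity, best-hit species, its higher taxon)
    (520.0, 99.1, "Coryphaena hippurus", "Vertebrata"),
    (150.0, 88.0, "Candacia sp.", "Copepoda"),
    (80.0, 95.0, "Halobates sp.", "Hexapoda"),
]
for h in hits:
    print(assign(*h))  # -> Coryphaena hippurus, Copepoda, unknown
```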
Removal of PCR recombination, mismatch distribution analysis, rarefaction curve analysis, and phylogenetic analysis
To remove sequences produced by PCR recombination, we manually applied a partial treeing approach [27] to the aligned dataset; although some programs and servers are available for related analyses, none worked appropriately for our analysis. Briefly, after the sequence alignment was adjusted using ClustalX [28], square distance matrices of both the left 100 and the right 100 base pairs of the aligned sequences were constructed in MEGA3.1 [29], and the total absolute deviation of each sequence in these matrices was calculated. As a result, we deleted one sequence that showed a very large deviation from the others. We assume this was not the only chimeric sequence that occurred in the analysis, but it was not possible to eliminate all PCR recombination sequences because of ambiguity. After removing the PCR recombination sequence from the analysis, we re-adjusted the alignment of the remaining 1,336 sequences using ClustalX. An unrooted phylogenetic tree was constructed using the neighbour-joining method with nucleotide p-distances (alignment gaps were completely deleted), as implemented in PAUP*4.10b [30]. The reliability of each tree node was assessed using the bootstrap method with 1,000 replicates. The mismatch distribution was estimated from the distance matrix. The distance matrix was also calculated using PHYLIP3.66 [31] and was further used for the rarefaction curve and Chao1 calculations using DOTUR [6].
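As a minimal sketch of the Chao1 estimator that DOTUR reports, the snippet below applies the standard formula Chao1 = S_obs + f1²/(2·f2), where f1 and f2 are the numbers of OTUs seen exactly once and exactly twice; the abundance counts are toy values, not the study's data:

```python
# Minimal sketch of the Chao1 richness estimator.

from collections import Counter

otu_abundances = [120, 45, 30, 7, 3, 2, 2, 1, 1, 1, 1]  # reads per OTU (toy)
freq = Counter(otu_abundances)
s_obs, f1, f2 = len(otu_abundances), freq[1], freq[2]

if f2 > 0:
    chao1 = s_obs + f1 * f1 / (2 * f2)
else:  # bias-corrected form when no doubletons are observed
    chao1 = s_obs + f1 * (f1 - 1) / 2

print(f"S_obs={s_obs}, Chao1={chao1:.1f}")  # -> S_obs=11, Chao1=15.0
```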
Changes in Sleep Patterns in Korean Early Adolescents During Sexual Maturation

Purpose: Teenagers' sleep patterns show physiological delays influenced by sexual maturation and other external time-related factors. However, Korean adolescents show differences in the onset of pubertal development and have shorter sleep durations than other adolescents worldwide. Therefore, we assessed sleep patterns and sexual maturation in Korean early adolescents to evaluate changes in sleep patterns in relation to sexual maturation in early adolescents with sleep deprivation.

Methods: From March to August 2017, we surveyed children aged 10 to 12 years in Seongnam (Seongnam Atopy Project). We evaluated items related to sleep and sexual maturation, assessed sleep duration and sleepiness scale scores, and analyzed the relationships of sleep parameters with sex, height, weight, and sexual maturation rating (SMR).

Results: In total, 620 children were included. Sleep duration was 8.63 ± 0.81 hours in boys and 8.40 ± 0.98 hours in girls. Sleep began at 11:00 PM ± 47 min in boys and 11:13 PM ± 1 h 6 min in girls, and ended at 7:38 AM ± 27 min in boys and 7:34 AM ± 27 min in girls. After adjusting for sex and standardized body mass index, bedtime was delayed as the SMR increased (mean delay for each rating increase, 0.251 hours; P = 0.001; 95% confidence interval [CI], 0.105 to 0.397). SMR did not influence the wake-up time, although sleep duration decreased as the SMR increased (mean decrease for each rating increase, 0.258 hours; P = 0.001; 95% CI, -0.403 to -0.114). The sleepiness scale scores showed no relationship with SMR.

Conclusion: Sleep patterns, especially sleep duration and bedtimes, change with sexual maturation in adolescents, who are vulnerable to sleep deprivation.

The interval between dim-light melatonin onset (DLMO) and sleep onset is also prolonged in adolescents [4]. Individuals at mature Tanner stages show increased sleep latency and delayed bedtime [1,5], and the sensitivity of the circadian system to light is increased in early puberty [6]. Similar to humans, numerous mammals also show physiological sleep delays in adolescence [7]. In combination with earlier school start times, this physiological delay in sleep can induce sleep deprivation in older adolescents. Sleep patterns can also be affected by numerous external factors, especially in teenagers. For example, teenagers in East Asia experience more extreme sleep deprivation than those in other parts of the world. Numerous studies have shown that teenagers in Asia have a sleep duration of 6 to 7 hours on weekdays [8], and sleep deprivation becomes more severe as they get older. This pattern of severe sleep deprivation has been attributed to social and academic demands, which are potent regulators of sleep patterns in school children. The age of onset of sexual development in Korean adolescents has been reported to differ from that of adolescents elsewhere. Moreover, compared with a study published in 2006 [9], a more recent study from 2020 reported an earlier onset of pubertal development in Korean adolescents [10]. However, studies on the relationship between Tanner stage and sleep have not been conducted in Korean adolescents. Therefore, considering the differences between the life patterns of East Asian adolescents and those of adolescents in other countries, we believe that the findings of this study will provide useful insights into the relationship between sexual development, represented by the Tanner stage, and sleep patterns in early adolescents, who often experience sleep deprivation.
Participants

This study surveyed children aged between 10 and 12 years who attended 11 elementary schools in Seongnam from March to August 2017 as part of the Seongnam Atopy Project (SAP). In the SAP, we surveyed the students' characteristics, body weight and height, date of birth, questionnaires about allergic and gastrointestinal disease, routine diet, environmental factors, feeding history, sleep, and sexual maturation illustrated with figures (Fig. 1). We selected questionnaire items to survey the participants' sleep patterns, sleepiness, anthropometric data, and baseline characteristics (Appendix 1). We asked the children's parents to complete the questionnaires prior to the physical examinations. The parents of 621 children agreed to participate in this study, and 620 children returned completed demographic questionnaires, including information regarding school year and sex. The physical examination, which included measurement of weight and height, was performed by a pediatrician and a well-trained technician. Body mass index (BMI) was calculated from the measured height and weight and standardized with reference to age and sex (BMI-z) [11,12]. Because sex and BMI are known from previous studies to affect sleep, they were used as adjustment variables.

Sleep duration and daytime sleepiness

Participants' usual sleep onset and wake times for the last 7 days were determined using direct questions. Sleep duration was computed from the sleep onset and wake times. Responses describing unusual times of sleep onset and waking, such as sleep onset in the afternoon, getting up in the evening, or sleep duration over 15 hours, were excluded. The degree of sleepiness was estimated using the Korean version of the Pediatric Daytime Sleepiness Scale (PDSS) [13]. The PDSS has eight questions, each of which is scored from 0 to 4 (total range, 0 to 32), and it assesses subjective sleepiness during classes, homework, daytime, and morning. A higher score indicates greater daytime sleepiness [13], and a score of 15 or more in the age group of 14 to 19 years indicates excessive sleepiness [14].

Sexual maturity

The Tanner stage is a common parameter for evaluating children's pubertal development, which is classified using the sexual maturation rating (SMR) scale [15]. Parents answered questionnaires about their children's sexual maturity, which included pictures with serial stages of sexual maturity. The series of pictures outlined the degrees of development of pubic hair and breasts in girls and of pubic hair, penis, and testes in boys. On the basis of the responses, for boys, an SMR of 2 implied visible signs of testicular enlargement or development of pubic hair; an SMR of 3 indicated penile growth or further development of pubic hair; and an SMR of 4 indicated increased testicular volume or a distributed pattern of pubic hair. For girls, an SMR of 2 indicated the presence of breast buds or development of pubic hair, and SMRs of 3 to 5 indicated progressive patterns of pubic hair and breast development. When the SMR selected for the two queried characteristics differed, the average rating was used; in cases with only one response to the queries regarding sexual development, the SMR was classified on the basis of that response.

Ethics statement

The survey was approved by the Institutional Review Board of CHA Bundang Medical Center (IRB No. 2017-04-049). Written informed consent was obtained from the parents or caregivers of all participants.
Statistical analysis

Statistical analyses were performed using IBM SPSS Statistics version 23.0 (IBM Corp., Armonk, NY, USA) and MS Excel version 2203 (Microsoft, Redmond, WA, USA). Frequencies and continuous variables were compared using the chi-square test and analysis of variance, respectively. Multiple linear regression models were used to estimate adjusted differences and 95% confidence intervals (CIs), with adjustments for sex and BMI-z. Sex and BMI have previously been reported as factors that influence sleep patterns and obstructive sleep apnea syndrome [15,16]. Each sleep-related variable was analyzed in a model with SMR, sex, and BMI-z. Statistical significance was defined as P < 0.05.

Sample characteristics

We evaluated the findings for 620 children (318 boys and 302 girls) who provided basic data such as sex and grade/school year. The mean ages were 11.51 ± 0.61 years for boys and 11.48 ± 0.90 years for girls.

Sleep patterns and sleepiness

We investigated sleep patterns and sleepiness using the Korean version of the PDSS. The mean sleep duration was 8.63 ± 0.81 hours for boys and 8.40 ± 0.98 hours for girls, as calculated based on the bedtime (PM 11:00 ± 0:47 for boys and PM 11:13 ± 1:06 for girls) and wake time (AM 7:38 ± 0:27 for boys and AM 7:34 ± 0:27 for girls). The average PDSS score was 11.4 ± 5.02 for boys and 11.65 ± 5.24 for girls.

Sexual maturity

The children were classified as SMR1, SMR2, SMR3, and SMR4 (Table 1). Table 2 presents the sleep duration data for the 381 children who answered the survey on sleep patterns. Sleep duration in adolescents was inversely proportional to the SMR: 8.65 ± 0.81 hours/night in SMR1, 8.51 ± 0.80 hours/night in SMR2, 7.89 ± 1.02 hours/night in SMR3, and 7.67 ± 1.26 hours/night in SMR4 (Table 2). Similarly, the bedtime was PM 10:58 ± 0:49 in SMR1, PM 11:07 ± 0:48 in SMR2, PM 11:42 ± 1:02 in SMR3, and PM 11:55 ± 0:52 in SMR4. The wake times were similar and did not differ significantly among participants with different SMRs (P = 0.616). The scores on the Korean version of the PDSS tended to increase with the SMR, but the differences were not significant: 11.44 ± 5.04 in SMR1, 11.34 ± 5.10 in SMR2, 12.05 ± 5.50 in SMR3, and 14.50 ± 3.99 in SMR4 (P = 0.423). Even when the differences were analyzed based on cutoffs of 15 (the cutoff value for ages 14 to 19 years) [14] and 11 (the average PDSS score), no statistically significant results were obtained.

Sleep duration and sleepiness in relation to SMR

As shown in Table 3, after adjusting for sex and BMI-z, each rating increase in the SMR resulted in a delay of 0.251 hours in bedtime (95% CI, 0.105 to 0.397 hours; P = 0.001), no significant change in the wake time (P = 0.817), a reduction of 0.258 hours in sleep duration (95% CI, −0.403 to −0.114 hours; P = 0.001), and no significant change in the PDSS score (P = 0.310).

Discussion

This study evaluated sleep patterns according to sexual maturation during early puberty in adolescents, who often sleep less than the recommended sleep duration [16]. The sex- and BMI-adjusted findings suggested that sleep onset was delayed and sleep duration decreased with the development of sexual maturity. However, daytime sleepiness showed no significant changes related to SMR. This study is the first to investigate the effect of sexual maturation on sleep in Asian adolescents, who often experience sleep deprivation.
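For concreteness, the adjusted differences in Table 3 correspond to coefficients of a model of the following form. This is a minimal sketch, not the study's actual SPSS workflow, and the data-frame and column names (sleep_duration, smr, sex, bmi_z) are hypothetical placeholders:

```python
# Minimal sketch of the adjusted analysis: regress a sleep variable on
# SMR, adjusting for sex and standardized BMI, as in Table 3.
import pandas as pd
import statsmodels.formula.api as smf

def smr_effect(df: pd.DataFrame, outcome: str):
    """Fit outcome ~ SMR + sex + BMI-z and return the per-rating SMR
    coefficient, its 95% CI, and its P value."""
    model = smf.ols(f"{outcome} ~ smr + sex + bmi_z", data=df).fit()
    coef = model.params["smr"]                   # mean change per rating
    ci_lo, ci_hi = model.conf_int().loc["smr"]   # 95% confidence interval
    return coef, (ci_lo, ci_hi), model.pvalues["smr"]

# On data like the study's, smr_effect(df, "sleep_duration") would yield
# a coefficient around -0.258 hours per SMR rating increase.
```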
Several previous studies have reported changes in sleep duration in adolescence. Rutter et al. [17] reported that sleep duration decreased according to the Tanner stage, after adjusting the findings for other confounding factors. Our findings showed that the total sleep duration per day decreased by approximately 0.258 hours for each stage increase in the SMR. Although this study had fewer adolescents with a mature SMR, it showed a pattern of reduction in sleep duration similar to that of other studies (0.27 to 0.33 hours per Tanner stage) [17,18]. However, a few studies did not observe significant differences in sleep duration in relation to the Tanner stage [19]. The reasons underlying these discrepancies across different studies are unclear.

Our findings also showed delayed sleep onset as the SMR increased. Delayed sleep onset manifests as a phase delay in the circadian timing system. Although such phase delays are known to be associated with pubertal and adolescent development, the underlying mechanism is unknown, and several attempts have been made to identify it. The intrinsic period of the circadian timing system may slow through puberty in adolescence, thereby delaying the circadian phase and sleep onset. This may be due to the increased sensitivity of the circadian system to light [6]. Another study reported that human adolescents show a phase delay in DLMO according to age [4] and sexual maturation [5]. Delayed sleep onset in adolescence may also be caused by increased sleep latency [1,6]. Changes in phase-dependent sensitivity to light may cause phase delays, with a strengthened delayed response to evening light and a weakened advance response to morning light. Because homeostatic sleep pressure accumulates faster in younger adolescents than in older ones, sleep onset after DLMO occurs sooner in younger adolescents [4]; with increasing age, sleep onset falls progressively later after DLMO. Our study also showed that sleep onset was delayed by approximately 0.25 hours for each rating increase in SMR, supporting the findings of previous studies.

We did not observe significant differences in sleepiness according to the SMR. Delayed sleep onset causes a reduction in total sleep duration when the wake time is fixed, and various studies have reported greater sleepiness in individuals with a higher SMR [20]. Carskadon et al. [19] reported that teenagers in the higher SMR group were sleepier and presented shorter sleep latencies. Another study also showed that the prevalence of excessive daytime sleepiness increased with advanced pubertal maturation [21]; however, a longitudinal cohort study reported that subjective sleepiness was more closely related to age than to pubertal development [22]. Our study included early adolescents with the same school hours, which are known to be one of the most influential factors for sleep [23]. Therefore, we suggest that our findings regarding the effect of sexual maturation on sleepiness in early adolescents are more reliable than those of previous studies.

Sleep deprivation may blunt the effects of sexual maturation on teenagers' sleepiness. Multiple previous studies have reported that Korean and East Asian teenagers have severely limited sleep due to academic demands [23]. These patterns have also been observed in elementary school students [8,13]. Age has been associated with sleepiness in various studies [13,18,20]. Moreover, age or school year can influence sleep duration through the school start time. Under conditions of sleep deprivation, age and school start time could have a greater influence on sleepiness than sexual maturation.
This study had several limitations. Although the inclusion of children with SMR1, SMR2, and SMR3 allowed us to evaluate the effects of sexual maturation in the early stages of puberty, the scarcity of participants at later stages may have led to an underestimation of the effect of the later SMR stages. In addition, we did not analyze the adolescents' chronotype (i.e., eveningness or morningness), which is also a factor influencing sleepiness in adolescents [24]. Moreover, we were not able to observe differences in the wake time according to the SMR, and this survey was conducted in a single season with participants who had similar school start times.

In conclusion, sleep onset is delayed and sleep duration decreases with sexual maturation. However, puberty-related changes in sleepiness were not apparent in these early adolescents, who often experience sleep deprivation. Understanding the physiological delays in sleep related to puberty will be helpful for improving teenagers' quality of life.

Conflicts of interest

No potential conflict of interest relevant to this article was reported.
2022-06-24T15:06:23.125Z
2022-06-22T00:00:00.000
{ "year": 2022, "sha1": "8e02d345ff7a95313202aa91a225d3a6ce9e0a8c", "oa_license": "CCBYNC", "oa_url": "https://www.annchildneurol.org/upload/pdf/acn-2022-00080.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "131b476ec04ac172511f8ef830364df1688a79b2", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
235265785
pes2o/s2orc
v3-fos-license
Onset of space-charge effects in strong-field photocurrents from nanometric needle tips

Strong-field photoemission from nanostructures and the associated temporally modulated currents play a key role in the development of ultrafast vacuum optoelectronics. Optical light fields could push their operation bandwidth into the petahertz domain. A critical aspect for their functionality in the context of applications is the role of charge interactions, including space-charge effects. Here, we investigated the photoemission and photocurrents from nanometric tungsten needle tips exposed to carrier-envelope phase-controlled few-cycle laser fields. We report a characteristic step-wise increase in the intensity-rescaled cutoff energies of emitted electrons beyond a certain intensity value. By comparison with simulations, we identify this feature as the onset of charge-interaction-dominated photoemission dynamics. Our results are anticipated to be relevant also for the strong-field photoemission from other nanostructures, including photoemission from plasmonic nano-bowtie antennas used in carrier-envelope phase detection and for PHz-scale devices.

I. INTRODUCTION

The interaction of light with nanostructures exhibits unique features [1]. In plasmonic materials, the excitation of collective oscillations of the electrons with respect to the lattice (i.e., plasmons) leads to a local near-field that can be enhanced by up to several orders of magnitude in intensity with respect to the incident field. At the same time, the enhanced fields are confined to the sharp geometrical features of the nanostructure, well below the diffraction limit of light. These nanoplasmonic phenomena have found a vast range of applications (see Ref. [1] for an overview), including biochemical sensing and detection [2], near-field enhanced optical microscopy with nanometer resolution (nanoscopy) [3], surface-enhanced Raman spectroscopy [4] with sensitivity down to the single-molecule level [5], thermal cancer treatment [6], and waveguiding of optical energy on the nanometer scale [7].

The demonstration that strong-field photoemission from nanometric needle tips [8,9] and nanospheres can be controlled by the electric field of the driving laser [10,11], and that the electron dynamics may be strongly modified by the near-field decay [12,13], has led to the development of attosecond nanophysics as an independent research field [14-17]. Nanotips illuminated by few-cycle lasers have been used as sources of ultrashort electron pulses for electron microscopy [18-20] or for the spatially resolved detection of the carrier-envelope phase (CEP) of a laser beam [21]. Significant progress has been made in the direction of lightwave electronics [22-24], i.e., electronics driven by optical fields on the PHz scale. Based on plasmonic nano-bowtie antennas, CEP detection using currents [25-27], a potential PHz-scale diode [28], and recently on-chip field-resolved sampling of optical waveforms [29] have been demonstrated.

It is typically argued that the small dimension of the nano-emission site leads to highly divergent trajectories, justifying the neglect of charge interaction [9,30,31]. On the other hand, for certain parameter ranges the electron kinetic energy spectrum from nanometric needle tips with large apex radii (r ∼ 100 nm) seems to be completely dominated by Coulomb interaction [32,33]. Clearly, the minuscule emission area makes nanometric needle tips highly susceptible to space-charge effects close to the emission surface.
Hence, a systematic study of space-charge effects would be highly desirable and of large importance for the ability to maximize the signal in applications related to strong-field emission from nanostructures. Here, we present intensity-dependent photoemission data from nanometric tungsten needle tips in intense few-cycle laser fields, recorded via both time-of-flight spectroscopy and photocurrent measurements. By analysis of this data and comparison with simulations, we identify a step-wise increase in the intensity-rescaled cutoff energies of emitted electrons as the onset of charge-interaction-dominated photoemission dynamics. The results are relevant for many related studies.

II. EXPERIMENTAL SETUP

The experimental setup is illustrated in Fig. 1. We obtained CEP-stable pulses centered at 750 nm from a hollow-core-fiber-broadened Ti:sapphire laser system (Spectra Physics, Femtopower Compact Pro HR/CEP) at 10 kHz repetition rate. The pulses were compressed with a set of chirped mirrors (Ultrafast Innovations, PC70) to 4.5 fs full-width-at-half-maximum (FWHM) of the intensity envelope, which was determined via the Dispersion Scan (d-scan) technique [34]. A pair of fused silica wedges was used to control the dispersion and the CEP of the pulses. The few-cycle pulses with controlled CEP were focused onto a nanometric tungsten needle tip using an off-axis parabola (OAP, f = 10 cm). The needle tip was produced using wet chemical etching [35], resulting in a typical apex radius of (40 ± 20) nm. The emitted photoelectrons were detected using a time-of-flight (TOF) spectrometer (Stefan Kaesdorf, ETF10) and a time-to-digital converter (FAST ComTec, P7889). Due to the vacuum requirements for the electron spectra measurements and the microchannel-plate detector of the spectrometer, the setup was placed in a vacuum chamber.

The photoemitted electrons also resulted in a photocurrent from the nanotip. The photocurrent was amplified using a low-noise high-gain transimpedance amplifier (FEMTO, DLPCA-200) and detected using a lock-in amplifier (Zürich Instruments, HF2LI). For the measurements of the CEP dependence, the CEP of consecutive laser pulses was flipped between φ_0 and φ_0 + π using the acousto-optic programmable dispersive filter (Fastlite Dazzler) of the Ti:sapphire laser system. The digital lock-in amplifier was capable of demodulating the signal input at several frequencies in parallel. For the first frequency, we chose the laser repetition rate f_rep = 10 kHz, which resulted in a signal proportional to the total photocurrent. The second frequency was the CEP-flipping frequency f_CEP, which was set to 5 kHz. We observed the best signal-to-noise performance at the maximum gain of 10^9 V/A of the transimpedance amplifier, even though the nominal gain bandwidth (f at −3 dB) was below 5 kHz, which slightly reduced the gain at 10 kHz. The conversion factor between signal amplitude and electrons per shot is approximately 0.6 electrons shot^-1 µV^-1.

Our experiments are conducted in the multi-electron emission regime with up to several hundred electrons per shot. Similar conditions were found for the nano-bowtie current experiments driven at MHz repetition rates [27]. Their study indirectly indicates 10^3 to 10^4 electrons per shot per nanostructure, based on the CEP-current per shot and nanostructure (0.11 electrons) and the ratio of CEP-current to total current (10^-4 to 10^-5).
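As a back-of-the-envelope check (ours, not the paper's), the quoted conversion factor of roughly 0.6 electrons shot^-1 µV^-1 follows from the nominal gain and repetition rate alone, ignoring the lock-in demodulation details and the slightly reduced gain at 10 kHz mentioned above:

```python
# Sanity check of the conversion between lock-in signal amplitude and
# electrons per shot, using only the nominal values quoted in the text.
e = 1.602e-19   # elementary charge [C]
gain = 1e9      # transimpedance gain [V/A]
f_rep = 1e4     # laser repetition rate [Hz]

# N electrons/shot give an average current I = N*e*f_rep, hence a
# voltage V = I*gain at the amplifier output. Inverting for V = 1 uV:
electrons_per_uV = 1e-6 / (gain * e * f_rep)
print(f"{electrons_per_uV:.2f} electrons per shot per uV")  # -> ~0.62
```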
The results of our studies are thus of direct importance also in related experiments where currents from nanostructures are measured.

III. RESULTS AND DISCUSSION

The connection between the photocurrent emitted from and flowing through the tip (measured with the lock-in amplifier as discussed above; for short, 'the photocurrent') and the TOF measurements is established in Fig. 2, which shows the intensity dependence of the number of detected electrons. In our experiments, we observed no significant damage to the nanotip during the measurement even at the highest intensities, as confirmed by the repeatability of the measurements. Both photocurrent (blue crosses and dashed line) and TOF (red triangles and solid line) measurements show a very similar evolution. However, in the photocurrent, approximately a factor of 70 more electrons are detected. The main reason is likely the partial transmission of the TOF spectrometer due to its acceptance angle of only around 2.5°. We intentionally did not make use of the lens in the TOF spectrometer in order to avoid distortions of the spectra. From a rough estimate considering the emission angle [9], we would expect a factor of 40, which is close to our observation. This illustrates the advantage of the photocurrent approach: it captures all emitted electrons when this is required to characterize the dynamics, which may allow shorter measurement times. The total number of detected electrons per shot in the photocurrent increases from below 5 at the lowest input power of 0.1 mW to above 3000 at 2 mW. The photocurrent may include slow, potentially thermally emitted electrons that are not resolved in the TOF measurement. Most importantly, the change of the slope in Fig. 2 towards a linear emission regime is interpreted as a signature of the onset of substantial charge interaction. (The uncertainties of the TOF count rates are on the order of the oscillations between consecutive data points, since these measurements are affected by slow laser power drifts and fluctuations, unlike the photocurrent, which is measured with a lock-in amplifier.)

A ratio of CEP-dependent modulation signal to total count rate of around 10^-2 is measured, which is between one and two orders of magnitude above what has been reported in other studies using nano-bowtie structures and MHz repetition rate sources [26]. The reason could be twofold. Firstly, it has been reported that in MHz repetition rate nanotip experiments the strong-field emission is suppressed due to accumulative heating [36]. Secondly, we use shorter input pulses and non-resonant field enhancement that translates into enhanced near-fields with a pulse duration similar to the incident pulse.

CEP-averaged photoelectron energy spectra measured by the TOF for various local intensities are shown in Fig. 3a. At low intensities (black curve), a low-energy peak connected to a plateau is observed, reminiscent of the direct-electron and rescattering contributions of strong-field photoemission corresponding to the 2 U_p^loc and 10 U_p^loc cutoffs, similar to e.g. Ref. [11]. Note that here U_p^loc is the local ponderomotive potential of the enhanced near-field. Beyond the plateau, the spectra decrease rapidly but still extend to quite high kinetic energies. This is measurable thanks to the high dynamic range of the spectrometer.
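For reference, the textbook strong-field relations behind these labels (standard definitions, not specific to this experiment), with E_loc the enhanced near-field amplitude and ω the laser angular frequency:

```latex
% Ponderomotive potential of the enhanced near-field and the classical
% "simple man's model" cutoffs referenced in the text:
\begin{equation}
  U_p^{\mathrm{loc}} = \frac{e^2 E_{\mathrm{loc}}^2}{4 m_e \omega^2},
  \qquad
  E_{\mathrm{direct}} \lesssim 2\,U_p^{\mathrm{loc}},
  \qquad
  E_{\mathrm{backscattered}} \lesssim 10\,U_p^{\mathrm{loc}}.
\end{equation}
```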
For each data set, the value of the cutoff of the plateau is evaluated by fitting straight lines to the logarithmic plot of the data below (orange) and above (green) the apparent cutoff and determining their intersection point, as illustrated here for the lowest input power shown. As the laser power is increased, the cutoff of the plateau evolves into a peak structure (cf. gray curve in Fig. 3a). The formation of the peak is consistent with the typical structure of elastic backscattering at higher intensity, where the underlying classical trajectory picture becomes increasingly appropriate [37].

FIG. 3. Impact of charge interaction on the cutoff energies of recollision electrons emitted from a nanometric tungsten needle tip: a) CEP-averaged electron energy spectra for different laser input powers (as indicated). The cutoff energies are defined via the intersection of linear fits of the data below (orange line) and above (green line) the apparent cutoff of the respective recollision plateaus (here visualized only for the lowest power). b) Cutoff energies as a function of near-field intensity (black circles). Colored arrows indicate the cutoffs corresponding to the spectra in a). Gray and red symbols show the cutoffs extracted from semi-classical M³C simulations performed for the experimental parameters when excluding and including charge interaction, respectively. Gray and black lines visualize the 10 U_p^loc cutoff law expected from the simple man's model and a rescaled 16 U_p^loc cutoff law. The scaling factor between input laser power and local intensity is deduced from matching the lowest three experimental cutoff values to the 10 U_p^loc cutoff law.

Clear indications for the onset of considerable charge-interaction effects are found both in the power scaling of the yield in Fig. 2, which no longer reflects the typical strongly non-linear yield increase with intensity but grows nearly linearly, and in the intensity scaling of the plateau cutoff (black circles) shown in Fig. 3b. For the lowest intensities, a linear scaling of the cutoff corresponding to the 10 U_p^loc cutoff law (solid gray line) is found. At a local intensity of about 7.5×10^13 W/cm^2, a step-wise increase of the cutoffs is observed, followed by an again nearly linear scaling corresponding to a modified cutoff law of about 16 U_p^loc (solid black line). Both effects, i.e., the nearly linear yield evolution with laser power and the increased cutoff, are consistent with previous results found for dielectric nanospheres [38,39]. According to these studies, ionization is limited by charge separation at the surface. The capacitor-like field formed by the released electrons and the residual ions remaining in the nanostructure quenches tunnel ionization, leading to a nearly linear growth of the electron yield with power. The additional energy gain is attributed to modified recollision dynamics due to the trapping field resulting from the charge separation and space-charge repulsion among the electrons in the departing bunches [38,39].

To substantiate that the observed cutoff enhancement is caused by charge interaction, we performed semi-classical trajectory simulations for the experimental parameters. We adapted our Mean-Field Mie Monte-Carlo (M³C) model, previously utilized to study strong-field photoemission and attosecond streaking at dielectric nanospheres [38,40-42], for the description of electron emission from a metallic nanotip. The details of the original nanosphere model are described in [43].
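Before turning to the model details, here is a minimal sketch of the two-line cutoff fit described at the top of this section; the fit windows are hypothetical placeholders, not the authors' values:

```python
# Extract a plateau cutoff as the intersection of two straight-line fits
# to log10(counts), one below and one above the apparent cutoff.
import numpy as np

def plateau_cutoff(energy, counts, fit_below, fit_above):
    """energy [eV] and counts (> 0) are 1D arrays; fit_below/fit_above
    are (lo, hi) energy windows on either side of the apparent cutoff."""
    logc = np.log10(counts)

    def linfit(window):
        m = (energy >= window[0]) & (energy <= window[1])
        return np.polyfit(energy[m], logc[m], 1)   # slope, intercept

    (a1, b1), (a2, b2) = linfit(fit_below), linfit(fit_above)
    return (b2 - b1) / (a1 - a2)   # energy where the two lines cross

# usage sketch (window values invented for illustration):
# E_c = plateau_cutoff(E, N, fit_below=(20, 40), fit_above=(50, 80))
```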
In brief, the near-field of a dielectric sphere is evaluated as a combination of the linear polarization field caused by an incident laser pulse and additional non-linear contributions from charge interaction that we treat as a mean field in electrostatic approximation. The latter includes both the Coulomb interactions among the free charges, i.e., liberated electrons and residual ions, and the additional sphere polarization they cause. Electron trajectories are generated by Monte-Carlo sampling of ADK-type tunneling rates [44] and are propagated in the local near-field by integrating classical equations of motion. For electrons propagating in the material, we account for elastic electron-atom collisions and inelastic collisions (impact ionization) via respective scattering cross-sections. To describe the photoemission from a metallic nanotip, we considered three key modifications. First, we allow tunneling only from one half-sphere to mimic the apex of the nanotip. Second, we account for the emerging image charges within the metal in the description of the mean field. Third, we use a Fowler-Nordheim-type tunneling rate and consider a work function of 6.5 eV for the oxidized tungsten nanotip, similar to earlier studies [11,45,46]. We note that the rate was scaled with a linear factor for best agreement with the experimental data.

The cutoff energies extracted from M³C simulations with the mean field turned off and on are shown in Fig. 3b as gray and red symbols, respectively. The simulations excluding the mean field predict cutoffs following the 10 U_p^loc law. When including the mean field, cutoffs around 10 U_p^loc are only obtained at low intensities, where ionization and thus charge-interaction-induced modifications of the local near-fields are weak. At higher intensities, the cutoff converges to 16 U_p^loc, in close agreement with the experiment. Although the transition between the two cutoff laws is less rapid than in the measured data, the semi-classical model captures the main trends and thus confirms that the observed step-like feature is a clear signature of the onset of charge-interaction-dominated photoemission.

So far, most systematic studies focusing on charge interaction in strong-field photoemission were performed on isolated nanospheres [10,38,41,47]. In these works, several effects could be identified by thorough analysis of the experimental results and extensive numerical simulations. Despite the differences in the strong-field photoemission process between nanometric metallic needle tips and dielectric nanospheres, the similarity of the observed charge-interaction effects (the strong reduction of the nonlinearity of the photoemission process and the increase of the cutoff energy) is striking.

Our measurements indicate that charge interaction starts to affect the electron dynamics above a near-field intensity of 7.5 × 10^13 W/cm^2, or around 1000 e^- per shot, for 4.5 fs pulses at 750 nm and tungsten needle tips with a tip radius of around 40 nm. While the strong-field tunneling photocurrent experiments on nano-bowties and triangles [25,28,29,31,48] focus mainly on the CEP-dependent current, which is on the order of one electron per shot, the total number of charges per shot is typically several orders of magnitude higher [26,27,31] and therefore in a similar regime as in our experiment.
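For intuition about the 10 U_p^loc reference law, the following is a bare "simple man's model" sketch: a single electron in a homogeneous oscillating field, with no near-field decay, no emission-rate weighting and no charge interaction. It is emphatically not the authors' M³C code. In scaled units (E0 = ω = m_e = 1, so U_p = 1/4) it recovers the classical ~10 U_p backscattering cutoff:

```python
# Simple man's model: electron born at rest at the surface, driven by
# E(t) = E0*cos(omega*t); on its first return it backscatters elastically
# and we record the final drift (cycle-averaged) kinetic energy.
import numpy as np

omega, E0 = 1.0, 1.0
Up = E0**2 / (4 * omega**2)   # ponderomotive potential in scaled units

def backscatter_energy(phi0, n_periods=3, dt=1e-3):
    t = phi0 / omega            # birth time set by the birth phase phi0
    x, v = 0.0, 0.0             # born at rest at the surface (x = 0)
    t_end = t + 2.0 * np.pi * n_periods / omega
    while t < t_end:
        v += -E0 * np.cos(omega * t) * dt   # acceleration of charge -1
        x += v * dt
        t += dt
        if x < 0.0:             # back at the surface: elastic v -> -v,
            # then drift velocity = flipped velocity + quiver offset
            v_drift = -v + (E0 / omega) * np.sin(omega * t)
            return 0.5 * v_drift**2
    return 0.0                  # never returned within the window

phases = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
cutoff = max(backscatter_energy(p) for p in phases)
print(f"backscattering cutoff ~ {cutoff / Up:.1f} U_p")   # -> ~10 U_p
```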
Our study also shows that, despite the clear charge interaction, waveform-dependent photocurrents can be measured, which is important for the development of applications of field-controlled currents. The discussion of charge interaction given here is therefore of high relevance to these studies as well. Additionally, the higher kinetic energies of electrons due to the charge interaction could prove useful in future applications.

IV. CONCLUSIONS

We have investigated the photoemission and photocurrents from nanometric tungsten needle tips in CEP-controlled few-cycle laser fields. For multi-electron emission, we identified two regimes: initially, the cutoff energies of emitted electrons closely follow what is expected from near-field-enhanced backscattering dynamics; at the onset of charge interactions becoming dominant in the emission process, we observed a step-wise increase in the cutoff energies. The results are relevant also for the strong-field electron emission from other nanostructures, including studies where ultrafast currents from plasmonic bow-tie nanostructures have been used for CEP detection and PHz-scale optoelectronic devices.
2021-06-02T01:16:11.685Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "af380bc101ad69d7136a68ada96f5a7b4851e809", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "af380bc101ad69d7136a68ada96f5a7b4851e809", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258480154
pes2o/s2orc
v3-fos-license
Dynamic cosmography of the local Universe: Laniakea and five more watershed superclusters

This article delivers the dynamical cosmography of the Local Universe within z = 0.1 (1 giga light-years). We exploit the gravitational velocity field computed using the CosmicFlows-4 catalog of galaxy distances to delineate superclusters as watersheds, publishing for the first time their size, shape, main streams of matter and the location of their central attractor. Laniakea, our home supercluster, is confirmed to have a size of 2 $\times 10^6$ (Mpc $h^{-1}$)$^3$. Five more known superclusters are now dynamically defined in the same way: Apus, Hercules, Lepus, Perseus-Pisces and Shapley. Also, the central repellers of the Bootes and Sculptor voids are found, and the Dipole and Cold Spot repellers now appear as a single gigantic entity. Interestingly, the observed superclusters are an order of magnitude larger than the theoretical ones predicted by cosmological $\Lambda$CDM simulations.

Introduction

Cosmography is the science that maps and measures the large-scale structures in the observed Universe that are built from the tug of war between gravitation and space expansion. Mapping the position and spatial extents of clusters, filaments, walls, superclusters and voids of galaxies is most frequently and most easily done by applying the Hubble-Lemaître law to redshift datasets. However, such positions and sizes are distorted by the local gravitational velocity field, which curves clusters of galaxies and elongates them radially towards the observer. This local effect also squeezes the thickness of the walls of galaxies, which appear more concentrated and less thick in redshift space than they are in real space. Several methodologies have been developed to counter these distortions, which will become dominant as bigger and bigger datasets arrive. However, in the local Universe, i.e., below z = 0.1, the cosmography can be studied using direct distance measurements of galaxy positions, which are much less affected by redshift-space distortion effects, but are numerically much fewer and consequently prone to the strong incompleteness biases collected under the name of "Malmquist biases". The gravitational (a.k.a. peculiar) velocity field derived from direct galaxy distances reconstructs the underlying distribution of mass responsible for these motions, hence allowing a dynamic cosmography of the (dark and luminous) matter distribution.

Such a dynamical mapping of the Local Universe volume also provides an evolution in the semantic segmentation of large-scale structures, with the possibility to compute watersheds, basins of attraction and basins of repulsion. Static filaments, walls and voids are embedded into a more global view of large dynamical superclusters delineated by empty regions. The main advantage of mapping superclusters as watershed basins in the divergence of the velocity field is the robustness of the definition. For example, it allows to quantitatively compare the observed sizes to the predicted ones in cosmological simulations (Peñaranda-Rivera et al. 2021), enabling superclusters to be used as cosmological probes. Mirroring this, in the hierarchy of structures, voids are regions empty of galaxies that can be described as large coherent volumes with evacuating flows of matter (Aragon-Calvo & Szalay 2013; Courtois et al. 2022). Dynamic mapping can thus also help using cosmic voids as probes of cosmology and galaxy evolution models (Platen et al. 2007; Fiorini et al. 2022; Domínguez-Gómez 2023).

This article presents the cosmography of the Local Universe within z = 0.1 by partitioning the volume of space into gravitational watersheds and delivering the positions of core attractors, main cosmic streams and strong repellers at the center of voids. Section 2.1 presents the computation of the peculiar velocity field using the CosmicFlows-4 catalog of galaxy distances (Courtois et al. 2023). The methodology used to define superclusters as basins of attraction (Dupuy et al. 2019) is explained in Section 2.2. Section 3 delivers the sizes and locations of the newly mapped local large-scale structures defined as basins of attraction. A short analysis of streamlines and gravitational valleys is presented in Section 4. Structures defined as repellers and basins of repulsion are discussed in Section 5. Conclusions are drawn in Section 6.
In this study, large-scale structures are defined as regions of space with gravitationally induced coherent inward or outward motions, which describe basins of attraction or basins of repulsion, respectively. In other words, a gravitational basin is the volume of space containing all the motions of mass going towards a common center, defined as an attractor in the case of basins of attraction, or a repeller for basins of repulsion. As per this definition, we may also describe basins as watersheds. We then obtain two tessellations of the velocity field into gravitational basins: into basins of attraction on one side, and into basins of repulsion on the other side. A point in space belongs to at most one basin of attraction and one basin of repulsion.

Data: 3-dimensional velocity field

The CosmicFlows-4 catalog (CF4, Tully et al. 2023) is currently the latest and largest compilation of reliable distances of galaxies and groups of galaxies independent of redshift, obtained through eight methodologies, such as the Tully-Fisher relation for spiral galaxies (TF, Tully & Fisher 1977) or the Fundamental Plane for elliptical galaxies (FP, Dressler et al. 1987b; Djorgovski & Davis 1987). It provides distances for 55,877 galaxies (Table 2 in Tully et al. 2023), and distances for 38,065 groups constructed from the groups of Kourkchi & Tully (2017), Tully (2015) and Tempel et al. (2017) (Table 3 in Tully et al. 2023), with a uniform coverage of the sky up to approximately 8,000 km s^-1. Two FP samples provide more distant coverage in two distinct patches of the sky: up to 16,000 km s^-1 in the southern celestial hemisphere (6dF Galaxy Survey or 6dFGSv, Springob et al. 2014) and up to 30,000 km s^-1 in the region covered by the Sloan Digital Sky Survey (SDSS, York et al. 2000; Howlett et al. 2022).

From the CF4 galaxy and group distances, it is possible to reconstruct the three-dimensional overdensity δ and the related peculiar velocity u fields, using the method introduced in Courtois et al. (2023). The method uses a Hamiltonian Monte Carlo (HMC) algorithm which derives the 3D overdensity and velocity fields by exploring a range of free parameters (matter density parameter Ω_m, bias, ...). The reader can refer to Courtois et al. (2023) for more details regarding the reconstruction methodology. In this article, all reconstructions have been derived by considering a ΛCDM cosmology for the initial values: Ω_m = 0.3, and a value of the Hubble constant H_0 = 74.6 km s^-1 Mpc^-1, compatible with the CF4 assembly of distances as stated in Tully et al. (2023).
In order to reach a stable convergence on the free parameters, about 10,000 HMC steps have been computed, and the resulting overdensity and peculiar velocity fields are obtained by averaging all steps (except the burn-in steps). We include in this article two 3D reconstructions of the overdensity and velocity fields: one reconstruction derived from the galaxy sample of the CF4 dataset, and another reconstruction obtained from the groups of galaxies also provided by the CF4 compilation. Both reconstructions are computed on a grid of size 128^3 with a side length of 1,000 Mpc h^-1, hence reaching a resolution of 7.8 Mpc h^-1.

Figures displaying the overdensity and velocity fields reconstructed from both CF4 galaxies and groups are included in Appendix A. Figure A.1 shows three-dimensional visualizations of the reconstructed overdensity field through 4 levels of isosurfaces ranging from white to dark red. The reconstruction from CF4 galaxies is shown on the top panel, while the reconstruction from CF4 groups is shown on the bottom panel. The related peculiar velocity field is represented by streamlines. The orientation of the visualization is given by the red, green, and blue arrows of length 50 Mpc h^-1, directed along the supergalactic cartesian SGX, SGY and SGZ axes respectively and located at the center of the reconstructed cube. More details on this plot are given in Appendix A.

We can recognize known structures in both reconstructions, such as Norma close to the center, the Shapley supercluster in -SGX and +SGY, the Perseus-Pisces supercluster on the +SGX side, as well as the Apus, Pisces-Cetus and Hercules superclusters. However, the two reconstructions differ, since the grouped version places a strong emphasis on the more distant SDSS region in +SGY. The grouped reconstruction is therefore less satisfying than the ungrouped one, since we notice a "ring" that appears due to a lack of constraints in the ortho-radial direction around the observer, also arising from the intrinsic shape and size of the SDSS galaxy distance catalog.

Watersheds and gravitational basins

The methodology we use to identify attractors, repellers and their respective gravitational basins in this article was introduced in Dupuy et al. (2019). In the case of a time-independent linear velocity field, streamlines are paths that are always tangent to the local value of the velocity field. They are computed by spatially integrating the components of the peculiar velocity field, starting from a given seed point. The integration is performed by the fourth-order Runge-Kutta (RK4) numerical integrator. Streamlines converge towards the critical points of the velocity field, i.e., attractors or repellers. Hence, depending on the direction of integration considered, either forward (integrating u) or backward (integrating −1 × u), we can identify the positions of attractors or repellers, respectively, by finding the points of convergence of streamlines. Then, a gravitational basin can be defined as the set of seed points whose streamlines converge towards the same critical point of the velocity field: an attractor for basins of attraction, or a repeller for basins of repulsion. As the peculiar velocity field is available on a grid, streamlines are actually computed for each voxel (a cell in three dimensions) of the velocity grid with a fixed number of RK4 iterations. Attractors (or repellers) are then identified as the voxels where most streamlines terminate, also called ending points.
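A minimal sketch of the RK4 streamline integration just described follows (the basin-growing step described next is omitted). This is not the Dupuy et al. (2019) code: the box size matches the 1,000 Mpc h^-1 grid above, but the step length, iteration count and the normalization of the field to a unit flow direction are illustrative choices:

```python
# RK4 streamline integration on a gridded peculiar velocity field.
# `v` is a (N, N, N, 3) array on a cubic box of side `box` [Mpc/h];
# direction=+1 follows the flow toward attractors, -1 toward repellers.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def streamline_end(v, seed, box=1000.0, direction=1, step=2.0, n_iter=2000):
    axis = np.linspace(0.0, box, v.shape[0])
    interp = RegularGridInterpolator((axis, axis, axis), v,
                                     bounds_error=False, fill_value=0.0)

    def f(p):                          # unit flow direction at point p
        u = direction * interp(p)[0]
        n = np.linalg.norm(u)
        return u / n if n > 0 else u

    p = np.asarray(seed, dtype=float)
    for _ in range(n_iter):            # classic RK4 update
        k1 = f(p)
        k2 = f(p + 0.5 * step * k1)
        k3 = f(p + 0.5 * step * k2)
        k4 = f(p + step * k3)
        p = p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p   # seeds sharing an ending point belong to the same basin
```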
The velocity grid is then segmented into basins of attraction (or repulsion), defined as sets of (seed) voxels whose streamlines share the same ending point. As streamlines are computed with a fixed number of iterations, they may be terminated before reaching a critical point of the velocity field. Hence, several iterations are needed when matching voxels to basins, making the basins "grow" around the attractors or repellers. Finally, we obtain two tessellations of the velocity field: into basins of attraction and into basins of repulsion, depending on the direction of integration. Each voxel of the velocity field belongs to, at most, one basin of attraction and one basin of repulsion. Since the considered peculiar velocity grid does not have periodic boundary conditions, seed voxels whose streamlines go out of the computational grid do not belong to any basin. More details on the basin segmentation algorithm can be found in Dupuy et al. (2019). More extensive testing of this methodology has been conducted using cosmological simulations in Dupuy et al. (2020).

Additionally, the methodology of segmenting basins allows the definition of a new quantity that helps visualize gravitational valleys. Instead of analyzing the convergence of streamlines to tessellate the velocity field into gravitational basins, one can simply construct a histogram from the streamlines by counting how many streamlines pass through each voxel, and assign that value to the associated voxel. This cube, of the same size and resolution as the grid containing the velocity field, produces a map of the density of streamlines, with regions of high or low streamline concentration, showcasing gravitational valleys. This definition allowed a kinematic confirmation of the Vela supercluster, known to be hidden within the Zone of Avoidance. We also refer the reader to Dupuy et al. (2019) for more details about that algorithm.

Fig. 1. Visualization of the basins of attraction of Laniakea (in red) and the five superclusters that are now dynamically defined as watersheds in the same way, namely Apus, Hercules, Lepus, Perseus-Pisces and Shapley, as well as the basins identified further away in the SDSS area (both in purple). The basins shown are obtained from the ungrouped CosmicFlows-4 (CF4) velocity field. Each galaxy from the CF4 catalog is positioned at its redshift and represented by a tiny dot, red for galaxies part of the Laniakea basin of attraction and purple otherwise. Streamlines are also computed for each galaxy with the same color code. The gradient of color is related to the density of streamlines. Streamlines colored white or yellow indicate a high density of streamlines. Each identified basin of attraction is annotated with the associated structure name. For reference, the three supergalactic cartesian orientation axes SGX, SGY, SGZ are drawn on the bottom left, represented respectively by red, green and blue arrows of size 50 Mpc h^-1.

Watersheds of the Local Universe and their core attractors

The watershed segmentation into basins of attraction was performed on both versions of the CosmicFlows-4 (CF4) catalog (ungrouped and grouped galaxies). The main body of the article presents the most robust analysis, obtained using individual galaxy distances (ungrouped CF4). However, all results for the grouped version are available in detail in Appendix A. Nine basins of attraction were found in the velocity field reconstructed from the CF4 individual galaxies.
They can be visualized in Figure 1. The new basin of attraction of our home supercluster Laniakea is shown in red; the other basins, namely the five superclusters now dynamically defined as watersheds (Apus, Hercules, Lepus, Perseus-Pisces and Shapley) and the basins identified in the area covered by SDSS, are shown in purple. Streamlines are computed from each individual CF4 galaxy, which are themselves represented by tiny dots. The gradient of color (from red to yellow and from purple to white for Laniakea and the other basins, respectively) is related to the density of streamlines. White or yellow streamlines denote a high concentration of streamlines in that area. Each basin of attraction is annotated with the name of the associated supercluster. For practical purposes, the three supergalactic cartesian orientation axes SGX, SGY and SGZ are displayed at the bottom left of the Figure, rendered by red, green and blue arrows of size 50 Mpc h^-1, respectively.

The basin of attraction corresponding to the Laniakea supercluster is only identified in the ungrouped reconstruction. In the grouped reconstruction, it appears as part of the Shapley basin of attraction. The segmented Laniakea basin has a volume of 1.9 × 10^6 (Mpc h^-1)^3. This can be compared to the earliest estimates from the CosmicFlows-2 (CF2) dataset: an eye-ball measurement made a decade ago gave 1.7 × 10^6 (Mpc h^-1)^3 (Tully et al. 2014), and applying the segmentation algorithm to CF2 data gives 2.3 × 10^6 (Mpc h^-1)^3. The Laniakea central attractor is located at [−62, −8, 39] Mpc h^-1 in supergalactic cartesian coordinates, equivalent to sgl = 187°, sgb = 32° and cz = 7370 km s^-1. This is in full accordance with the direction of the Great Attractor (Dressler et al. 1987a).

In the same way as Laniakea, the Apus supercluster is only identified as a single basin of attraction in the velocity field reconstructed from individual CF4 galaxies. Its associated attractor is found to be located at [−125, −23, −31] Mpc h^-1, while the segmented basin has a volume of 9.5 × 10^6 (Mpc h^-1)^3.

The attractor coordinates for the Hercules supercluster show a great coherence between the two different reconstructions, being detected at [−39, 70, 78] Mpc h^-1 and [−47, 78, 78] Mpc h^-1 for ungrouped and grouped galaxies, respectively, although the volumes of the basin differ considerably (3.1 × 10^6 (Mpc h^-1)^3 for individual galaxies and 0.8 × 10^6 (Mpc h^-1)^3 for galaxy groups), mainly due to the stronger emphasis of the SDSS data in the grouped CF4. We can compare the Hercules attractor coordinates to the position of the Abell cluster A2151 (Abell et al. 1989), also known as the Hercules cluster of galaxies, with a right ascension and declination of R.A. = 241° and Dec. = +17°.

The Lepus supercluster is a basin of attraction in both reconstructions, with a volume of 8.1 × 10^6 (Mpc h^-1)^3 for individual galaxies and 6.9 × 10^6 (Mpc h^-1)^3 for galaxy groups. The Perseus-Pisces supercluster basin of attraction displays a volume of 4.8 × 10^6 (Mpc h^-1)^3 in the individual-galaxy reconstruction, and 2.0 × 10^6 (Mpc h^-1)^3 in the reconstruction obtained from CF4 galaxy groups. Its central attractor is located at [47, −23, −31] Mpc h^-1 and [47, −23, −39] Mpc h^-1 for CF4 galaxies and groups, respectively, showing an important consistency between the two reconstructions.
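As a quick consistency check (ours, not part of the paper's pipeline), the quoted supergalactic cartesian attractor positions convert to the quoted (sgl, sgb, cz) values, assuming cz ≈ 100 d km s^-1 for a distance d expressed in Mpc h^-1. For the Laniakea attractor:

```python
# Convert the Laniakea attractor from supergalactic cartesian [Mpc/h]
# to supergalactic longitude/latitude and CMB-frame velocity.
import numpy as np

sgx, sgy, sgz = -62.0, -8.0, 39.0
d = np.sqrt(sgx**2 + sgy**2 + sgz**2)          # ~73.7 Mpc/h
sgl = np.degrees(np.arctan2(sgy, sgx)) % 360   # ~187 deg
sgb = np.degrees(np.arcsin(sgz / d))           # ~32 deg
cz = 100.0 * d                                 # ~7370 km/s
print(f"sgl = {sgl:.0f} deg, sgb = {sgb:.0f} deg, cz = {cz:.0f} km/s")
```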
The coordinates of the Perseus-Pisces attractor differ from the location of the richest cluster in the Perseus-Pisces supercluster, the Abell cluster A426, at R.A. = 50° and Dec. = +41° (also see Jõeveer et al. 1978; Tully 2015). However, due to the low resolution of the reconstructions considered in this manuscript (7.8 Mpc h^-1), the positions derived from the segmentation method are not accurate and cannot be compared to redshift positions of clusters.

Lastly, the Shapley supercluster has also been identified as a basin of attraction, around an attractor located at [−141, 62, −16] Mpc h^-1 in both velocity fields. The position of the attractor is in good agreement with, for example, two Abell clusters known to be part of the Shapley supercluster: A3570, located at R.A., Dec. = (207°, −37°), and A3575, at R.A., Dec. = (208°, −32°). The volume of Shapley is evaluated to be 7.9 × 10^6 (Mpc h^-1)^3 and 27.0 × 10^6 (Mpc h^-1)^3 for CF4 ungrouped and grouped galaxies, respectively. In both cases, the basin includes the Coma supercluster. This may be an indication that the Coma supercluster could be a sub-basin of the Shapley supercluster. Shapley's basin found using the CF4 grouped version is much larger, as it also encompasses the Apus and Laniakea superclusters, each identified as a separate basin in the reconstruction from CF4 ungrouped galaxies.

A few attractors have also been identified in the more distant region covered by the SDSS addition to the CF4 catalog. However, the computed basins of attraction cannot be constrained correctly, because streamlines overflow up to the borders of the computational box, due to the lack of more distant data needed to delimit the basins on their far side. Moreover, the errors on the galaxy distance data are proportional to distance; for this reason we will not comment on the volumes of these basins, though the instability of the values between the two reconstructions is easy to notice. For the sake of clarity, the SDSS attractors are classified in two groups depending on their position in the SDSS reconstructed overdensity. In the ungrouped reconstruction, only three attractors are identified.

The main source of error in the current cosmography is the difference between the two variants of the CosmicFlows-4 catalog: individual galaxies and groups of galaxies. Whether or not galaxy distances are grouped strongly impacts the resulting peculiar velocity fields. More work is needed in the preparation of catalogs of galaxy distances in order to mitigate that problem. The grouped version of CF4 gives too much importance to the SDSS subset of data, inflating the signal in that more distant region where the errors in the data are harder to constrain. This results in an overestimation of the sizes of the SDSS basins.

Streamlines and gravitational valleys

The left and middle panels of Figure 2 show the density of streamlines computed from the peculiar velocity field reconstructed from CF4 individual galaxies (left) and groups of galaxies (middle). Both panels correspond to a supergalactic cartesian SGX-SGY slice, centered on SGZ = 0 Mpc h^-1. Regions with a high concentration of streams are shown in yellow, while regions with fewer streams are shown in black. One can notice how the density of streamlines depicts a "skeleton" of the peculiar velocity field, as one can spot the superclusters present in this slice. The gradient of yellow and purple present around each structure also helps visually identify the coverage of the associated basins of attraction, as described earlier in the manuscript.
Repellers of the Local Universe

Repellers, and their associated basins of repulsion, can be identified in a manner mirroring attractors and basins of attraction, by simply computing streamlines in the backward direction, i.e., integrating −1 × u. The segmentation algorithm detected three repellers (with their associated basins of repulsion) in the ungrouped reconstruction. A basin of repulsion is identified in the Sculptor Void region, centered on a repeller located at [23, −109, −16] Mpc h^-1 and encompassing a volume of 1.3 × 10^6 (Mpc h^-1)^3. Similarly, a repeller in the SDSS area is detected near the Bootes Void region, at [−31, 125, 8] Mpc h^-1 (Courtois et al. 2017). The Dipole and Cold Spot repellers now appear as a single gigantic extended entity, of volume 163 × 10^6 (Mpc h^-1)^3. In the grouped reconstruction, two smaller new regions evacuating matter are mapped: one in the Zone of Avoidance and one within the SDSS surveyed volume. A full table is available in Appendix A, accompanied by three-dimensional visualizations of the detected repellers and their related basins of repulsion.

Fig. 2. Left and middle: Gravitational valleys in the Local Universe. Both panels show a supergalactic cartesian SGX-SGY slice of width 7.8 Mpc h^-1, centered on SGZ = 0 Mpc h^-1, of the density of streamlines, derived from the ungrouped (left) and grouped (middle) CosmicFlows-4 velocity field. Yellow represents voxels highly concentrated in streamlines, while black represents a low concentration of streamlines. Right: Density of streamlines from a ΛCDM simulation constrained with the CF2 data, with the same color code. Structures are annotated on the Figure. Note that the scale of the axes is different from the two other panels. As a matter of fact, the CF2 catalog was much sparser (seven times fewer data) and covered a much smaller volume (ten times smaller than CF4); thus this simulation is constrained only on scales up to at most 150 Mpc h^-1. Also, at that time, we had poor coverage of the large-scale structures of the southern hemisphere.

Conclusion

This article presents the dynamic cosmography of the local Universe, up to z = 0.1, by analysing the gravitational velocity field reconstructed from the CosmicFlows-4 galaxy distances. This analysis allows us to identify the large-scale structures of the Universe as gravitational basins. A decade after its discovery, we confirm the shape, size and attractor coordinates of our home supercluster Laniakea, which encompasses a volume of 1.9 × 10^6 (Mpc h^-1)^3. Five known superclusters, namely Apus, Hercules, Lepus, Perseus-Pisces and Shapley, are now dynamically defined as basins of attraction, or watersheds, for the first time, as well as basins detected in the region covered by SDSS. We notice that the Coma supercluster is part of the basin of Shapley, hinting that it may be a sub-structure of Shapley. Superclusters and gravitational basins can also be visualized as gravitational valleys, as in Figure 2. The coverage of CF4 also allows us to define a few basins of repulsion, namely the basin of the Sculptor Void, a basin near the Bootes Void, one in the region covered by SDSS and one in the Zone of Avoidance. The Dipole Repeller and the Cold Spot Repeller are now defined as a single gigantic entity. It is worth pointing out that the observed superclusters defined in this article are larger than the ones found in cosmological ΛCDM simulations. Dupuy et al.
(2020) studied the basins of attraction detected in a constrained cosmological simulation, using the same segmentation algorithm as in this manuscript. A SGX-SGY slice (centered on SGZ = 0 Mpc h^-1) of the density of streamlines derived from this simulation and extracted from Dupuy et al. (2020) is shown on the right panel of Figure 2, with the same color code. The authors showed that the typical size of basins of attraction lies between 1 × 10^5 and 1 × 10^6 (Mpc h^-1)^3. The simulation being constrained by the CosmicFlows-2 dataset, they defined the simulated basins of Laniakea and Perseus-Pisces, containing volumes of 5 × 10^5 (Mpc h^-1)^3 and 7 × 10^5 (Mpc h^-1)^3, respectively. This difference between the simulated and observed volumes may come from the fact that we are comparing structures from a simulation constrained with the CosmicFlows-2 (CF2) data to structures detected in reconstructions from CF4, with a now much larger coverage and a larger number of galaxies than CF2.

Errors on the definition of large-scale structures as gravitational basins can be derived in this work from the difference between the two reconstructions of the local velocity field considered, namely from distances of galaxies and of groups of galaxies, as grouping galaxies and their distances has a direct impact on the resulting peculiar velocity field. In particular, the grouped version of CF4 gives too much importance to the SDSS subset of data, overestimating the sizes of the SDSS basins.

Additionally, identifying the attractors of the gravitational watersheds of the local Universe with known rich galaxy clusters may define an interesting follow-up study to this work. For example, the attractor of the SDSS-2c basin of attraction may point to the very rich Abell cluster A2142, described in Einasto et al. (2020). However, as mentioned above, the current resolution of the reconstruction may be too low to obtain attractor positions accurate enough to be compared to the redshift positions of rich clusters.

At this point in time, the main contingency regarding the identification of large-scale structures comes from the inclusion of the SDSS data into the CosmicFlows catalog, which is a composite of several sources of observations. There is great hope that large independent surveys upcoming in the next few years, such as WALLABY (Widefield ASKAP L-band Legacy All-sky Blind surveY; Koribalski et al. 2020), bringing 90,000 galaxies up to z = 0.1, DESI (Dark Energy Spectroscopic Instrument; DESI Collaboration et al. 2016) and 4HS (4MOST Hemisphere Survey; de Jong et al. 2019), each delivering 500,000 galaxies up to z = 0.15, will further enrich the cosmography of our Universe.

The main part of the article discusses only two figures; a more detailed description is given in Appendix A, along with additional figures, a table summarizing all detected basins of attraction and repulsion, and links to interactive three-dimensional visualizations.

Declarations

- Materials and Correspondence: A. Dupuy is the author to whom correspondence and material requests should be addressed.
- Competing interests: The authors declare no competing interests.
- Availability of data and materials: The datasets generated during and/or analysed during the current study are available online on the CosmicFlows project page. Grids are of the same size as the velocity field discussed in this manuscript (side length of 1,000 Mpc h^-1, 128^3 voxels) and are filled with integers from 0 to n, where n is the number of basins detected.
Each integer labels a single basin (in the same order as listed in Table A.1, ignoring blank lines), and the value of all voxels belonging to a given basin is set to that same integer. Voxels set to 0 are not part of any basin. For example, in the case of basins of attraction in the CF4 galaxy velocity field, voxels filled with 1 are part of Laniakea, 2 of Apus, 3 of Hercules, etc.

The methodology described above has been applied to both reconstructions, obtained from CF4 individual galaxies and from CF4 groups. Structures identified as basins of attraction and repulsion in each reconstruction are listed in the first column of Table A.1, along with their volumes in units of 10^6 (Mpc h^-1)^3 in columns 5 and 9, and the supergalactic Cartesian coordinates of the respective attractors and repellers, in Mpc h^-1, in columns 2 to 4 and 6 to 8. The attractors are plotted as black spheres in Figure A.1, together with their associated basins of attraction, annotated with the names of the corresponding structures. The reconstruction from CF4 galaxies is shown in the top panel, and the reconstruction from CF4 groups in the bottom panel. The orientation of the two visualizations is given by the red, green and blue arrows of length 50 Mpc h^-1, directed along the supergalactic Cartesian SGX, SGY and SGZ axes respectively and located at the center of the reconstructed cube. We remind the reader that the four levels of isosurfaces, ranging from white to dark red, display the reconstructed overdensity field, while streamlines represent the related peculiar velocity field. Additionally, for the sake of clarity, streamlines are computed by considering as seed points only voxels that are part of a basin of attraction, allowing us to focus on the basins identified in the velocity field and to remove the extra noise caused by streamlines leaving the computational box (hence not part of any basin). For each basin of attraction, the streamlines converge nicely towards their respective core attractor. Additionally, Figure A.2 shows an SGX-SGY slice, centered on SGZ = 0 Mpc h^-1, of the overdensity field associated with the velocity field reconstructed from CF4 distances of galaxies (left) and groups (right). The overlaid contours represent the gravitational basins obtained from the tessellation of the velocity field: basins of attraction (filled contours without borders) and basins of repulsion (empty contours with borders), along with the names of the associated structures.

Fig. A.1. Attractors in the local Universe reconstructed from the CosmicFlows-4 galaxies (top) and groups (bottom). The reconstructed overdensity field is represented by four levels of isosurfaces from white to dark red: 0.75, 1, 1.4 and 1.75 for CF4 galaxies, and 0.5, 0.8, 1.2 and 1.5 for CF4 groups. Streamlines are derived by integrating the reconstructed peculiar velocity field in the forward direction. For clarity, extra noise has been removed by showing only streamlines belonging to the basins of attraction identified with the segmentation algorithm, listed in the top part of Table A.1. Black spheres are located at the coordinates of the attractors identified in the velocity field, i.e. at the points where the streamlines converge. The center of the reconstructed cube is indicated by the red, green and blue arrows of length 50 Mpc h^-1, directed along the supergalactic Cartesian SGX, SGY and SGZ axes respectively. Sketchfab interactive visualizations are available for both analyses: galaxies and groups.
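To make the released label-grid format described above concrete, here is a minimal sketch of how such a cube could be used once downloaded. The filename and loading step are placeholders; only the labelling convention (0 = no basin, 1 = Laniakea, 2 = Apus, ...) and the grid dimensions follow the description above.

```python
import numpy as np

# Placeholder load: a 128^3 cube of integer labels, 0 = no basin,
# 1 = Laniakea, 2 = Apus, 3 = Hercules, ... following Table A.1.
labels = np.load("cf4_galaxies_basins.npy")      # hypothetical filename

N, L = labels.shape[0], 1000.0                   # grid size, box size [Mpc/h]
voxel_volume = (L / N) ** 3                      # cell volume in (Mpc/h)^3

# Volume of each basin = number of voxels carrying its label x cell volume.
ids, counts = np.unique(labels[labels > 0], return_counts=True)
for basin_id, n_vox in zip(ids, counts):
    vol = n_vox * voxel_volume / 1e6             # in 10^6 (Mpc/h)^3, as in Table A.1
    print(f"basin {basin_id}: {vol:.2f} x 10^6 (Mpc/h)^3")

# A boolean mask of a single basin, e.g. Laniakea (label 1), can be used
# to select seed voxels for streamlines or to overlay contours on a slice.
laniakea_mask = labels == 1
```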
Table A.1. Gravitational basins identified in the peculiar velocity field reconstructed from the CosmicFlows-4 individual galaxies (columns 2 to 8) and groups (columns 9 to 15). Column 1 lists the names of structures identified as basins of attraction (lines 3 to 15) and basins of repulsion (lines 16 to 20) in each reconstruction. Columns 2 and 9, 3 and 10, and 4 and 11 give the equatorial coordinates (R.A. and Dec., both in degrees) and the velocity in the CMB frame of rest, V_CMB, in km/s. Columns 5 and 12, 6 and 13, and 7 and 14 give the supergalactic Cartesian coordinates of the corresponding attractor or repeller (SGX, SGY and SGZ respectively), in units of Mpc h^-1. Columns 8 and 15 provide the volumes of the gravitational basins (of attraction or repulsion), in units of 10^6 (Mpc h^-1)^3. Lines are left blank when no basin corresponding to the respective structure has been identified in the reconstruction.

Gravitational watersheds identified in the peculiar velocity field reconstructed from CF4 galaxies and groups, each represented by a blue contour and streamlines. In the case of Laniakea, the basin of attraction found in the CosmicFlows-2 reconstruction is also shown in yellow. The reconstructed overdensity field is represented by four levels of isosurfaces. The center of the reconstruction is indicated by the red, green and blue arrows of length 50 Mpc h^-1, each directed along SGX, SGY and SGZ.

Repellers in the local Universe reconstructed from the CosmicFlows-4 galaxies (top) and groups (bottom). Overdensities are represented in different red shades by three levels of isosurfaces: 1, 1.4 and 1.75 for CF4 galaxies, and 0.8, 1.2 and 1.5 for CF4 groups. Underdensities are highlighted in less transparent blue shades, represented by two levels of isosurfaces: -1.27 and -1.6 for CF4 galaxies, and -1 and -1.35 for CF4 groups. Streamlines are derived by integrating the reconstructed peculiar velocity field in the backward direction (i.e. integrating -u). For the sake of clarity, extra noise has been removed by showing only streamlines belonging to the basins of repulsion identified with the segmentation algorithm, listed in the bottom part of Table A.1. Black spheres are located at the coordinates of the repellers identified in the velocity field, i.e. at the points where the backward streamlines converge. The center of the reconstructed cube is indicated by the red, green and blue arrows of length 50 Mpc h^-1, directed along the supergalactic Cartesian SGX, SGY and SGZ axes respectively. Sketchfab interactive visualizations are available for both CF4 reconstructions: galaxies and groups.
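Finally, as a side note on the quantities tabulated in Table A.1, the conversion from an attractor's supergalactic Cartesian position to the (R.A., Dec., V_CMB) entries of the table can be sketched as follows. This is an illustration, not necessarily the authors' exact procedure: it assumes pure Hubble flow, so that a distance d in Mpc h^-1 maps to V_CMB = 100 d km/s, and relies on astropy's built-in Supergalactic frame for the angular conversion.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def cartesian_to_table_entry(sgx, sgy, sgz):
    """Convert supergalactic Cartesian coordinates (Mpc/h) to R.A., Dec.
    (degrees) and V_CMB (km/s), assuming pure Hubble flow V = 100 * d[Mpc/h]."""
    d = np.sqrt(sgx**2 + sgy**2 + sgz**2)        # distance in Mpc/h
    sgl = np.degrees(np.arctan2(sgy, sgx))       # supergalactic longitude
    sgb = np.degrees(np.arcsin(sgz / d))         # supergalactic latitude
    c = SkyCoord(sgl=sgl * u.deg, sgb=sgb * u.deg, frame="supergalactic")
    icrs = c.icrs                                # transform to equatorial frame
    v_cmb = 100.0 * d                            # km/s, since H0 = 100 h km/s/Mpc
    return icrs.ra.deg, icrs.dec.deg, v_cmb

# Example: the Sculptor Void repeller at [23, -109, -16] Mpc/h.
ra, dec, v = cartesian_to_table_entry(23.0, -109.0, -16.0)
print(f"R.A. = {ra:.1f} deg, Dec. = {dec:.1f} deg, V_CMB = {v:.0f} km/s")
```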
Book Review: Wechsler, H. S., & Diner, S. J. (2022). Unwelcome Guests: A History of Access to American Higher Education. United States: Johns Hopkins University Press.

Intricately woven into the tapestry of "diversity" in higher education institutions comes this profoundly important book written by Harold S. Wechsler and Steven J. Diner. Organized in five chapters, with a Preface, a section of Acknowledgements, an Introduction, a Conclusion, and extensive Notes, Unwelcome Guests is a plea for a re-evaluation of college admissions policies and goals, from the Civil Rights era of the late 1960s and early 1970s to the twenty-first century. Among the early figures discussed is Amos A. Phelps (1834), a Yale graduate, Congregationalist minister, and editor of Emancipation, who provided documents of actual enrollments in some colleges but also noted "increased antislavery sentiment at others." (p. 5) He also asserted that he had found five non-discriminating Massachusetts institutions: Amherst College, Williams College, Bridgewater Academy, Andover Seminary, and the Normal School at Lexington. Another school comes into the foreground when Oneida Institute is praised for a higher number of students of color, with an enrollment in 1840 of about 20 African Americans and Native people, some of whom had been rejected by other schools. (p. 9) Along the same line, we find an extremely poignant comment on the graduation orations of Oneida's African American students: The performances, as a whole, were highly creditable to the Institution. Some of them possessed unusual merit. But we forbear the invidious task of particularizing. It was gratifying to find marked evidence that a dark colored skin, and crisped hair, do not stand as indices of mental inferiority and barrenness. ("Oneida Institute," Friend of Man, 1838) More names become noteworthy in the authors' view of colleges open to Black women. One such example is Grace A. Mapps, "often called the first Black woman to graduate from a predominantly white four-year college," who received her degree from NYCC in 1852. (quoted in Notable Black American Women, 1996) New York Central College (originally Free Mission College) was the second of New York's integrated colleges, founded by Cyrus Pitt Grosvenor in 1849, whose aim was "to open a 'free institution' offering 'literary, scientific, moral, and physical education of both sexes and of all classes of youth.'" (Morning Star, 1995) As the authors note, attention gradually turns to Oberlin College, "the third predominantly white antebellum college with significant Black enrollments." (p. 11) It is important to stress that African American students represented 5% of the Oberlin student body in the 1850s, and that the numbers increased to 8% after the Civil War. During the same period of time, according to Cummings (1886), we find John Baldwin, who, in 1845, founded the Baldwin Institute, where students were accepted "regardless of race, gender, creed," or ability to pay. (p. 12) At a time when most colleges did not accept Black students, the reader is informed that Thomas Paul Sr., the founder of the First African Baptist Church in Boston, was not accepted at Brown University, and that James McCune Smith was rejected by Columbia and by Geneva Medical College, although he finally earned his bachelor's and medical degrees at the University of Glasgow. (p. 17) Extra information is added when we learn that only 28 African Americans are thought to have received bachelor's degrees before the Civil War.
The whole situation required drastic changes, and they came in 1847, when the National Convention of Colored People and Their Friends (open to white delegates) defended the idea of a separate Black college in the following words: The colored youth, under care of colored teachers, associating with those of his own complexion and condition, would not feel depressed as likely to be in other institutions, surrounded by those whom he had always regarded as opposed to his equality, and, therefore, colored colleges were the most favorable to his mental growth. (quoted in College for Colored Youth, p. 5) The authors' unflinching perspective then switches to the Reverend Charles Avery, a wealthy merchant and abolitionist whose life lessons, learned from witnessing the horrible conditions of the enslaved people on his trips to the South, led him to found the Allegheny Institute and Mission Church in Allegheny City, Pennsylvania, in 1849. The school became a liberal arts college in 1858 and then closed permanently in 1873; Avery provided in his will for a dozen scholarships enabling Black people to attend what is now the University of Pittsburgh, and left a considerable sum of money for scholarships at Oberlin College, which were used by at least 77 "indigent and worthy" colored students. ("Educational Funds," in Negro Yearbook, 1918-1919)

A smooth transition takes us to what happened after the Civil War. As mentioned in the book, African Americans and leaders of white missionary societies used religious conversion as a means to increase their efforts at race education in the South. Before World War II, we find Black people attending "colored colleges," and sometimes "predominantly white private institutions." In such a context, two sides of the same coin were presented by Kelly Miller, a Black mathematician, who asserted that, on one hand, "separating the races would not perpetuate prejudice," but, on the other hand, "Black students could not relate to the white faculty and administrators at white institutions." (p. 21) Race and racial coeducation, in the authors' view, remained for a while a seemingly debatable issue. Howard University, for example, was known by the mid-twentieth century as a "Negro university," while Atlanta University, a private institution, considered that "There are things more dangerous than coeducation." (p. 23) Morphed into the same discourse, and equally revealing, comes Samuel J. Lee, board president at the University of South Carolina, declaring that "In the chapel, recitation room, on the ball ground and in the study," he found that "the lessons of equality and mutual self-respect have been inculcated." A student (as quoted on p. 24) went even further when he specified that Black and white people studied together "without the black feeling honored or the whites disgraced." (Cohen, 2012, pp. 118-227) Many insights may be gleaned from the presentation of private colleges, but one stands out: in 1904 the Kentucky legislature enacted the Day Law, which interdicted coeducation at private colleges and "outlawed Black colleges within 25 miles of a white school." Northern journals presented their angle and criticized "race hatred, attacks on academic freedom, and the lack of equity." (p. 30) Nevertheless, the Day Law stayed unchanged until 1950, when, according to an amendment, African Americans were allowed to attend white colleges, graduate schools, and professional schools "if Kentucky State College did not offer equivalent programs." (ibid.)
As poignantly outlined in the section regarding military academies, we learn that, during the Reconstruction period, the US Naval Academy in Annapolis, Maryland, and the US Military Academy at West Point, New York, "accorded African Americans formal opportunities." (p. 35) Along the same lines, we find that in 1918 "about 550 campuses, including about 20 Black colleges, housed units of the Student Army Training Corps." (p. 36) Drawing on the numbers from the early twentieth century, the authors note that Black colleges continued to enroll most African American college students, citing 1938, when the percentage rose to 97 percent. A small decline in Black student enrollment was noticed in 1980, "but the proportion of Black people attending any historically Black college fell to 18 percent in 1976." (p. 37) The last chapter paragraphs epitomize the problems caused by policies and practices that became fixed by the turn of the twentieth century, when Black people attended mostly southern segregated colleges, a few northern Black institutions, or "predominantly white northern colleges." (p. 39)

Underscoring the effect of an unprecedented influx of migrants coming to the United States between the 1880s and the 1920s, Chapter Two, Ethnic Minorities, starts with a 41-volume report published in 1907 that included 5 volumes entitled Children of Immigrants in Schools. In 1908, surveys were done of the nationalities, races, and ethnicities of students at 77 colleges and universities. Specifically, except for Jews, "most first- and second-generation US students came from Northern and Western Europe and the British Isles." (p. 41) Furthermore, according to the Dillingham Commission's data collected in 1927, in 55 geographically distributed colleges, "98.24 percent of surveyed students were born in the United States." (ibid.) A similar appraisal found that 88 percent of the surveyed students had fathers born in the United States. Consequently, "many nationality, racial, and ethnic groups and subgroups, in addition to Native Americans and African Americans, encountered an unreceptive white student majority when venturing into college." (ibid.) The information presented in the main subdivisions of the chapter, European Americans, Hispanic Americans, and Asian Americans, clearly delineates the patterns created by students from these parts of the world in their pursuit of higher education. Some migrant families of European origin "learned that higher education could offer economic rewards and intergenerational mobility." (p. 42) With so much valuable information at hand, the authors resort to interesting examples to justify their selections. For example, some Eastern European nationalities seemed to share familial characteristics similar to those of Italian Americans, seeing little advantage to education. The prevalence of economic over cultural factors was therefore emphasized to explain "the persistence of low school and college attendance levels," not only for Italians but also for the Poles, who "came relatively late to higher education." (p. 45) The general discussion, as the authors claim, also took into consideration the opinions of community leaders who feared that attending denominational colleges or nineteenth-century state universities risked the religious welfare of certain students. Research showed that "Entering college could mean leaving one's community physically; mental departure could occur even before enrollment.
Biographical and fictional accounts viewed parental prohibition, based on fears of estrangement, indifference to religion, and poverty, as determining the educational fate of second-generation women." (p. 47)

On the basis of the evidence, we find that Hispanic students, although viewed by some people as one separate group, can be subdivided according to ethnicity, nationality, and social class. As theorized by Cohen (2000), some schools, like Rollins College (founded in 1885) in Winter Park, Florida, never hesitated to claim distinction as the nation's only college "in which the [white] grandchildren of abolitionists and confederate soldiers, in about equal numbers, sit together in the same classroom and play together on the same athletic field, and learn thus to understand, respect, and live with one another." (p. 49) When Cuban students were analyzed, though, Rollins College was found to have admitted white Cubans while turning away Cubans with darker skin. (ibid.) In a nuanced perspective on the legal status of Hispanics in the southwestern states, Mexican Americans were declared "to be white in Independent School District v. Salvatierra (1930), though the court also authorized the continued segregation of children with special language needs." (p. 50) Language difficulties became the core tenet in addressing unequal academic starting points. As concluded by a 1944 Yale PhD thesis, "a language handicap impaired academic achievement at the [New Mexico] university. Classwork and reading material bewildered students; their speech and writing indicated an imperfect knowledge of English." (p. 54) Generally speaking, racial cooperation remained unattainable and led to a political and social battleground.

The impact of Asian students is masterfully integrated into the body of the project, with relevant examples and survey results focusing on what the new immigrants brought into the melting pot, their high expectations, as well as the reaction coming from higher education leaders. In 1910, journalist E. E. Slosson acknowledged the intense race prejudice on the West Coast. Nonetheless, he also added that the University of California, for example, was among the first to predict that Asian students would bring "commercial, industrial, and educational opportunities for usefulness and profit." (quoted on p. 49) The Japanese, as well as the Chinese and the Filipino students, had to navigate through decades of limitations and restrictions brought about by regulations and misconceptions, but they all gradually became embedded in the social and cultural life of their communities. When referring to Asian students in general, the authors' argument was that college admission did not necessarily imply acceptance by the white majority culture. In a report entitled Sending Students to America, the Philippine commission asserted that "The quickest way for Filipino youth to acquire the English language and to arrive at an understanding of Western civilization is to live among Americans in the United States and be taught in American schools." (p. 63) The overall picture shows that the Philippines established scholarships and that, of the initial 100 students, most came from prominent families. However, even though a lot of money was spent on their education, "tolerance was not the norm," and some students were even accused of cheating. The chapter concludes that "Asian students fell victim to stereotypes, customs, and the law." (p. 64)
The general contours of Chapter Three, Streetcar College, build on and then expand the idea that, although some college founders expressed concerns that cities might have negative effects on young men and women, "some US intellectuals and academics celebrated urban colleges and their growing enrollments" in places like New York, Pittsburgh, Akron, Detroit, Chicago, and Louisville. (p. 67) One obvious reason was that "cities were increasingly populated by immigrants, Black people, and other ethnic and religious minorities." (ibid.) When asked if a student should go to a city or country college, the Western Reserve president made the following remark: Nearly all the colleges in the United States are, like the Jerusalem of David, beautiful for situation. Country colleges, say the advocates, are relatively inexpensive, freer from moral temptation; offer students greater extra-curricular opportunities, and more intimate association with nature. Cities, by contrast, offer contact with the best of humanity and its creations; the mightiest life of the nation pours into the city. (Thwing, 1899) In 1903, Princeton president Woodrow Wilson stated that "A university which one goes to in a street car cannot, it seems to me, fulfill the true ideal of what a university should be." By comparison, there were other institutions of higher education that praised city colleges because of their highly cultivated and stirring communities. To support this idea, the authors note that, according to the 1920 US Census, more Americans lived in cities than in rural areas. More specifically, several private nondenominational colleges, municipal colleges, proprietary institutions, and colleges with religious backing were opened in eastern and mid-western cities. "In short, the growth of cities and of colleges within them greatly increased the number of students from racial, ethnic, and religious minorities who could attend." (p. 69)

As research shows, since few students were able to buy a car before World War II, subways, streetcars and buses provided the means of transportation to most city colleges. To begin with, the University of Pittsburgh and the University of Newark are used as perfect examples. With a nickel for a one-way fare, this affordable mode of transportation paved the way for other urban colleges to grow and diversify. As the chancellor of Washington University argued, great universities are located in great cities, where great institutions, courts, industries, monuments, engineering feats and "great men" are to be found. Unfortunately, racial, ethnic, and religious minority students' enrollment was generally limited, although the cultural mix was considered an experiment in intergroup relations. (p. 72) Ingrained in the urban atmosphere are several colleges that attracted young people to study in New York. First, the University of the City of New York (now New York University) opened in 1831 "without significant denominational support or a benefactor's endowment." It was followed by the Cooper Institute for the Advancement of Science and Art (later renamed Cooper Union for the Advancement of Science and Art), established by philanthropist Peter Cooper in 1859. Next, Townsend Harris founded the Free Academy of New York in 1847. For women who wanted to become teachers, the Normal College opened in 1870, later changing its name to Hunter College.
In the same context, we find the city's immigrant population predominating at City College and Hunter, both remaining single-sex institutions until after World War II. In 1923 a New York Times editorial strongly supported public funding for the two colleges, and that paved the way for Brooklyn College, opened in 1930, and Queens College in Flushing, opened in 1937. (p. 79) Recounting "triumph over diversity", many colleges in Midwestern cities remained "a loosely integrated group of preparatory and professional schools," where "large populations created institutional competition." Where rapid enrollment growth created problems and sufficient funds were not received from large gifts, citizens were solicited to make much-needed donations. Additionally, cultural, economic, and political channels did their best to promote "cordial town-gown relations." (p. 83) Places of higher learning like Temple University in Philadelphia had no entrance examination, and the goal was to: give all classes, the unemployed and the employed, the opportunity to rise from the middle or the lowest to the highest intellectual plane, that thereby they may advance themselves financially, socially, morally, and spiritually, and may thus increase their ability to become benefactors to the world… (The Temple College Catalogue, 1895, p. 10)

After a quick view of the University of Buffalo and of the University of Newark (merged with Rutgers in 1946), the authors provide a smooth transition to debates about college life, with dormitories at their center. "Supporters of dormitories emphasized their commitment to diversity, and they viewed student unions as venues where staff could promote community and nudge students toward salutary academic, vocational, and social growth." (p. 98) As time went on, students became aware of the benefits of being integrated into existing campus life, and "the advantages of a rigorous education by professors offering tough love were worth sacrificing college town life." (p. 100) Although the real issue facing urban colleges remained their concern over the racial and ethnic makeup of the student body, several schools morphed from streetcar colleges into commuter colleges, where dorms and parking garages were built to accommodate the recruitment of suburban and out-of-town students.

The long and troubling history of discrimination, viewed from the student perspective, takes center stage in Chapter Four, Minority Student Experiences. Carefully addressed in this section, minority students' race, ethnicity, and religion are unraveled through the eyes of those who had to endure greater hostility and segregation in student activities and organizations than in classrooms and in their interactions with faculty. More hostile than others were encounters on the academic side, whereby "Some instructors created a chilly or even frigid racial climate, subjecting minorities to differential standards, ignoring their presence, banishing them to the last row of seats, and even failing them, unless and until deans or colleagues intervened." (p. 106) The underrepresented students had to cope with some faculty members who were obviously hostile, while most faculty showed indifference to the race problem. (ibid.) The effects of racial and ethnic dynamics are then better recognized and understood, as asserted by the authors, when we explore autobiographies written by people like Laura Zametkin, a Cornell University student, who objected to her name being mispronounced by several instructors.
A similar example comes from Gustavus Adolphus Steward, a Black educator and businessman, who expressed his disbelief at professors who embarrassed their audience with "darkey jokes" or "by repeating statistics covering the prevalence of crime, bastardy, and diseases of filth in the section of the town where 'white supremacy' forces them to crowd." (Steward, p. 245; quoted on pp. 109-110) Life lessons were quickly learned when several fraternities at CCNY refused to admit Bernard Baruch, or when, in 1867, "a Columbia College literary society denied admission to Oscar Straus, a Jew who later became secretary of commerce under Theodore Roosevelt." (p. 110) The reality of campus culture, with its restrictions, exclusions, and discrimination, led minority students "to devise strategies to coexist with the official and peer culture," through their local minority communities and their race- or nationality-based clubs. (p. 111) Black Greek organizations were not confined to their origins in 1906, when Alpha Phi Alpha began at Cornell University; similar fraternities and sororities were founded at the historically Black Howard University. (ibid.) Fear, prejudice, and ignorance are delineated as the major reasons for limitations and restrictions. Editorials published in the Daily Illini argued that freedom of association had nothing to do with fear and "prejudice […] based on ignorance." The same student newspaper went even further when a 1932 editorial posited that "There can be small honor in membership in an honorary society which creates the false impression that all achievement is confined to the white race." (Pickens, p. 4; quoted on p. 120) The chapter ends by naming Harvard and Columbia among those higher education institutions that "pioneered discriminatory admissions practices among northern colleges after World War I." (p. 125)

The debate about race and prejudice continues in Chapter Five, Lowering the Barriers, with a plea coming from Leila B. Strayhorn, a freshman at Lincoln University, who, in 1945, strongly advised Black students "to attend integrated colleges, asserting that it would cause white colleges to incorporate Black history into the curriculum, would raise the competition level for Black people, and would enable a Black student to rid himself 'of the bitterness with which he regards the white race.'" (p. 126) The counterargument comes from people like Vivian Freeman, a junior at Fisk University, and Mildred E. Delaney, a student at Talladega College, who defended the choice of Black colleges, which "supplied academic, social, and peer support." In seeking to deal with admissions discrimination and segregation criteria, the authors resort to assertions offered by leaders of higher education associations, like the American Council on Education, which, in 1949, advocated adherence to meritocratic principles. In their view, opportunities for higher education should never be restricted "on any bases whatsoever other than the ability and interest of the individual." (Davis, 1949; quoted on p. 128) Navigating through the wealth of factual information, we discover social scientists like Robert Redfield from the University of Chicago, who argued "that legislation and administration expressed social mores, but they also created social mores." Stimulated by campus demonstrations, social scientists researched "ways of ameliorating the negative experiences of minority students on predominantly white campuses." (p. 130)
In this context, the Commission on Higher Education was appointed by President Harry S. Truman in 1946, and its 1947 report suggested "a major expansion of public higher education," which led to the creation of two-year colleges and, in time, encouraged many states to change their teachers colleges into state colleges and then into regional state universities. (p. 131) Going one step further, southern states offered "differential scholarships" helping resident Black people to enroll in northern non-segregated professional schools and covering "the difference between in-state resident tuition and the receiving institution's out-of-state tuition charges." Missouri began the practice in 1921 and 1929, and other states followed: Maryland (1933), Virginia (1936), and Kentucky (1936). A somewhat different but interesting approach came from the Southern Regional Education Compact (1948), which decided to concentrate Black professional and graduate students in segregated regional schools. The segregated system continued well into the 1970s, "when federal intervention led to the desegregation of public medical schools in the South." (p. 133)

Race relations at some universities, as the authors claim, changed drastically after World War II, when many students campaigned, picketed, and boycotted in response to actions taken by those who defended segregation and racial superiority. The Liberal Union at Princeton, Penn State, and Wayne University offer relevant examples of such activities. Scenes of protest erupted in Black colleges during the 1960s, and they were compounded by "on-campus pressures from students, faculty (sometimes), and administrators (sometimes directly and sometimes working through the students)," which resulted in both quiet integration and outright resistance on and off campus. (p. 137) Other results, like voluntary desegregation, included a considerable number of southern colleges, not to mention the University of Kentucky (in 1950) and three Catholic colleges in the same state (Nazareth and Ursuline for women and Bellarmine for men), which started to accept Black people. Unfortunately, surveys conducted in those years "showed discrimination persisted into the 1960s." (p. 142) According to the authors, the postwar period brought significant changes in racial and ethnic stratification, with the rise of community colleges (and simultaneously regional colleges) taking the pressure off four-year colleges. "By the 1970s, about 55 percent of all Hispanic students in the United States attended community colleges. African Americans would have shown similar representation if one looked only at predominantly white colleges." (p. 150) The chapter wrap-up points to an increased number of Black students at predominantly white colleges, mostly due to the civil rights movement. The compelling effort to achieve diversity, therefore, as research shows, has led to a completely different view of students' race, ethnicity, and religion. And here we are, with some supporting the consideration of race in college admissions as a remedy for ongoing racial inequality, and with others opposing this view, believing that it is a violation of the country's commitment to civil rights. (p. 155)
In conclusion, we can say that Unwelcome Guests makes a considerable contribution to the debate regarding race, nationality, and ethnicity in the context of college admissions decisions, a debate which might lead to better criteria, improved academic disciplines, and incorporated professional schools on expanded and beautiful campuses all over the nation. On one side we have supporters of affirmative action as a means of changing college admissions practices and their racist effects; on the other side, we find college leaders, mass media, and policy analysts pointing to the violation of civil rights laws that prohibit racial discrimination, arguing that such policies reinforce the historical racial bias in higher education, now directed toward whites. One way or another, "Long-standing policies and practices that make minority students feel like unwelcome guests in another's house have proved difficult to undo, even when all sides express goodwill." (p. 160) The debate continues, and the authors, as well as the readers, might come up with more questions and with interesting alternatives. Readers will certainly appreciate the depth and relevance of the authors' methodical research, which presents, develops, expands, and ultimately elevates the examination of the effects of racist practices on minorities to a level where the role of race becomes an integral part of the history of American higher education.
Does technology flatten authenticity? Exploring the use of digital storytelling as a learning tool in mental health nurse education

ABSTRACT The article reflects on digital storytelling as an approach designed to apply the theory of authentic learning in a co-productive context. It explores the suitability of digital stories as pedagogical tools and examines the connection made between the individual and group interpretation of these stories. A participant group (n = 7) comprising family carers, people with lived experience and mental health nursing students were invited to join two facilitated workshops. The group reviewed four contrasting forms of digital stories with the aim of eliciting and sharing their perspectives. It was found that digital audio compared less well to visual media on authenticity scales. Still photobook-style images were also perceived to be less authentic than dramatic film employing professional actors. Furthermore, it was found that the essence of authenticity became richer as the process and activities of co-productive engagement developed. It is proposed that creating digital scenarios co-productively provides a relational environment in which the essence of authenticity can be felt and expressed. The article will explore the suitability of digital stories as pedagogical tools and examine the process of co-production as an approach which accentuates realism.

Introduction

When individuals experience something perceived as authentic, the experience is intense and deeply trusted as being 'true' to a reality that connects symbiotically with their own unique world. Efforts to contextualise the phenomenon of authenticity in higher education have alternated between surface interpretations of simply generating 'understanding', through to Freire's (1970) more complex interpretation whereby authenticity must be a conscious, challenging and transformative experience (Freire in Serrano et al., 2018). In this study, the enquiry focused on examining the relationship between digital media and authenticity in scenario-based learning. Action research was employed as the enquiry's methodology, with co-production the approach to knowledge generation. As in many other people-centred occupations, mental health nursing is essentially an activity focused on two people: the nurse and 'the patient'. The role of the nurse is to engage with the patient through relational care, to support and guide them during episodes of acute psychological distress (Peplau, 1952). As the scope of the profession has expanded to include specialist areas such as forensic care, increasingly complex challenges present themselves (Norman & Ryrie, 2018). It is within the principles of humanism, inclusion and recognition of individuality (Evans & Hannigan, 2016) that nurses are deemed, by virtue of a professional Code of Conduct (Nursing and Midwifery Council, 2018), to respond. These principles informed the design of this study. They also provided the motivation behind the principal enquiry question: do digital scenarios present an authentic reality and therefore an effective learning tool? The participant group comprised family carers, people with lived experience and mental health student nurses, and aimed to reflect the co-collaboration of the nurse/patient relationship. Data were gathered from two facilitated workshops.
The workshop facilitator aimed to create an environment in which the expression of authenticity could be identified, extrapolated and examined, in an approach similar to the ethos of relationship-based care. A significant difference, however, was that the selected scenarios, all familiar in some way to the participant group, were presented through digital media rather than through the written word or real-world encounters. Coeckelbergh and Reijers (2016) asserted that technologies do have narrative capacity, but their value is dependent on the ethical standpoint of the individual. The study aimed to examine the strength of each of the digital scenarios from the perspective of the participants. In addition, it aimed to explore whether authenticity can be achieved using digital media, particularly those formed around simulation, drama or the reconstruction of a real-life event.

Background

Stories are omnipresent across cultures and time (Palacios et al., 2015). They are tools of communication which create connections and challenge realities (Alterio & McDrury, 2016; Moon & Fowler, 2008). In nurse education, as in other educational contexts, narrative is an essential pedagogical tool utilised to enable students to apply knowledge, to build understanding and to test assumptions. Narrative is utilised in a wide range of teaching and learning contexts. In mental health nurse education, stories are commonly used to examine constructs such as stigma, power and medical hegemony. Herrington et al. (2006), amongst others, postulated that the presence of authenticity in narrative theory is an essential ingredient of transformative learning. The experience of sensing authenticity provides the learner with the possibility of liberating themselves from previously held assumptions (Serrano et al., 2018). In this study, the hypothesis was that when a digital story is employed as a learning tool, a greater degree of authenticity may be experienced. For this to be achieved, pedagogical consideration must be given to the students as individuals as well as a collective group, the objectives of the learning task and the influence of the type of media utilised. In addition, consideration must be given as to whether the selection of the scenario should be in the control of the teacher, or should sit more democratically within the overall learning design. In the context of the educational needs of mental health nurses, as with other people-led occupations, storytelling and the personal narrative have a place of significance in knowledge construction and relationship building. Mental health professionals and mental health services have been profoundly affected by the emergence of the survivor movement in the 1980s. This initiated a seismic change in the way in which mental health is understood (CAPS, 2010), with a growing recognition that the person is an expert of their own experience, thus impacting on the power dynamic with paid professionals. The Recovery Movement, as it became known, asserted the importance of the unique experience of the individual. People began to tell their personal stories and challenged the medical hegemony which laid claim to mental illness as a largely biological event that required the expertise of professionals to manage 'symptoms'. In this context, service users started to gain voice and strength through storytelling, with the consequence that attitudes, behaviours and values began to shift (Byrne et al., 2014).
The influence of the Recovery Movement has had a profound effect on the education of mental health nurses. Over the past 10 years, there has been widespread agreement that involving people with lived experience in professional programmes of education will produce a workforce capable of improving the outcomes and experience of people using mental health services (Lathlean et al., 2008; McKeown et al., 2012). However, the evidence on which to base this assertion is lacking in relation to measurable impact and added value for health and social care education (McIntosh, 2018; Rhodes, 2013). Yet the inclusion of personal narratives in mental health education is now expected as a quality indicator of nurse education. It has also become a method of demonstrating the role of the service user as an equal partner whose views have equal importance. Crookes et al. (2013, p. 242) concluded that stories offer students a vital link between theory and practice and an 'engagement beyond the classroom'. However, the exponential growth in online learning (Selwyn, 2014), combined with the increasing pressure on finding classroom space, has led to more learning activities simply being placed on learning platforms as Word documents. Activities such as these have been shown to limit students' ability to engage with the scenario. Instead, more creative approaches are required to ensure deeper learning (Ackland-Tilbrook & Warland, 2015). Drawing on findings from a small-scale study, this article examines the theoretical and philosophical questions that arise from seeking to enable authenticity in digital storytelling and to better understand its value to mental health nurse education.

Literature review

Literature surrounding storytelling in nurse education, authenticity and the use of digital technologies was examined to provide context to the enquiry. Perhaps confusingly, stories are variously described as narratives, case studies, critical incidents, life histories, anecdotes, scenarios and illustrations, amongst others, making identifying relevant literature complex. Stories may also be spoken, written, filmed, mimed or acted, and might be in digital format (Moon & Fowler, 2008). When stories are formalised in educational contexts, Alterio and McDrury (2016) understood that the potential learning power of stories relates to their capacity to capture the everyday, turning it into the focus for learning and reflection, with the intention of enabling deeper understanding and critical thinking. Given this, we should not be surprised, then, that the use of stories in educational practice is commonplace, in both school-based and higher education settings. Focusing on the higher education context specifically, Moon and Fowler (2008, p. 232) recognised the power of stories to 'capture the holistic and lived experience of the subject being taught'. As such, they can facilitate learning in ways that traditional lectures and tutorials cannot, an assertion which owes much to their ability to 'tap into imagination, emotions and form new and meaningful connections between existing areas of knowledge' (p. 232). Kinsey and Moore (2015) picked up on the potential of stories and their capacity to enrich learning opportunities in subjects where they might not seem to fit, in their case, mathematics. Citing Aristotle, they identify the key component of a story as being a well-constructed plot, to which they add the need for conflict and its resolution.
Hendry (2009) pushes thinking a little further, explaining that the purpose of stories is to facilitate meaning and knowing with the intention of effecting altered perspectives and/or new insights. Crookes et al. (2013) explored the value of stories, alongside other techniques, in the specific context of nursing education, where they see particular challenges for nursing students, who have to develop within a very practical discipline within which they are expected to learn but may not always be exposed to all of the experiences they need. Thus, stories can afford insight into the unfamiliar, with Jack and Hampshire (2016) suggesting the value of student nurses writing the stories to express their ideas and feelings. Again in the context of nurse education, Chan (2013) had another perspective, proposing that stories can be used to maintain student interest and motivation to learn. Gidman (2013) added another possibility into the mix, reporting the success of a project within which student nurses actively sought out patients' stories in the practice setting. Moving now to authenticity, the word 'authentic' derives from the Greek authentikos, rooted in autos, 'self', and conveying 'genuine'. In each of the scenarios selected for this enquiry, the potency of genuineness was largely created by the choice of story, the form of the digital media and the impressions of the teacher making the selection. However, 'believability' is only truly sensed when the individual viewer is able to release and relate their own experience to the viewing. The ability of the teacher to manipulate the believability factor in either a pre-formed podcast or film is limited, as the media presents as a finished product that cannot be adapted. This contrasts with the world of online gaming, where the creation of a believable 'other world' is an art and a craft. If there is an absence of a believable reality, the 'gamer' is likely to disengage and consider the product undesirable. Klabbers (2003) examined the craft of constructing believable worlds and considered the way in which the parts of the simulated world are constructed. He concluded that it requires individual parts to be organised and controlled manually to produce a 'natural' effect. These perspectives resonate with Mitty's (2010) view that storying can be mutually beneficial, validating and transformative for the listener, not least because learning to listen to a story, and to engage with the storyteller, can significantly enhance communication skills, as well as yield insight. If stories are to be effective tools, the experience of watching or listening must be immersive and connect in some way to the professional or personal world of the student. Herrington et al. (2006) identified this as authentic learning and suggested that it is only likely to occur when the individual student is challenged with competing perspectives and interpretations of a situation. This raises an important point: in their quest to ensure the alignment of learning activities with intended outcomes and assessment (see, for example, Biggs & Tang, 2011), educators create stories, or even perhaps over-engineer them, with the consequent loss of authenticity from a student perspective. The New Media Consortium Report (Adams Becker et al., 2017) seems to concur with the argument that ensuring authentic learning opportunities is a significant challenge.
Although stories can provoke curiosity and stimulate critical thinking, their production can also risk superficial interpretation and a narrow projection of human relationships. Matthews (2014) speculated that digital media have the potential to provide the platform for transforming the story or narrative into a format that is open and flexible. However, this brings to the fore another challenge: while good teaching includes technology, its effective use requires digital competence alongside an awareness of the potential effect of each component part of the learning activity. This issue of the relational influences between the digital media and the genre of story is one that will be returned to in the discussion section of the article.

Study design

The initial research enquiry focused primarily on a comparative analysis of digital media and their ability to convey realism and authenticity as perceived by individuals in the participant group. The objective was to generate knowledge and understanding about the felt experience of authenticity from each of the selected media. A secondary objective was to explore the pedagogical capability of digital storytelling for creating and selecting digital resources that hold a high tariff of authenticity. The suggestion that authenticity requires a 'natural' effect leads to the question of the pedagogical approach to storytelling. Rule (2006) proposed that employing a sociocultural approach is necessary. In this, real-world problems connect with workplace roles and provide open-ended enquiry and opportunity for discourse among a community. Students can then self-regulate learning and have a sense of empowerment in the process. This approach is somewhat mirrored in the researchers' choice of methodology and design of the enquiry. A co-productive approach was taken in which the participant group were equal creators of the enquiry as it developed, thus representing the social constructivist views held by the researchers and played out in the process of the enquiry. However, the purity of the co-productive approach was restricted by the teachers selecting the scenarios to be reviewed. The participant group (n = 7) comprised family carers, people with lived experience and mental health student nurses. Membership of the group aimed to replicate and represent the two protagonist roles in the nurse/patient relationship, that is, the 'student nurse' and the 'individual with lived experience or their family member'. The group viewed four digital scenarios in which the subject matter presented a narrative that would resonate with the group participants. With the exception of one scenario, the stories were selected with the two protagonist groups prominent in the story. In addition, woven through the scenarios was an issue of social, moral or value-based conflict. It was this conflict that provided the fertile ground for group members to exchange perspectives, interpretations and emotional connections with the story, articulated through the lens of their own realities. Details of the selected scenarios and the digital medium are as follows: (1) A filmed simulation using amateur actors depicting a mental health nursing assessment of a child and his mother. The two facilitated workshops enabled qualitative and quantitative data to be generated. In the first workshop, participants' perceptions of authenticity were rated using Herrington et al.'s (2006) Scale of Authenticity.
The qualitative data were carried forward into the second workshop, where the rating results were revealed to the group, facilitating a group discussion which was recorded and thematically analysed.

Findings

The findings can be most effectively summarised into three areas.

The relationship between the form of digital media, the content and the experience of immersion

Of the four scenarios, the drama (3) had the highest rating on Herrington et al.'s (2006) Scale of Authenticity, with the storybook (1) having a slightly lower rating. The challenging nature of the narrative story in the drama, along with the power of the imagery in conveying atmosphere and emotion, accounted for its having the highest rating on the Scale of Authenticity. The audio podcast (4) performed the poorest, accounted for by its perceived lack of relevance and its short length. The activity of listening was a factor in the poor scoring: the students were unfamiliar with listening as a standalone activity. These findings can be partially attributed to the sense of immersion the participants experienced when watching or listening to the scenario. The film was a powerful exposé of life for young people on the margins of society. The kernel of the story centred on a young woman battling addiction and choosing to have her baby despite professional advice to do otherwise. This provoked some contentious discussion between the participant members. The student nurses related to scenarios (2) and (3) through their own learning experiences of supporting individuals with addiction issues. They reflected on the challenges of understanding choices made by 'patients' which seemed to lead to further disadvantage and ill health. The scenarios led them to reflect back on these experiences and reconsider the view that people ('drug users') simply made 'bad choices'. The participants with lived experience, on the other hand, reinforced the sense of dehumanisation that can occur when individuals are exposed to a chronic sense of community marginalisation.

The difference in realities and interpretative explanations between the protagonist groups

The level of projected realism and explicit conflict in the scenario was proportionate to the expression of differences of interpretation between the two groups. So, the two scenarios that featured individuals with addictions (2 and 3) provoked the most dissent. The student nurses expressed largely paternalistic views around the need for care and control, whilst the family members dwelt on the impoverished living conditions and limited life choices available to the characters. This was in contrast to scenario (1), whose amateur production seemed to fail to convey sufficient authenticity to warrant much attention or conflict in the group. The subject matter of death and dying in the audio podcast was considered by the student nurses to be largely irrelevant to mental health nursing. This was in contrast to the family members and users of mental health services, who were tearful and saddened by the listening experience.

Challenging perceptions and confronting stereotypes

The form of media and the content of the material were significant in tempering the sense of immersion in each of the narratives. Immersion presents as an essential quality in which the individual experiences a simultaneous sense of absorption with the media alongside a disconnection from immediate reality.
Challenging nurses to think beyond the boundaries of widely accepted societal attitudes and explanations for mental ill health is central to any undergraduate nursing programme. However, this study would suggest that, to do that effectively, the scenario must convey realism. The participant group agreed that digital scenarios provided an important medium for challenging beliefs, concurring on the importance of the story having a sufficient level of complexity for exploration and critique if new learning is to be achieved.

Discussion and reflections

The aim of the enquiry was to generate new understanding about the relationship between digital media and authenticity in the learning journey of mental health nursing students. Data gathered from the workshops provided a platform from which the relationship between the media, the subject matter and the elements of the narrative, such as length and genre, could be examined. A further aim was to examine the influence of employing a co-productive approach to the methodology. The two workshops only served to begin to signal some of the more complex and subtle issues around the use of digitally conveyed storytelling as an educational tool. The capacity and rationale for a second phase of research are now explored in relation to the findings and reflections of the author group. It can be readily concluded that the formal use of stories in education generally, and nursing education specifically, is uncontested. However, this study generates new knowledge about the importance of considering the nuances of the digital media as well as the method of producing the scenario. Despite the different ways in which stories might be employed, stories can initiate reflective learning, a practice which promotes the contemplation of new knowledge and invites challenge to that which is already accepted as true (Adamson & Dewar, 2015). The findings from this enquiry concur with the assertion that stories can be effective learning tools. Meier and Stremmel (2010, p. 249) identified an alternative conception of a story. They diverged from the conception of a story as having 'a beginning and an end', and as 'something you fix, frame and give meaning to'. Instead, they regarded the story as being inseparable from the person, living with them and extending its relevance 'in a multitude of ways and situations'. From this frame of reference, stories are not 'out there' and may therefore not be benign in their influence, despite the intention of the teller. Rather, for Meier and Stremmel (2010), stories present 'as universal mirrors that show us the truth about ourselves: who we are and why we do what we do' (p. 49). In this sense, they reflect something of Frank's (2010, p. 3) view, which is that 'stories animate life' because that is 'their work' and then they go on 'to instigate'. As such, they have agency and are therefore neither passive nor necessarily innocuous. In some ways, this explains the diverging explanations between the participant groups. The protagonists in the scenarios were reflected in the composition of that group. The student nurses seemed compelled to protect and defend their own professional values exhibited in the scenarios, though there was a sense of unexpressed discomfort. The family members and service users, on the other hand, readily, but gently, drew attention to the social, cultural and economic complexities of each of the stories.
A third and fourth workshop may have enabled the group to feel emotionally safe enough to explore, express and work through these conflicts and, in doing so, enabled the participants to construct alternative realities. The two workshops left the enquiry feeling a little 'unfinished'. However, this sense of incompleteness does not diminish what the authors were able to learn.

The need for nurses to engage with people who may have different values, beliefs and perspectives is essential; however, such encounters are not always directly available. Enquiry/scenario-based learning using genuine stories offers educationalists an opportunity to maximise the value of stories while maintaining the involvement of people with lived experience through the co-production of the learning resources. The impact of sharing stories with individuals with lived experience requires attention to ensure the intense 'reality' of the shared experience does not impact negatively on either party (McIntosh, 2018). Digital stories offer a practical and accessible alternative that spares the service user the labour of repeated retelling, as stories can be developed, recorded and replayed without the presence of the service user and all the pressure that might cause. In addition, if the stories are co-produced, then the authenticity value is likely to be enhanced. Co-creating stories would serve to capture essential knowledge around technological, pedagogical and conceptual aspects of mental health nursing education. Key participants would include educationalists, students and people with lived experience. As a group, digital stories can be created that authentically capture the layers and detail of human experience.

As previously noted, authenticity involves the experience of reviewing and reconsidering previously held assumptions. It is perhaps assumed that values are fixed and shared with some certainty. Typically, the teacher selects the scenario with the objective of exploring a particular ethical or moral dilemma. Values, however, are unique to the individual, dynamic in nature and subject to individual interpretation. Individual attribution of a specific value is very much mediated by experience. If the teacher selects the scenario, the 'real-world' issue is somewhat manufactured, resting on the assumption that the students hold similar or the same concerns. However, personal values require expression and discussion. The role of the community becomes extremely important in ensuring one reality does not dominate. In the workshops, the facilitator predicted that the podcast (4) was likely to hold the strongest potency; however, it was the least favoured by the participant group. This signals the need for co-production in the selection of the teaching and learning materials if the dominant views of the teacher are not to overshadow the realities of the student nurses and service users alike. So, the process of selection, and perhaps also production, of the scenario has, to some extent, to replicate the values of authentic learning.

Recommendations

The study focus was to explore the value of digital storytelling through the prism of authenticity. The theory of authentic learning was exercised through digital stories as a stimulus learning tool to support the critical development of ethical values such as compassion and empathy, essential to all relational encounters in nursing activities.
The study evaluated the way in which authenticity was perceived by viewers and listeners of digital stories whose own stories, in some way, resonated with those being presented. When reflecting on the theorising of creating authentic learning experiences, we conclude:

• that authenticity can be manipulated to some extent. The word 'manipulated' is used purposefully as it is suggestive of a didactic approach to learning which, if employed, is likely to significantly dilute the potency of authenticity;
• that, therefore, the quality of authenticity is dependent on the way the media is produced and on who selects it;
• that the quality of authenticity is similar to a 'value' in that it is not 'present' unless (a) articulated and (b) contested;
• that, in visual stories, there is much more dependence on the quality of the visual cues (content, tone, realism of images), given the inability to check back as one would in a two-way conversation;
• that if the digital story is not experienced as authentic, it is largely discarded. Experiences were quite polarised, which may be interpreted as tangible evidence that the theory of authenticity was being applied.

Given these reflections on theorising the application of authenticity, the authors suggest two recommendations:

Recommendation one

Scenarios are selected and produced using a co-productive approach. Digital media must be selected carefully, not just in relation to the learning and teaching needs of the student group, but also in relation to social and cultural norms and the digital skills of the student group. Reflection on the study demonstrated that relevance of content is a necessity, but that it should be judged not by the teacher but by the participant group it concerns. A significant limitation of this enquiry was that the scenarios were selected by the teaching team, introducing bias in perspective and in preference for specific forms of digital media. Yet the co-productive approach to the methodology attempted to mirror the shifting of power from the teacher to the participant group, and produced some very different results from those anticipated as to which story presented the greatest authentic quality. Two workshops alone were not sufficient to work through the deeper learning in a meaningful manner. Conflict between participant members was felt, but relatively little was expressed. However, there was a real sense by the end of the second workshop that the respective groups were beginning to hear one another in a new way. Mutuality of trust and understanding was tentatively growing, and had further workshops taken place, this participant connectedness may have enabled values and beliefs to be challenged and explored safely. From this, there was potential for new stories to grow, created and finally narrated by the members of the participant group.

Recommendation two

Educationalists should demonstrate caution when using simulation to present scenario-based learning. The findings from the study suggest that the genre and form of digital media significantly influence the authentic quality of the learning experience. Whilst simulation is a relatively accessible form for producing materials, the power of drama, specifically drama which is professionally produced, as opposed to amateur simulated drama-based scenarios, superseded all the other digital media in its ability to provide an immersive quality that stayed with the participants for some days following the first viewing.
The experience of listening alone, offered by the podcast, was unfamiliar to the student participants, and their difficulties with focus and concentration obscured the potential for immersion in the scenario.

Conclusion

The aim of the enquiry was to generate more understanding about the relationship between digital media and authenticity in the learning journey of mental health nursing students. The enquiry provided a platform from which the relationship between the media, the subject matter and the elements of the narrative, such as length and genre, could be examined. Viewing the study through the prism of authenticity enabled new learning about the theory of authentic learning. The feedback from the participant group enabled insights into the construction of digital stories that can be applied in the development of future educational resources.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Margaret M. M. Conlon is a Lecturer and Field Lead of the Undergraduate Mental Health nursing programme at the University of Stirling. As a Senior Fellow of the Higher Education Academy since 2016, Margaret is a passionate teacher and experienced mental health nurse whose interests include developing contemporary digital teaching approaches in nursing education.

Dr Fiona Smart is an Associate Professor in Learning and Teaching and Head of the Department of Learning and Teaching Enhancement at Edinburgh Napier. She is a Principal Fellow of the HEA and an International HE Consultant.

Gwenne McIntosh is an experienced senior lecturer and mental health nurse with a keen interest in co-production in education and has research experience surrounding the impact of co-production in nurse education from a student and family carer perspective. Gwenne is a Senior Fellow of the Higher Education Academy (SFHEA), holds an MSc in Nursing and Applied Education and is a Registered Mental Health Nurse.
2020-05-07T09:15:16.705Z
2020-05-04T00:00:00.000
{ "year": 2020, "sha1": "3c949256c87ab64966fc675cabf982ba8b817ffe", "oa_license": "CC0", "oa_url": "http://dspace.stir.ac.uk/bitstream/1893/31257/1/Conlon%20Smart%20McIntosh.pdf", "oa_status": "GREEN", "pdf_src": "TaylorAndFrancis", "pdf_hash": "d7e778426364d435678680f7b6b9dada9dc6178f", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
237616057
pes2o/s2orc
v3-fos-license
Brain magnetic resonance imaging and cognitive alterations after ablation in patients with atrial fibrillation

Catheter ablation is an important non-pharmacological intervention for atrial fibrillation (AF), but its effect on the incidence of asymptomatic cerebral emboli and its long-term effects on cognitive function remain unknown. We prospectively enrolled 101 patients who underwent AF ablation. Brain magnetic resonance imaging (MRI) (72 patients) and neuropsychological assessments (66 patients) were performed 1–3 days (baseline) and 6 months after ablation. Immediately after ablation, diffusion-weighted MRI and 3-dimensional double inversion recovery (3D-DIR) detected embolic microinfarctions in 63 patients (87.5%) and 62 patients (86.1%), respectively. After 6 months, DIR lesions disappeared in 41 patients. Microbleeds (MBs) increased by 17%, and 65% of the de novo MBs were at exactly the same locations as the microinfarctions. Average Mini-Mental State Examination scores improved from 27.9 ± 2.4 to 28.5 ± 1.7 (p = 0.037), and detailed neuropsychological assessment scores showed improvement in memory, constructional, and frontal lobe functions. Ejection fraction, left atrial volume index and brain natriuretic peptide level improved from baseline to 3–6 months after ablation. Despite incidental microemboli, cognitive function was preserved 6 months after ablation.

Moreover, AF is related to a decrease in brain volume 16. Although these complex mechanisms may underlie the association between AF and dementia, no prospective studies to date have been conducted to determine the mechanism that leads to cognitive decline and dementia. The mainstay of treatment in patients with AF includes prevention of ischemic stroke, administration of anti-arrhythmic medications, and non-pharmacological intervention. Pharmacological treatment with rate- and rhythm-control showed no significant differences in cognitive function over a 3-year follow-up period 17. Catheter ablation is an important non-pharmacological intervention for AF. Although clinical data suggest that catheter ablation is superior to current pharmacological treatment, it is associated with procedural complications such as periprocedural transient ischemic attack (TIA) and cerebrovascular accidents 18. Moreover, the incidence rates of asymptomatic cerebral emboli associated with catheter ablation for AF have been reported to range from 1.7 to 38% in previous studies 19. The long-term effect of catheter ablation on cognitive function remains to be elucidated. Bunch et al. 11 retrospectively reviewed the medical records of patients with AF who underwent catheter ablation. They found a significantly lower incidence and risk of dementia, including both VaD and AD, in AF patients treated with catheter ablation than in those without ablation 11. On the contrary, a prospective study by Kochhauser et al. 20 found no significant differences in the neuropsychological findings before and immediately after or 6 months after ablation. However, the effect of catheter ablation on microembolic infarctions was not evaluated in that study, making it difficult to determine the net effect of ablation on cognitive function. In this prospective study, we evaluated whether catheter ablation has a protective effect on cognitive function by evaluating the incidence of cerebral microembolism as well as the neuropsychological outcome after catheter ablation in patients with AF.

Methods

Study protocol.
This prospective study was approved by the ethical review board of Mie University Hospital (permit number 3038), and all subjects provided written informed consent. All methods were performed in accordance with the Declaration of Helsinki. A total of 101 patients who were admitted to the hospital for AF ablation were recruited from the Department of Cardiology of Mie University Hospital between August 2018 and September 2019. We obtained clinical, laboratory, and imaging data corresponding to baseline and follow-up timepoints after catheter ablation. Patient characteristics including age, sex, hypertension, status of diabetes mellitus and dyslipidemia, history of stroke and/or TIA, and AF type were recorded. Cardiac function was assessed by measuring ejection fraction (EF) and left atrial volume index (LAVI) using transthoracic echocardiography, together with brain natriuretic peptide (BNP) levels, before ablation (baseline) and during the 3- to 6-month period after ablation (follow-up). For the measurement of LAVI, we used a biplane (apical 2-chamber and 4-chamber views) modified Simpson method (a standard form of this calculation is sketched after the ablation procedure description below).

Ablation procedure.

Catheter ablation was performed as described previously 21. An electrophysiological study was performed in the post-absorptive state under light sedation. Trans-esophageal echocardiography was performed just before ablation in all patients to exclude the possibility of LA appendage thrombus. After internal jugular and femoral vein punctures, a heparin bolus (100 U/kg) was administered, and continuous heparin infusion was provided thereafter to maintain an activated clotting time of 250–350 s. A diagnostic duodecapolar catheter was placed in the coronary sinus via the jugular vein. Three long sheaths were inserted through the femoral vein and introduced into the LA through a single transseptal puncture guided by intracardiac echocardiography. An eicosapolar circumferential catheter (Lasso 2515, Biosense Webster, Diamond Bar, CA, USA) and a multi-spline mapping catheter (PentaRay, Biosense Webster, Diamond Bar, CA, USA) were introduced into the LA through the transseptal long sheaths. All imaging was performed using a biplane flat-panel detector angiographic suite (Allura Xper FD10/10 angio system; Philips Healthcare, Best, Netherlands). Electroanatomical mapping was performed using the CARTO3 mapping system (Biosense Webster, Diamond Bar, CA, USA). Radiofrequency ablation was performed with an irrigated catheter (EZ Steer Thermocool, Biosense Webster, Diamond Bar, CA, USA) using 0.9% normal saline and a point-by-point technique. Extensive encircling pulmonary vein isolation (EEPVI) was performed in patients with paroxysmal AF, and entrance and exit blocks were documented in all cases using the Lasso 2515 and PentaRay multipolar catheters. In addition to EEPVI, patients with persistent AF received LA posterior wall isolation; additional linear ablation along the LA roof to connect the left superior pulmonary vein to the right superior pulmonary vein and linear ablation along the LA floor to connect the inferior margin of the left inferior pulmonary vein to the right inferior pulmonary vein were performed to gain a block into the posterior wall. Bidirectional block was confirmed across all linear ablations using differential pacing techniques. If common atrial flutter was induced by atrial burst or extra-stimulus pacing, cavotricuspid isthmus line ablation was performed in patients with both paroxysmal AF and persistent AF.
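The paper cites the biplane modified Simpson (method-of-disks) technique without reproducing the formula. For reference, the sketch below gives the standard biplane method-of-disks expression for left atrial volume (LAV), indexed to body surface area (BSA) to yield LAVI; this is the conventional textbook form, not an equation taken from the paper.

```latex
% Standard biplane method of disks (modified Simpson) for left atrial
% volume: the atrium is modelled as a stack of 20 elliptical disks whose
% orthogonal diameters a_i and b_i come from the apical 4- and 2-chamber
% views, with L the long-axis length (the shorter of the two views).
\[
  \mathrm{LAV} \;=\; \frac{\pi}{4}\,\frac{L}{20}\sum_{i=1}^{20} a_i\, b_i ,
  \qquad
  \mathrm{LAVI} \;=\; \frac{\mathrm{LAV}}{\mathrm{BSA}} .
\]
```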
MRI protocol.

The MRI studies were performed at 1–3 days (baseline) and 6 months after ablation (follow-up) with a 3 T MR unit (Ingenia, Philips Medical System, The Netherlands) using a 32-channel phased-array head coil. We used diffusion-weighted imaging (DWI), 3-dimensional (3D) fluid-attenuated inversion recovery (3D-FLAIR), 3D double inversion recovery (3D-DIR), and 3D T1-weighted imaging (3D-T1WI) to detect microemboli 22,23. Acute microinfarctions were diagnosed with DWI sequences, while chronic microinfarctions were evaluated with 3D-DIR, 3D-FLAIR, and 3D-T1WI sequences. To detect MBs, susceptibility-weighted imaging (SWI) was used.

Management of oral anticoagulants.

All patients were taking direct oral anticoagulants (DOACs) or warfarin before ablation. Warfarin was continued during the ablation procedure, whereas DOACs were omitted only on the day of ablation; anticoagulation was continued until 3 months after ablation in all patients. If there was no AF recurrence > 3 months after ablation, the CHADS2 score was calculated. If the CHADS2 score was 0 or 1, oral anticoagulants were stopped. If the CHADS2 score was 2 to 6, oral anticoagulants were continued because of the higher risk of cerebral infarction in these patients.

Neuropsychological assessment.

To quantify intellectual function, the Mini-Mental State Examination (MMSE) and the Japanese Raven's colored progressive matrices (RCPM) were administered. The RCPM measures not only intelligence but also the patient's performance time, which reflects psychomotor speed. Memory was evaluated using the Rivermead Behavioral Memory Test (RBMT), which consisted of immediate and delayed recall of a short story. The assessment of constructional ability was based on the method described by Strub and Black 24: a cube was shown to the examinees, who were asked to draw it, and the drawing was scored on one of 4 possible grades (0: poor, 1: fair, 2: good, and 3: excellent). The Mie Constructional Apraxia Scale (MCAS) was also used for the assessment of constructional visuospatial ability 25. The MCAS was devised to assess constructional disabilities by checking not only the shape of a drawn Necker cube but also the drawing process, with larger scores corresponding to more severe symptoms. Frontal function was assessed on the basis of two types of tasks: word fluency (WF) and the trail-making test A and B (TMT-A, TMT-B). The WF test consisted of two domains: category and letters. For the categorical WF test, subjects were asked to name as many animals/vegetables as possible in one minute. For the letter WF test, subjects were asked to name objects beginning with each of 4 phonemes (ka, sa, ta, and te); we used the average scores of these 4 phonemes for statistical analysis. It is generally accepted that the cognitive processing of categorical and letter WF is somewhat different; categorical WF is more reflective of memory function than letter WF 26. The TMT is a test of visual scanning speed with two parts. Part A consists of 25 circles numbered from 1 to 25 distributed on a piece of paper; the task is to "connect the dots" as quickly as possible. Part B consists of 25 circles with the numbers 1 to 13 and letters in sequence. The score corresponds to the number of seconds required to finish each part.

Statistical analysis.

For the analysis of differences in demographic characteristics, the Mann–Whitney U test and the χ-squared test were used.
Differences in neuropsychological scores and cardiac function between the baseline and follow-up timepoints were analyzed using the paired t-test or Mann–Whitney U test, and the improvement in MMSE score according to the type of AF and past history was evaluated using the Mann–Whitney U test. To test the correlations of neuropsychological test score changes with age, EF, LAVI, and BNP, Pearson's and Spearman's correlation coefficients were used. Differences with a p value < 0.05 were considered statistically significant. Statistical analyses were performed using IBM SPSS Statistics software version 24 (IBM Corp., Armonk, NY, United States; https://www.ibm.com/products/spss-statistics).
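Purely as an illustration, the sketch below mirrors the comparisons described in this subsection in Python rather than SPSS; the file and column names (ablation_cohort.csv, mmse_base, mmse_6mo, and so on) are hypothetical. Note that for paired baseline/follow-up data the Wilcoxon signed-rank test is the usual nonparametric counterpart of the paired t-test.

```python
# Minimal sketch (not the authors' SPSS analysis) of paired baseline vs.
# follow-up comparisons and correlation tests; all names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("ablation_cohort.csv").dropna(subset=["mmse_base", "mmse_6mo"])

# Paired comparison of a neuropsychological score before and after ablation.
t, p_t = stats.ttest_rel(df["mmse_base"], df["mmse_6mo"])
w, p_w = stats.wilcoxon(df["mmse_base"], df["mmse_6mo"])  # paired nonparametric
print(f"paired t-test p={p_t:.3f}, Wilcoxon signed-rank p={p_w:.3f}")

# Correlations of score change with clinical variables (Pearson for linear
# association, Spearman for rank-based association), as in the Methods.
change = df["mmse_6mo"] - df["mmse_base"]
for var in ["age", "ef_change", "lavi_change", "bnp_change"]:
    ok = df[var].notna()
    r, p_r = stats.pearsonr(change[ok], df.loc[ok, var])
    rho, p_s = stats.spearmanr(change[ok], df.loc[ok, var])
    print(f"{var}: Pearson r={r:.2f} (p={p_r:.3f}), "
          f"Spearman rho={rho:.2f} (p={p_s:.3f})")
```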
Results

Patient characteristics.

None of the patients showed neurological deficits immediately after ablation. At baseline, 100 patients underwent MRI and 92 underwent neuropsychological assessments. One patient could not undergo MRI because he had previously undergone coil embolization for a cerebral arteriovenous malformation. Nine patients were unable to undergo neuropsychological assessment because of lack of time for assessment before discharge. At 6 months after ablation, 72 patients underwent MRI and 66 underwent neuropsychological assessment. Twenty-seven patients dropped out, 2 declined to undergo MRI, and 8 declined the neuropsychological assessment (Fig. 1). We analyzed the data of a total of 74 patients who underwent MRI and/or neuropsychological assessment both at baseline and 6 months after ablation. Sixty-four patients underwent both MRI and neuropsychological assessment. Eight patients underwent only MRI because they did not have sufficient time to finish the neuropsychological assessment after ablation. Two patients experienced claustrophobia while undergoing MRI at baseline and underwent only neuropsychological assessment after 6 months. Cardiac function was evaluated in all patients before ablation and re-evaluated in 40 patients at 3–6 months after ablation. Thirty-two out of 72 patients were lost to follow-up because they were discharged to the care of their primary physicians at 1–2 months after ablation. At baseline, SWI detected 577 MBs. Follow-up SWI after 6 months detected 98 new MBs. Sixty-four of the 98 MBs (65.3%) were at exactly the same locations where microinfarctions had been found at baseline.

Correlation between the number of microinfarctions and ablation parameters.

The correlation between the number of microinfarctions and the duration of the ablation procedure was evaluated, where procedure duration was defined as the time from the start to the end of the ablation. A mild positive correlation was observed (p = 0.036, γ = 0.248) (Fig. 3D).

Figure 1. Study protocol. We recruited 101 patients; 9 were excluded because they were unable to complete neuropsychological assessment and/or brain MRI. Therefore, one hundred patients underwent MRI and 92 underwent neuropsychological assessment at baseline. At 6 months after ablation, 27 patients dropped out, 2 declined to undergo MRI, and 8 declined to perform neuropsychological assessment. Finally, 72 and 66 patients underwent MRI and neuropsychological assessment, respectively. MRI, magnetic resonance imaging.

Neuropsychological findings after ablation.

The MMSE score was 27.9 ± 2.4 at baseline. At 6 months after ablation, the average MMSE score improved significantly to 28.5 ± 1.7 (p = 0.037). RBMT (immediate and delayed recall: p < 0.001 and p < 0.01, respectively), MCAS (p < 0.01), and TMT-A (p = 0.001) scores improved significantly at 6 months after ablation (Table 2). To evaluate the variability of cognitive enhancement, we performed correlation analyses between baseline scores and the change values of MMSE, RBMT immediate recall, RBMT delayed recall, MCAS, and TMT-A, all of which showed significant improvements at 6 months after ablation. Patients with lower baseline cognitive function showed better improvement in almost all of these scores (p < 0.01, p = 0.016, p = 0.01, p < 0.01, and p < 0.01, respectively). We evaluated the improvement in neuropsychological scores according to history of hypertension, diabetes mellitus, dyslipidemia, and stroke/TIA. There was no significant difference in the changes of any scores between patients with and without hypertension, diabetes mellitus, dyslipidemia, or a history of stroke/TIA. Furthermore, we analyzed the difference in cognitive changes between persistent and paroxysmal AF, with and without AF recurrence, and with and without oral anticoagulant use at 6 months after ablation, but no difference was observed. Further, in 20 patients, the MMSE was evaluated 1 day before ablation, and the score was 25–30 (average score: 27.9 ± 1.8). We found no significant differences between MMSE scores evaluated before and immediately after catheter ablation (p = 0.14).

Cardiac function after ablation and correlation with neuropsychological findings.

EF values significantly increased (p = 0.025), whereas LAVI and BNP significantly decreased (p = 0.002 and p = 0.001, respectively) between baseline and 3–6 months after ablation (Table 2). We compared changes in cardiac function between patients with persistent and paroxysmal AF, with and without AF recurrence within 6 months after ablation, and with and without oral anticoagulant use at 6 months after ablation. No significant difference in cardiac function changes was observed. No correlation was observed between the EF percent increase and the changes in MMSE scores or other findings. However, there were positive correlations between LAVI percent reduction and WF (animal) changes (p = 0.04, γ = 0.331), between BNP reduction and WF (animal) changes (p ≤ 0.001, γ = 0.546), and between BNP reduction and RBMT delayed recall (p = 0.026, γ = 0.351) (Fig. 4). In addition, there was no correlation of any neuropsychological score change with whether or not patients showed LAVI/BNP improvement.

Discussion

Our prospective study indicated that ablation has beneficial effects on overall neuropsychological scores despite the incidental embolic microinfarctions caused by the procedure. After catheter ablation, there was an increase in EF and a decrease in LAVI and BNP, which may be attributable to improved cardiac function and might have led to a net beneficial effect on neuropsychological scores. Both prospective and retrospective studies have examined the relationship between AF and cognitive impairment 27. AF can become a risk factor for dementia even in the absence of embolic stroke. One plausible mechanism by which AF induces dementia may be chronic cerebral hypoperfusion. It has been suggested that chronic cerebral hypoperfusion is causally related to both AD and VaD 28,29. AF causes cerebral hypoperfusion through beat-to-beat variability and an overall reduced cardiac output owing to the lack of atrioventricular synchrony 30.
Indeed, cerebral blood flow (CBF) is significantly lower in patients with persistent AF than in those without 31. Furthermore, the CBF level of patients with paroxysmal AF lies between that of persistent AF and sinus rhythm 11, and electrical cardioversion can restore CBF in patients with AF 32. Cerebral hypoperfusion has been correlated with brain atrophy and dementia 33, and AF per se is also associated with brain atrophy, with a stronger association in persistent/permanent AF than in paroxysmal AF 16. The other possible mechanism by which AF induces dementia is recurrent microembolism, which may cause accumulation of cortical microinfarcts (CMIs) and, subsequently, dementia 34–36. Catheter ablation may potentially suppress the chronic incidence of microembolism in AF 37, and subsequent prevention of microembolism may have a beneficial effect on cognitive dysfunction. However, this possibility could not explain the improvement of neuropsychological scores encompassing overall cognitive domains in the present study, because small vessel diseases, including CMI, usually contribute to frontal lobe dysfunction 38. Notably, in the present study, improvements were not limited to scores related to frontal lobe function but were universally observed in every cognitive domain, including the temporal lobe. This result may indicate that improvement of cardiac output and subsequent CBF might have led to the recovery of neuropsychological scores in patients with AF.

CMIs are caused by various pathological factors such as microembolism, arteriosclerosis, and cerebral amyloid angiopathy 35. In our study, catheter ablation caused microinfarctions in 85% of patients. This result contradicts earlier published results in which microinfarcts were detected in < 30% of patients. The MRI protocol used in this study is very sensitive for the detection of microlesions: in our MR protocol, the DWI slice thickness was 3 mm with no gap, whereas previous studies used a DWI slice thickness of 5 mm with a 2 mm gap, so that microinfarcts 2–3 mm in size could go undetected. Nakamura et al. described that, compared with conventional DWI, thin-section (3 mm with no gap) DWI at 1.5 T permitted better lesion conspicuity and more precise stroke diagnosis 39. Moreover, we used a 3 T MR machine, which has a better signal-to-noise ratio than 1.5 T. Therefore, we consider that the higher percentage of patients with acute microinfarcts in this study was probably due to the high-resolution DWI employed.

Bergui et al. showed that silent embolic microinfarctions after AF ablation were more commonly found in the cerebral cortex 40. In our study, 80% of microinfarctions were found in the cerebral cortex. After 6 months, a significantly higher proportion of microinfarctions disappeared from the cerebral cortex than from subcortical regions (96.2% vs. 69.1%, respectively) on 3D-DIR images. This result is consistent with the findings of Terge et al., who evaluated the cumulative incidence of acute CMI and found that all acute CMIs disappeared on follow-up MRI (DWI, T1, FLAIR) 41. Similarly, Havsteen et al. showed that cortical lesions in TIA disappeared more frequently than subcortical ones and hypothesized that a strong leptomeningeal collateral circulation in the cortical gray matter may prevent signs of persistent infarction in small gray matter lesions 42. Another notable finding is the numerical increase in MBs on follow-up MRIs.
Previous studies reported a higher incidence of MBs in patients with AF than in those without 43, but the pathogenesis of MBs in AF and the relationship between MBs and cognitive function remain unclear. In our study, there was a 17% increase in the total number of MBs during the 6 months after ablation, and most de novo MBs corresponded with embolic microinfarctions detected at baseline. Previously, Ito et al. reported 3 cases of de novo lobar MBs transformed from a small cortical infarction 44. Our results imply that AF-related microemboli may cause CMI, subsequently leading to MBs.

Our present study has several limitations. First, it did not include a control group of AF patients who did not undergo ablation treatment. A memory clinic study showed a significantly higher prevalence of CMIs in the brains of AF patients 45, and therefore patients with AF may have had preexisting CMIs. Our study may indicate that most CMIs disappear and that only a small number remain. Second, our study did not perform pre-ablation MRI, so we cannot confirm that all new microinfarctions and MBs detected after ablation were caused by the ablation procedure. However, we conducted pre-ablation MRI in 13 out of 74 patients, none of whom showed new microinfarctions. Moreover, we could not perform pre-ablation neuropsychological assessment in all patients because of time constraints. We conducted a pre-ablation MMSE estimation in 21 patients, among whom we found no significant changes between pre-ablation and post-ablation MMSE scores. We also compared pre-ablation MMSE scores with 6-month follow-up scores in 16 patients (5 of the 21 patients who underwent pre-ablation MMSE dropped out after 6 months), and found that follow-up scores significantly improved (p = 0.023). Third, we did not monitor electrocardiography during neuropsychological assessments. Many patients have AF episodes early after ablation, so an influence of arrhythmia on neuropsychological scores at 1–3 days after ablation cannot be excluded. Similarly, the sedation and stress caused by the ablation procedure might also affect cognitive assessment scores, but none of the patients showed any signs of neuropsychological alterations, including delirium. As the average hospitalization period after an ablation procedure was 4–5 days, we performed the examinations 1–3 days after ablation and confirmed the absence of neuropsychological worsening between the pre-ablation period and immediately after ablation in 21 patients. If patients were too ill to perform neuropsychological assessment, we postponed it to the next day or excluded the patients from the study. Fourth, we could not measure cerebral perfusion by CBF single photon emission computed tomography or arterial spin labeling MRI techniques. Further studies are needed to investigate the incidence of embolic microinfarctions in patients with AF and CBF improvements after ablation.

In conclusion, our study showed preserved cognitive function at 6 months after ablation despite the incidence of embolic microinfarctions. The improvement of neuropsychological scores across all cognitive domains might indicate that ablation improves cognitive function by restoring cardiac function and mitigating chronic cerebral hypoperfusion.

Data availability

Data are available upon reasonable request from the corresponding author.
2021-09-25T06:17:01.305Z
2021-09-23T00:00:00.000
{ "year": 2021, "sha1": "e9cdc52891e71470bbf217d7eeb3b4fb62d4dafe", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-98484-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7cc6b0bb810cc2a9df1c6941d365f8bb3ab35caf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265745003
pes2o/s2orc
v3-fos-license
Comparative Safety and Effectiveness of Heterologous CoronaVac–ChAdOx1 versus Homologous CoronaVac Vaccination in a Real-World Setting: A Retrospective Cohort Study

Background: The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has outpaced vaccine availability and delivery from vaccine manufacturers, and thus a scarcity of vaccines affected many countries around the world. In Thailand, the mixing of different types of vaccines was approved and clinically implemented, partially due to concerns about the availability and efficacy of any single vaccine. Objective: This study aimed to investigate the effectiveness and safety of heterologous CoronaVac–ChAdOx1 nCoV-19 vaccination compared with the usual regimen of homologous CoronaVac–CoronaVac. A retrospective cohort study was conducted by dividing patients into the CoronaVac–CoronaVac group and the CoronaVac–ChAdOx1 group. Results: A total of 875 patients received vaccinations at Srisangwan Hospital between April and October 2021 and were included for analysis. The patients in both the homologous and heterologous groups had low rates of COVID-19 infection. In addition, the hospitalization rates in the 40 days after the second vaccination were low for both regimens. Minimal adverse events (AEs) were reported in both groups, including local AEs (e.g., discomfort at the injection site, rash, soreness, swelling, and redness) and systemic AEs (e.g., fever, headache, weariness, nausea, vomiting, diarrhoea, and myalgia). Moreover, several factors were associated with fewer adverse events following immunization (AEFIs), including age ≥ 50 years, male sex, and body weight ≥ 50 kg. In contrast, thyroid disease, diabetes mellitus, allergic rhinitis, and psychiatric disorders were independent risk factors associated with an increase in AEFIs. Conclusions: The heterologous CoronaVac–ChAdOx1 and homologous CoronaVac–CoronaVac regimens were promising vaccination strategies for the prevention of SARS-CoV-2 infection. However, the heterologous CoronaVac–ChAdOx1 regimen potentially caused fewer AEFIs than the homologous CoronaVac–CoronaVac regimen.

Introduction

The most significant scientific breakthrough in the fight against the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic was the speed of vaccine creation. Although the efficacy and safety of all approved vaccines were demonstrated in many large clinical trials, post-marketing surveillance in larger populations in routine clinical practice has recently been considered necessary [1]. The rapid escalation of COVID-19 outstripped vaccine availability and delivery time [2,3], and thus the availability of vaccines, especially in Thailand and other less industrialized countries, usually relied on the government's capability.

The first COVID-19 vaccine approved by the Thai Food and Drug Administration (FDA) and used in Thailand was CoronaVac from Sinovac Life Sciences, Beijing, China. It was developed using inactivated SARS-CoV-2 and was approved for people aged 18 to 59 years. A clinical study involving over 10 million Chilean people demonstrated 65.9% effectiveness after two doses of CoronaVac [3]. The second type of vaccine approved by the Thai FDA was the ChAdOx1 nCoV-19 vaccine from AstraZeneca, Oxford, United Kingdom. Unlike CoronaVac, this vaccine was developed using an adenoviral vector encoding the SARS-CoV-2 spike (S) protein, which was shown to elicit a substantial immune response in several phase II/III trials [4–6].
At the time the ChAdOx1 vaccine became available in Thailand, CoronaVac had already been distributed to a large number of Thai people and had become the fundamental vaccine in the country. Once the ChAdOx1 vaccine was rolled out, the majority of Thai people had received their first inoculation with CoronaVac and therefore had to choose the second dose from either CoronaVac or ChAdOx1. Based on the information available at that time, the Thai Advisory Committee on Immunization Practice recommended administering a second dose of ChAdOx1 following a first dose of CoronaVac, especially for individuals with severe comorbid conditions or those who experienced adverse events following immunization (AEFIs). These adverse events included body rashes, allergies, and vaccination-stress-related responses that required hospitalization after receiving the CoronaVac vaccine [7]. Moreover, this recommendation was informed by immunogenicity data from Yorsaeng et al. [8], who found that the heterologous CoronaVac–ChAdOx1 regimen produced antibody levels comparable with those resulting from two doses of ChAdOx1. As a result, in July 2021, the heterologous regimen was adopted in the national vaccination programme in Thailand, permitting a 3- to 4-week interval between the first dose of CoronaVac and the second dose of ChAdOx1 nCoV-19 [7,8].

One of the benefits of the heterologous CoronaVac–ChAdOx1-S regimen is the shorter interval compared with two doses of ChAdOx1-S; the waiting time can be reduced from 10 weeks to 4 weeks with similar antibody levels [9,10]. The immunogenicity and adverse effects of the heterologous CoronaVac–ChAdOx1 regimen compared with the homologous CoronaVac and homologous ChAdOx1 regimens were reported in a pilot study in 2021 [11]; 354 participants were recruited into four groups: a CoronaVac–ChAdOx1 group (n = 155), a homologous CoronaVac group (n = 32), a homologous ChAdOx1 group (n = 47), and COVID-19 patients (n = 120). The level of IgG antibodies against the receptor-binding domain (anti-SRBD) in the CoronaVac–ChAdOx1 group was higher at 2 weeks after the second (booster) dose than at 4 weeks. The anti-SRBD level in the CoronaVac–ChAdOx1 group at 4 weeks following the second dose was significantly greater than in the homologous CoronaVac, homologous ChAdOx1, and control groups (p < 0.001) [11]. These results were similar to the study of Wanlapakorn et al. [9]: the heterologous CoronaVac–ChAdOx1 regimen generated stronger SARS-CoV-2 RBD-specific antibody responses and neutralizing activity against the wild type and variants of concern than the licensed CoronaVac–CoronaVac vaccine schedule. However, despite these results, data on effectiveness and adverse events (AEs) in larger samples remain limited, emphasizing the urgent need for more data to support vaccination recommendations [8].

One of the major concerns about heterologous vaccination regimens, such as CoronaVac–ChAdOx1 or ChAdOx1–mRNA vaccines, is the safety profile. Although reported AEs were infrequent and mainly not serious [1,12], the interaction between two different types of vaccine was still unknown. Therefore, this study aimed to compare the effectiveness, safety, and risk factors associated with the AEFIs of homologous and heterologous vaccine regimens administered to healthy individuals; the homologous regimen was defined as two doses of CoronaVac, and the heterologous regimen was defined as a first dose of CoronaVac followed by a ChAdOx1 dose.
Materials and Methods

A retrospective cohort study was conducted at Srisangwan Hospital, a secondary hospital in Maehongson province, Thailand, from April 2021 to October 2021, during the period of the Delta variant COVID-19 pandemic. Due to the significant developments and changes in vaccine availability, as well as government strategies during the pandemic, the data in this study were obtained from April to October 2021 to emphasize the unique dynamics and challenges of that period in Thailand. This study was approved by the ethics committee on human research of the Maehongson Provincial Public Health Office in Maehongson province (MSH REC-001.2565), with a waiver of informed consent for retrospective data collection under the condition that the collected data were stored anonymously. All methods were performed in accordance with the relevant guidelines and regulations.

Participants and Treatment

Those eligible for inclusion were people aged 18 years or older who received both the first and second doses of the COVID-19 vaccine at Srisangwan Hospital between April and October 2021; only people whose first dose was the CoronaVac vaccine were included. The information on recruited people was collected from the Srisangwan Hospital database.

Immunization regimens were selected depending on the availability of vaccines at the time, together with the official recommendation of the Thai Advisory Committee on Immunization Practice; therefore, there were two groups of people in this study. The first group received the CoronaVac vaccine as a first dose and the CoronaVac vaccine three weeks later as a second dose. The other group received the CoronaVac vaccine as a first dose but received a second dose of the ChAdOx1 vaccine three weeks after the first dose.

Data Collection

Information was gathered from the recruited population using several approaches. General characteristics, including age, sex, drug allergy history, comorbid diseases, and the interval between first and second doses, were recorded from the patient database of the hospital. The incidence of COVID-19-positive cases and hospitalizations due to COVID-19 was monitored from day 0 (14 days after the second inoculation) up to day 40.

For the safety profiles, two surveillance systems were used in this study. First, active AEFI surveillance was performed in the hospital by healthcare professionals: people who received any vaccine had to stay in the hospital for 30–60 min for observation of any acute adverse effects. Second, if no adverse effect was reported in the hospital, recipients were asked to self-report any adverse effect that might occur at home via a mobile application called 'Mo Prom'. Mo Prom is a national AEFI surveillance application that reminds an individual to self-report any delayed adverse effect on days 1, 7, and 30 after inoculation. This study gathered information on adverse effects both from reports by healthcare professionals and from self-reports via the application. The safety information after the first and second doses of vaccination was analyzed in this study.

The adverse effects collected in this study were classified as AEFIs. They included the following information:

• The number of AEFIs.
• Age and gender differences in the distribution of AEFIs.
• The effects of AEFIs: the various symptoms or adverse events that people may encounter after being vaccinated.
• Causality of AEFIs as defined by the World Health Organization: the determination or assessment of whether a given adverse event is caused by the administration of a vaccine. The WHO's causality assessment typically involves the following categories: certain, probable, possible, unrelated, and indeterminate.
• AEs of special interest (AESIs), such as myocarditis and anaphylactic shock.

Statistical Analysis

Descriptive statistics were used to describe the general characteristics and basic information of people in the two groups, i.e., homologous (CoronaVac–CoronaVac) and heterologous (CoronaVac–ChAdOx1) first and second doses. The chi-squared test or Fisher's exact test (if fewer than five observations) was performed to compare basic characteristics for categorical data, while either an independent t-test or the Mann–Whitney U test was used for continuous data, depending on whether the data were normally distributed. A significance level of 0.05 was set for all analyses.

Due to an imbalance in baseline characteristics between the treatment groups, an inverse-probability-weighted (IPW) propensity score adjustment was performed to reduce potential bias. This technique is widely used to minimize potential confounding between treatment arms. The propensity score was calculated using multivariable logistic regression. The variables included in the propensity score calculation were body weight, hypertension, age, diabetes mellitus, history of drug allergy, and allergic rhinitis. The variables were selected for analysis based on their association with outcomes of interest and on baseline covariates with p-values of < 0.20.

The Cox proportional hazards regression model was used to investigate the effect of time on outcomes. Univariate Cox regression analysis was also used to investigate the variables associated with AEFIs. The adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) of related factors were evaluated by a full Cox proportional hazards regression model (inverse probability weighting using the propensity score for baseline covariate adjustment). A two-sided α of 0.05 was considered statistically significant for all analyses. We calculated the incidence rates of AEFIs per 1000 people within 30 days after vaccination. The data analysis was performed using Stata software, version 14 (Stata Corp, College Station, TX, USA).
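The analysis itself was run in Stata and is not shown in the paper. Purely as an illustration of the IPW approach described above, the sketch below builds a propensity score with logistic regression, converts it into stabilized inverse-probability weights, and fits a weighted Cox model; the dataset and column names (vaccine_cohort.csv, regimen, time_to_aefi, aefi, and the covariates) are hypothetical.

```python
# Illustrative sketch (not the authors' Stata code) of an IPW-adjusted
# Cox proportional hazards analysis; all names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("vaccine_cohort.csv")
covars = ["age", "body_weight", "hypertension", "diabetes",
          "drug_allergy", "allergic_rhinitis"]

# 1) Propensity score: probability of receiving CoronaVac-ChAdOx1
# (regimen = 1) given the baseline covariates listed in the Methods.
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["regimen"])
ps = ps_model.predict_proba(df[covars])[:, 1]

# 2) Stabilized inverse-probability weights: treated subjects get
# P(treated)/ps, controls get P(control)/(1 - ps).
p_treat = df["regimen"].mean()
df["ipw"] = (df["regimen"] * p_treat / ps
             + (1 - df["regimen"]) * (1 - p_treat) / (1 - ps))

# 3) Weighted Cox model for time to first AEFI; robust=True requests a
# sandwich variance estimator, appropriate for weighted observations.
cph = CoxPHFitter()
cph.fit(df[["time_to_aefi", "aefi", "regimen", "ipw"]],
        duration_col="time_to_aefi", event_col="aefi",
        weights_col="ipw", robust=True)
cph.print_summary()  # adjusted HR for the heterologous regimen
```

Stabilized weights (multiplying by the marginal treatment probability) keep the weighted sample size close to the original and usually behave better than raw 1/ps weights.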
Baseline Characteristics

A total of 1339 participants who received the COVID-19 vaccine at the hospital were screened, and 464 of them were excluded because they received ChAdOx1–ChAdOx1 vaccines (n = 450) or because of incomplete data (n = 14). Of the 875 participants analyzed, 430 received the homologous CoronaVac–CoronaVac regimen and 445 received the heterologous CoronaVac–ChAdOx1 regimen (Figure 1). The mean age was 40.20 ± 10.01 years in the homologous CoronaVac–CoronaVac group and 41.83 ± 14.03 years in the CoronaVac–ChAdOx1 group. A summary of the characteristics of people receiving the COVID-19 regimens is presented in Table 1.

Effectiveness and Safety

The incidence of positive cases and hospitalization due to Delta sub-variant COVID-19, collected from the 14th day after the second vaccination until the 40th day, is presented in Table 2 and Figure 2. No patient was admitted to the ICU, died, or needed mechanical ventilation after diagnosis of a breakthrough infection. Moreover, the results showed a lower risk of hospitalization within 40 days in people who received the heterologous CoronaVac–ChAdOx1 regimen compared with the homologous CoronaVac–CoronaVac regimen (aHR 0.13, 95% CI 0.02 to 0.82, p = 0.030) (Table 2). In addition, the CoronaVac–ChAdOx1 regimen was related to a significantly lower rate of AEFIs compared with the CoronaVac–CoronaVac regimen (aHR 0.42, 95% CI 0.32 to 0.54; p = 0.001). In particular, the CoronaVac–ChAdOx1 regimen was independently associated with a lower frequency of any local reactions (aHR 0.55, 95% CI 0.32 to 0.93, p = 0.027) and a lower frequency of any systemic reactions (aHR 0.29, 95% CI 0.20 to 0.41, p = 0.001) compared with the CoronaVac prime and boost regimen. Table 3 shows the association between AEs and COVID-19 vaccination after IPW propensity scoring.

Participants who received the homologous CoronaVac booster dose reported significantly more systemic side effects, such as headache (4.41 per 1000 persons), myalgia (6.07 per 1000 persons), and nausea and vomiting (1.31 per 1000 persons). For local reactions, the majority of side effects were pain, swelling, and redness at the injection site (Table 3). A slightly higher frequency of AESIs was observed in people who received the homologous CoronaVac–CoronaVac regimen compared with the heterologous CoronaVac–ChAdOx1 regimen; the higher rates included paraesthesia, chest tightness, and palpitation (Table 4). There was no significant difference in the severity of any reactions. Four participants with palpitations had normal EKGs, and their symptoms resolved after the administration of sublingual isosorbide dinitrate. One participant had an anaphylactic reaction and was successfully treated with adrenaline (Table 4).
Risk Factors Associated with AEFIs

The results of the univariable analysis indicated that age ≥ 50 years, male sex, and the CoronaVac–ChAdOx1 regimen were independent factors associated with fewer AEFIs. In contrast, a history of drug allergy, hypertension, dyslipidaemia, diabetes mellitus, allergic rhinitis, and asthma were independent risk factors associated with more AEFIs among the population receiving COVID-19 vaccination (Table 5). Using multivariable analysis, the CoronaVac–ChAdOx1 regimen was associated with fewer AEFIs than CoronaVac–CoronaVac (aHR 0.39, 95% CI 0.30 to 0.49, p = 0.001) after adjusting for the propensity score, which included age, body weight, hypertension, diabetes mellitus, history of drug allergy, and allergic rhinitis. Moreover, Figure 3 shows the hazard ratios for AEFIs due to COVID-19 vaccination classified by risk factors; similar results are shown in Figures 4 and 5.
Discussion

This study mainly evaluated the effectiveness and safety of two different regimens of COVID-19 inoculation: CoronaVac–ChAdOx1 compared with CoronaVac prime and CoronaVac booster doses. The results indicate that the heterologous CoronaVac–ChAdOx1 regimen was safe, well tolerated, and effective. Most participants reported only minor adverse effects. Additionally, the results show that the independent factors associated with decreased AEFIs were age ≥ 50 years, male gender, body weight ≥ 50 kg, and the CoronaVac–ChAdOx1 regimen, while the independent factors associated with increased AEFIs were thyroid disease, diabetes mellitus, allergic rhinitis, and psychiatric disease.

Recent evidence showed that the Delta variant of SARS-CoV-2 could escape immune protection and diminish the efficacy of the then-current regimens (two doses of the same vaccine type) [3,12,13]. Therefore, several studies suggested that heterologous vaccine regimens could be a solution, as a variety of vaccine regimens could increase the immune response when administered as boosters [8,13,14].

Based on the previous studies, together with the spread of the Delta variant, the Ministry of Public Health of Thailand highly encouraged people to be inoculated with the mixed CoronaVac–ChAdOx1 regimen, because the CoronaVac-only regimen provided a 51% efficacy rate against the Delta variant [15]. However, the safety profile of the CoronaVac–ChAdOx1 regimen needs to be studied further, especially in larger populations around the world. The minor adverse reactions in our study were mostly caused by vaccine reactogenicity, which relates to pyrogenic cytokines such as IL-1, IL-6, PGE2, and TNF-α [16]. There was no severe adverse event in either the CoronaVac or ChAdOx1 recipients, except in one case, who was suspected to have anaphylaxis from the second dose of the CoronaVac vaccine. In general, the adverse event profile was similar to the previous literature on inactivated and adenoviral-vectored COVID-19 vaccines [1,14]. This study found a slightly lower rate of adverse events than other studies [12,17], but further comparative studies are needed to confirm these findings.

Our study found that the rate of breakthrough COVID-19 infection from the 14th day after the second vaccination until the 40th day was 0.69% in people who received the heterologous CoronaVac–ChAdOx1 regimen and 0.67% for the homologous CoronaVac–CoronaVac regimen. The clinical efficacy of both COVID-19 vaccine regimens, i.e., homologous and heterologous, was promising. Patients who received either regimen had less severe symptoms in terms of mechanical ventilation needs and ICU admission. Moreover, this study showed a possible effect of the CoronaVac–ChAdOx1 regimen on reducing the rate of hospitalization due to COVID-19 infection. The phase III trials showed that ChAdOx1 was 76% effective [6] and CoronaVac was 50.8% effective [18] at preventing symptomatic infection, and both vaccines were 100% effective at preventing serious illness [6,18]. Efficacy in the real world may differ from the trials due to the heterogeneity of the people who received the vaccines. Moreover, the studies were conducted in different settings and with limitations. Therefore, the vaccine efficacy observed in this study was much higher than in the published data.
Although the efficacy of the vaccines could not be directly compared due to differences in population characteristics, infectious agents, and laboratory methods, results from real clinical settings are considered useful for assessing the relative effectiveness of each vaccine. In Thailand, the Delta wave had been ongoing since August 2021, and the current study was conducted during this period; the results therefore help to determine whether the Thai population should receive CoronaVac or ChAdOx1 as the second vaccine dose.

Focusing on the risk factors for adverse events, one study indicated that people who received heterologous COVID-19 vaccination were prone to a higher incidence of common vaccination-related adverse effects, such as fever [19]. The results of this study show no severe adverse effects, and the mild adverse effects were similar to those reported for homologous COVID-19 vaccination regimens. However, patients who received a heterologous COVID-19 vaccination were more likely to experience common vaccination-related adverse effects, such as pain, swelling, redness, and fever. In contrast with previous studies [17,20], the current study showed that the heterologous CoronaVac-ChAdOx1 regimen was associated with fewer AEFIs than the homologous CoronaVac-CoronaVac regimen. Both local and systemic AEs were reported more often with the CoronaVac-CoronaVac regimen in the current study. This difference might result from population-dependent vaccine effects, vaccination regimens, vaccine administration, and handling practices [21]. Moreover, several differences in study design, e.g., randomized controlled trial vs. observational study, immunization interval, and study population demographics, could explain this disparity [12].

Apart from vaccine type, the current study summarized the factors correlated with fewer and more AEFIs. These conformed to a prospective observational study in India [22] of people who received two doses of ChAdOx1. That study described several risk factors associated with an increase in AEFIs, including age < 40 years (adjusted odds ratio: aOR 1.40, p < 0.05), female gender (aOR 1.80, p < 0.001), hypothyroidism (aOR 2.76, p = 0.04), and hypertension (aOR 1.96, p = 0.02). Another study also reported female gender as a risk factor for systemic AEFIs following COVID-19 vaccination [23].

A possible hypothesis that might explain the effect of gender on AEFIs involves hormones that regulate cytokine levels and the immunological response to vaccination. It has been shown that women develop stronger neutralizing titres after vaccination than men. The higher immune response in females might result in a more severe response to the vaccines than in males [23-25]. More severe local and systemic adverse effects were reported in women who received trivalent inactivated seasonal influenza vaccination (TIV) [26], as well as most other pathogen vaccines [27]. In contrast, the level of testosterone was inversely related to TIV antibody titres, so men required more doses of TIV to achieve the same titre as women [28,29]. Laboratory results additionally indicated that female B cells produce more antigen-specific IgG [30]; thus, a difference between the sexes in the immune response to COVID-19 vaccines has been proposed [31].
Regarding the correlation between age and AEFIs, many studies suggested that younger people tend to have more AEFIs than older people due to stronger immunological responses [24,25]. Studies also provided evidence that levels of CRP, IL-10, and IL-6 cytokines were lower in the elderly, resulting in fewer systemic side effects [23]. For body weight, the hypothesis is that high body fat reduces adiponectin levels. Adiponectin is an adipocyte hormone that has been demonstrated to reduce macrophage activation and the production of pro-inflammatory cytokines such as TNF, IL-6, and NF-κB [32].

Interestingly, an association between thyroid disorders and immunization was found in several studies [33,34]. A proposed hypothesis for this phenomenon is that COVID-19 vaccination increases blood viscosity, leading to hyperviscosity [34]. Furthermore, hyperviscosity causes an abnormally high thyroid hormone level [35]. One case report described a female patient who developed sub-acute thyroiditis shortly after receiving the adenovirus-vectored COVID-19 (ChAdOx1) vaccine [36]. Likewise, another case report of sub-acute thyroiditis after inactivated SARS-CoV-2 virus (CoronaVac) vaccination was published [37]. However, further studies on thyroid function in healthy people and thyroid patients who receive the COVID-19 vaccine are needed [38].

Focusing on the allergic profile of the vaccines, CoronaVac, which contains the entire inactivated virus, an aluminum hydroxide adjuvant [20], and some mineral salts, was reported to cause urticaria in approximately 0.8% of vaccine recipients in Turkey [39,40]. The US Centers for Disease Control and Prevention (CDC) recommended that anyone who experiences an immediate allergic response within 4 h of vaccination should not receive the same vaccine again [41]; thus, heterologous vaccine regimens such as CoronaVac-ChAdOx1 could be an alternative for such people. Although the effectiveness and safety of the vaccine regimens in this study were not compared with unvaccinated people, for ethical and other reasons, the findings were consistent with previous studies and sufficient to advise the use of heterologous vaccines in people if needed [42].
There were some limitations of this study that should be noted. Differences in many baseline characteristics between the two groups were observed and might have affected the results. Although the IPW propensity score method and multivariable Cox regression analysis were used to adjust for these differences, readers should bear in mind that the results came from two groups of people who might have had somewhat dissimilar backgrounds. In addition, the study was conducted in only one centre, and thus the characteristics of the population in this study might not match those of other centres or areas; the results should be interpreted cautiously in case other populations have different reactogenicity to the vaccines. Because immunogenicity was not measured in this study, it cannot be concluded that the people recruited in this area had the same immune response as others. Moreover, this study primarily focused on the safety aspects of the different vaccine regimens. While it carefully evaluated vaccine safety and adverse events following immunization (AEFIs), it did not assess vaccine efficacy in terms of antibody titration and neutralization efficacy; this aspect should be considered in future research. A further limitation is that the only homologous regimen studied consisted of two doses of CoronaVac; the homologous regimen involving two doses of ChAdOx1 was not investigated. Exploring the homologous two-dose ChAdOx1 regimen holds research value for several reasons: while the ChAdOx1 vaccine showed promising efficacy and immunogenicity in clinical trials, investigating its safety and effectiveness in a real-world setting remains pivotal, and understanding its real-world performance in a homologous regimen would contribute to evidence-based decision-making in public health strategies. Lastly, regarding sample size, this was a small study with fewer than a thousand participants, although it included the largest number of people who received the heterologous CoronaVac-ChAdOx1 regimen in Thailand. Future studies should collect data from more people, especially now that more vaccines are available in Thailand. Moreover, the safety and efficacy of COVID-19 vaccines in special populations, including children, older patients, pregnant women, and lactating mothers, remain essential areas of study. While this study primarily focused on adult populations, future research should investigate the performance of different vaccination regimens in these vulnerable groups to ensure comprehensive protection against COVID-19. In addition, the ongoing evolution of the virus and the emergence of new variants underscore the need for continued vaccine development; research into vaccines that offer broad protection against a range of SARS-CoV-2 variants should also be emphasized. Finally, exploring novel vaccine platforms and technologies may provide innovative solutions for future pandemics.
Conclusions

This study found that a heterologous schedule of CoronaVac-ChAdOx1 was effective in preventing symptomatic COVID-19, with higher effectiveness against hospitalization, mainly due to the Delta variant. Moreover, administering the CoronaVac vaccine first and the ChAdOx1 nCoV-19 vaccine 3 weeks later could reduce vaccine-associated adverse effects. The factors associated with fewer AEFIs included age ≥ 50 years, male gender, and body weight ≥ 50 kg. Thyroid disease, diabetes mellitus, allergic rhinitis, and psychiatric disorders were the risk factors associated with an increase in AEFIs. The government immunization programme should consider the implementation of the heterologous schedule of CoronaVac-ChAdOx1, especially for individuals who have previously experienced AEFIs with other vaccine doses and during periods when the Delta variant of COVID-19 is circulating. The findings in this study underscore the significance of incorporating heterologous vaccination regimens as a primary option for the government immunization programme.

Figure 1. Flowchart of participants in this study.

Figure 2. The effectiveness of COVID-19 vaccination against COVID-19 infection after 14 days of second dose injection, as assessed using the log-rank test.

Figure 3. Hazard ratios for AEFIs after COVID-19 vaccination, as assessed using the multivariate Cox proportional hazards model (adjusted for propensity score, which included body weight, age, diabetes mellitus, hypertension, history of drug allergy, and allergic rhinitis).

Figure 4. Hazard ratio for any local reactions after COVID-19 vaccination, as assessed using the multivariate Cox proportional hazards model (adjusted for propensity score, which included body weight, age, diabetes mellitus, hypertension, history of drug allergy, and allergic rhinitis).
Figure 5. Hazard ratio for any systemic reactions after COVID-19 vaccination, as assessed using the multivariate Cox proportional hazards model (adjusted for propensity score, which included body weight, age, diabetes mellitus, hypertension, history of drug allergy, and allergic rhinitis).

Table 1. Demographic and clinical characteristics of the population who received COVID-19 vaccines. Note: One person might have had more than one comorbidity. Comparisons across the two COVID-19 vaccine regimens were performed using the chi-squared test or Fisher's exact test (if fewer than five observations) for categorical data, and an independent t-test for continuous data.

Table 2. Incidence of people testing positive for COVID-19 and hospitalization due to COVID-19 within 40 days of the second dose of COVID-19 vaccination. Crude HR, crude hazard ratio; adjusted HR, adjusted hazard ratio. * Incidence rate is a crude incidence expressed as events per 10,000 persons. ** Univariate Cox proportional hazards model; *** multivariate Cox proportional hazards model (adjusted for propensity score, which included body weight, hypertension, age, history of drug allergy, and allergic rhinitis).

Table 3. Frequency and hazard ratio of the adverse events reported by people who received different regimens of COVID-19 vaccine. Note: One person might have had more than one event. Incidence was reported within 30 days of either of the two doses. Crude HR, crude hazard ratio; adjusted HR, adjusted hazard ratio. * Incidence rate is a crude incidence expressed as events per 1000 persons. ** Univariate Cox proportional hazards model; *** multivariate Cox proportional hazards model (adjusted for propensity score, which included body weight, age, diabetes mellitus, hypertension, history of drug allergy, and allergic rhinitis).

Table 4. Adverse events of special interest (AESIs) reported within 30 days after two doses of vaccines.

Table 5. Univariate and multivariate analyses identifying risk factors of AEFIs among the population who received COVID-19 vaccination. Crude HR, crude hazard ratio; adjusted HR, adjusted hazard ratio. * Univariate Cox proportional hazards model; ** multivariate Cox proportional hazards model (adjusted for propensity score, which included body weight, age, diabetes mellitus, hypertension, history of drug allergy, and allergic rhinitis).
2023-09-07T15:06:57.674Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "4a17e29d578fa029456a85472cb24c6d53795fe3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/11/9/1458/pdf?version=1693878611", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "adc9f482fdde8cdfb255a98ba6c64a06f315321a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
103955405
pes2o/s2orc
v3-fos-license
Predicting Glass-to-Glass and Liquid-to-Liquid Phase Transitions in Water using Classical Nucleation Temperature

Glass-to-glass and liquid-to-liquid phase transitions are observed in bulk and confined water, with or without applied pressure. They result from the competition of two liquid phases separated by an enthalpy difference depending on temperature. The classical nucleation equation of these phases is completed by this quantity existing at all temperatures, a pressure contribution, and an enthalpy excess. This equation leads to two homogeneous nucleation temperatures in each liquid phase, the first one being the formation temperature of an ordered liquid phase below the melting temperature and the second one corresponding to the overheating temperature. Thermodynamic properties, double glass transition temperatures, and sharp enthalpy and volume changes are predicted in agreement with experimental results. The first-order transition line between fragile and strong liquids joins two critical points. The glass phase above its transition temperature becomes an ordered liquid phase, disappearing at a first-order transition temperature at low pressure and at a temperature larger than the melting temperature at high pressure.

Heating of bulk, amorphous water produces an endothermic event just below the crystallization temperature, occurring at around 136 K [17-19]. The glass transition is characterized by a specific heat jump preceding the occurrence of crystallization. Applying high pressure to ice reduces T_m and produces amorphous water. Sharp transformations, viewed as first-order transitions, are observed under pressure at 77 K, as well as at higher temperatures. These findings show that a bulk, amorphous liquid of low density can be transformed under pressure into a high-density amorphous liquid [6,9,18]. Three amorphous states have been identified under pressure: low-density amorphous (LDA); high-density amorphous (HDA); and very-high-density amorphous (VHDA) after quenching to 77 K. Transformations of LDA to HDA and VHDA are observed after pressure is increased up to 16 kbar, and after decompression down to residual pressures at temperatures lower than T_g. Some VHDA, HDA, and LDA samples have been studied after complete decompression, down to temperatures of 77 K [7,10,20]. HDA obtained after decompression down to 100 bar at a temperature of 77 K is also transformed by heating at ~140 K into LDA [7]. HDA is recovered by a new compression at p = 0.32 GPa.

Measurements of water confined within silica gel in 1.1 nm pores show the existence of a broad and high specific heat peak at 227.5 K (−45.6 °C), and two heat flow changes at 124-136 K and 163-173 K, indicating the presence of two glass transitions [21,22]. A pronounced minimum of compressibility is still observed in water at a temperature of +45.5 °C, which is symmetrical, with regard to T_m, to the transition at −45.6 °C at ambient pressure [23,24]. A sharp specific heat increase below 273 K (already equal to 30 J·K⁻¹·mol⁻¹ at 235 K) is still observed in bulk supercooled water at ambient pressure, down to the crystallization temperature [16,25]. This confirms the possible existence, in the absence of crystallization, of an LLPT at temperatures smaller than 235 K [21,26]. In the first model (of two liquids), these phenomena are attributed to the existence of a critical point leading to a line of first-order LLPTs [23,27-31].
The two liquids have the same chemical composition and contain low- and high-density species forming differently bonded domains. Such LLPTs form part of the general phenomenology for a wide range of liquids [32]. These ideas have successfully explained the existence of LLPTs, but are not able to predict glass thermodynamic properties, because they view them as a result of freezing, instead of a thermodynamic transition related to the difference of enthalpy of Phase 1 and Phase 2. The LLPT at 227.5 K looks like a first-order transformation of strong glass to fragile liquid [28].

In this paper, the glass transition is viewed as having a thermodynamic origin. There are many models describing it as a true phase transformation, and experimental evidence favors this interpretation. The glass transition is seen as a manifestation of critical slowing down near a second-order phase transition with the possible existence of several classes of universality [33]. A model predicting the specific heat jump is based on a percolation-type phase transition with formation of dynamical fractal structures near the percolation threshold [34-40]. Macroscopic percolating clusters formed at the glass transition have been visualized [40]. High-precision measurements of third- and fifth-order nonlinear dielectric susceptibilities lead to a fractal dimension d_F = 3 for the growing transient domains [41]. An observation of structural characteristics of medium-range order with neutrons and X-rays leads to d_F = 2.31 [42].

Another model, entirely based on thermodynamics, predicts the specific heat jump of strong and fragile glasses and liquid-to-liquid phase transitions [1,43]. For that, the classical nucleation equation is completed by introducing the enthalpy savings −ε_ls×ΔH_m, −ε_gs×ΔH_m, and −Δε_lg×ΔH_m, respectively associated with the growth critical nucleus formation leading to Phase 1 and Phase 2 above T_g and Phase 3 below T_g, where ΔH_m is the melting heat [43]. The enthalpy difference Δε_lg×ΔH_m, associated with the formation of vitreous Phase 3 below T_g, is then equal to (ε_ls − ε_gs)×ΔH_m. The coefficients ε_ls and ε_gs are linear functions of θ² = (T − T_m)²/T_m², as shown by studying supercooling rate maxima of liquid elements [44,45]. A positive sign of Δε_lg = (ε_ls − ε_gs) above T_g and T_m at a reduced temperature θ shows that Phase 1 is favored; a negative value would indicate that it is Phase 2 [1]. The first-order transition to a glass of confined liquid helium under pressure has been described using ε_ls0 = ε_gs0 = 0.217 [45]. This glass is ultrastable if there is no more enthalpy to relax in this state. These values of ε_ls0 and ε_gs0, determined in many pure liquid elements at their melting temperature (T_m), correspond to the Lindemann coefficient 0.103 [46]. The transformation temperature (T_sg) of fragile glasses into ultrastable phases with higher density has been defined as a function of an enthalpy excess Δε×ΔH_m frozen after quenching. The denser ultrastable glass attains its lowest enthalpy at a transformation temperature T_sg for a value of Δε equal to the frozen enthalpy of the glass below T_g [5]. Phase 3 can be transformed into polyamorphous phases producing sharp enthalpy changes at various temperatures (T_sg) depending on smaller Δε. The most-ultrastable phase known up to now is the fully-relaxed glass.
2 - Basic equations applied to water

The completed nucleation equation (1) describes the Gibbs free energy change ΔG per volume unit, associated with the formation of a spherical growth nucleus of radius R, where ε is a fraction of the melting enthalpy ΔH_m (equal to ε_ls for a nucleus of Phase 1, ε_gs for a nucleus of Phase 2, Δε_lg for a nucleus of Phase 3), V_m is the molar volume, and θ = (T − T_m)/T_m is the reduced temperature. The melting heat ΔH_m and T_m are assumed to be the same whatever the nucleus radius R is, and not dependent on R. The critical nucleus can give rise to Phase 1, Phase 2, glass Phase 3, or various LLPTs, according to the thermal variations of ε. The new surface energy is (1 + ε)×σ₁ instead of σ₁. The classical equation is obtained for ε = 0 [47]. The homogeneous nucleation temperatures are θ_n− = (ε − 2)/3 for θ < 0 and θ_n+ = ε for θ > 0 [1,43]. The critical radius is infinite at the homogeneous nucleation temperature obtained for θ = ε, instead of θ = 0 for the classical equation. A catastrophe of nucleation occurs at θ = ε for crystals protected against surface melting [48].

The coefficients ε_ls and ε_gs in equations (2) and (3) represent values of ε(θ) and lead to the nucleus formation having the critical radius for Phase 1 and Phase 2 supercluster formations under pressure:

ε_ls(θ) = ε_ls0×(1 − θ²/θ_0m²) + P_1,  (2)

ε_gs(θ) = ε_gs0×(1 − θ²/θ_0g²) + Δε + P_2,  (3)

where Δε is the coefficient of enthalpy excess in Phase 2 being frozen after quenching Phase 1; P_1 = (p − p_0)×V_m1/ΔH_m and P_2 = (p − p_0)×V_m2/ΔH_m are the contributions of the pressure (p) to the enthalpy coefficients ε_ls and ε_gs, and p_0 is the ambient pressure [5]. The coefficients ε_ls and ε_gs are equal to zero at the reduced temperatures θ_0m and θ_0g for Δε = 0 and P_1 = P_2 = 0, and these temperatures correspond to the Vogel-Fulcher-Tammann temperatures above and below T_g, respectively. Equations (2) and (3) are applicable at the homogeneous nucleation temperatures θ_n− in Phase 1 and Phase 2, respectively.

Equation (4) determines θ_n− for Phase 2, combining (3) with θ_n− = (ε_gs − 2)/3:

(ε_gs0/θ_0g²)×θ_n−² + 3θ_n− + 2 − ε_gs0 − Δε − P_2 = 0.  (4)

The solutions for θ_n− are given by (5):

θ_n− = [−3 ± (9 − 4(ε_gs0/θ_0g²)×(2 − ε_gs0 − Δε − P_2))^(1/2)]/(2ε_gs0/θ_0g²).  (5)

θ_n− of Phase 2 for the sign + is called θ_2, given by (5). Equation (6) determines the homogeneous nucleation temperature θ_n− for Phase 1, combining (2) at this temperature with θ_n− = (ε_ls − 2)/3:

(ε_ls0/θ_0m²)×θ_n−² + 3θ_n− + 2 − ε_ls0 − P_1 = 0.  (6)

The reduced homogeneous nucleation temperature θ_n− of Phase 1 under pressure in (7) is deduced from (6):

θ_n− = [−3 ± (9 − 4(ε_ls0/θ_0m²)×(2 − ε_ls0 − P_1))^(1/2)]/(2ε_ls0/θ_0m²).  (7)

θ_n− in (7) is called θ_1 for the sign +. The glass transition occurs at θ_g when ε_ls(θ) in (2) is equal to ε_gs(θ) in (3). θ_1 and θ_2 are equal to θ_g in strong glasses because ε_ls(θ_g) = ε_gs(θ_g) for Δε = 0 and P_1 = P_2 = 0. As water is a strong glass at low temperatures, the coefficients ε_gs0 in equation (8) and ε_ls0 in equation (9), deduced from equation (4) with P_2 = 0 and from equation (6) with P_1 = 0 and Δε = 0, are determined from the knowledge of θ_g, θ_0g, and θ_0m [43]:

ε_gs0 = (3θ_g + 2)/(1 − θ_g²/θ_0g²),  (8)

ε_ls0 = (3θ_g + 2)/(1 − θ_g²/θ_0m²),  (9)

where the reduced temperatures θ_0g and θ_0m are equal to −1 and −2/3, respectively, because the Vogel-Fulcher-Tammann temperatures are equal to 0 K below T_g and to T_m/3 (above T_g) for many pure, strong liquid elements [44]. With T_g = 136.6 K and θ_g = −0.5, ε_gs0 is equal to 0.66667 and ε_ls0 to 1.14286. The frozen enthalpy at T_g is equal to the minimum value −0.3704×ΔH_m of (ε_ls − ε_gs)×ΔH_m, obtained for ε_ls = 0 at θ = θ_0m = −2/3, without imposing any entropy constraint [15]. The heat capacity jump at T_g is equal to (dε_ls/dT − dε_gs/dT)×ΔH_m = 1.905×ΔH_m/T_m = 41.9 J·K⁻¹·mol⁻¹, in agreement with old measurements [18], as shown in Figure 1.
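A short numerical check of equations (8)-(9) and of the quoted heat-capacity jump, assuming the linear-in-θ² forms of equations (2)-(3) as reconstructed above; this is an illustrative sketch, not the authors' code.

```python
# Reproduce eps_gs0, eps_ls0, the frozen enthalpy coefficient, and the C_p jump,
# assuming eps_ls = eps_ls0*(1 - theta^2/theta_0m^2), eps_gs = eps_gs0*(1 - theta^2/theta_0g^2).
T_m, H_m = 273.1, 6000.0            # K, J/mol (ice melting point and heat)
theta_g = -0.5                      # T_g = 0.5*T_m, i.e. 136.6 K
theta_0g, theta_0m = -1.0, -2.0 / 3.0

# eps_ls(theta_g) = eps_gs(theta_g) = 3*theta_g + 2 at the glass transition
eps_gs0 = (3 * theta_g + 2) / (1 - theta_g**2 / theta_0g**2)   # -> 0.66667
eps_ls0 = (3 * theta_g + 2) / (1 - theta_g**2 / theta_0m**2)   # -> 1.14286

# Frozen enthalpy coefficient at theta_0m = -2/3, where eps_ls = 0
frozen = eps_gs0 * (1 - theta_0m**2 / theta_0g**2)             # -> 0.37037

# Specific heat jump at T_g: d(eps_ls - eps_gs)/dtheta * H_m / T_m
dC_p = (-2 * eps_ls0 * theta_g / theta_0m**2
        + 2 * eps_gs0 * theta_g / theta_0g**2) * H_m / T_m
print(eps_gs0, eps_ls0, frozen, dC_p)  # ~0.667, ~1.143, ~0.370, ~41.9 J/(K mol)
```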
The specific heat excess ΔC_p(T) = d(ε_ls − ε_gs)/dT×ΔH_m of the supercooled liquid only exists above the Kauzmann temperature, because the entropy excess of the supercooled liquid cannot be larger than the fusion entropy ΔS_m. ΔC_p(T) is used to evaluate the Kauzmann temperature (T_K) of supercooled water. The entropy excess (ΔS_m) of supercooled water is equal to ΔH_m/T_m = 6000/273.1 = 22 J·K⁻¹·mol⁻¹ between 227.5 K and T = 116.5 K, and between 119.7 K and 273.1 K. The Kauzmann temperature occurs at T_K ≈ 116.5-119.7 K.

For confined supercooled water in pores of radius R = 0.55 nm [21,22], θ_2 is equal to −0.167 (227.5 K) for P_2 = 0.8505 and Δε = 0 in equation (5). Using the Young-Laplace equation, p is equal to 2γ/R = 0.31 ± 0.02 GPa, with a value of the surface tension γ = 0.085 ± 0.005 J/m² at 227.5 K, extrapolated from its thermal variation above 250 K [23]. The enthalpy coefficient (P_2) is deduced to be close to 0.8505 with V_m2 ≈ 16.5×10⁻⁶ m³ and p = 0.31 GPa.

The enthalpy difference coefficient Δε_lg between vitreous Phase 3 and liquid Phase 1 is given by equation (10):

Δε_lg(θ) = ε_ls0×(1 − θ²/θ_0m²) − ε_gs0×(1 − θ²/θ_0g²) − Δε + ΔP.  (10)

The difference ΔP = (P_1 − P_2) = ΔV×p/ΔH_m is proportional to the volume change (ΔV) and to the pressure (p) at the transformation temperature; ΔP is equal to zero for ΔV = 0 in the absence of latent heat. The homogeneous nucleation temperature of Phase 3 also occurs for Δε_lg = 0 with Δε = 0, because Phase 1 and Phase 2 have the same homogeneous nucleation temperature θ_1 = θ_2 = θ_g. A sharp enthalpy difference between non-relaxed glass Phase 3 and fully-relaxed glass Phase 3 can be induced in all glasses below T_g for Δε_lg = 0 in (10), when an enthalpy excess coefficient Δε exists after rapid cooling, as already described for an ultrastable glass formation [5,10]. This enthalpy difference is equal to −2×Δε_lg(θ)×ΔH_m above θ_K for Δε = 0 and P_1 = P_2 = 0, because it cannot exceed the frozen enthalpy which is available at any temperature below T_g. The transformation temperature T_sg for a stable glass formation, given in (11), is equal to or larger than T_K and is also induced by pressure. It depends on ΔV and on the value of Δε at this temperature.

After decompression, the enthalpy change coefficients of supercooled water are represented in Figure 2. The line Δε_lg = 0 at the origin corresponds to a quenched glass phase containing a positive enthalpy excess, which is equal to −Δε_lg(θ)×ΔH_m above Line 1, because Δε_lg of the glass phase is negative for Δε = 0. The total enthalpy of this quenched phase is then equal to that of Phase 1. The non-relaxed glass phase is represented by Line 1 and the fully-relaxed phase at thermodynamic equilibrium by Line 2. A sharp, spontaneous transition is observed at 117 K during heating of the HDA phase [10], and this corresponds to an enthalpy coefficient excess Δε = 0.146 at θ = −0.5735, as shown in Figure 2. The latent heat measured using continuous heating is 757 ± 144 J·mol⁻¹ [6], and it corresponds to the relaxation of an enthalpy excess equal to 0.146×ΔH_m = 876 J·mol⁻¹. An isothermal relaxation at T_sg = 117 K would have to deliver a latent heat two times larger (equal to 1752 J·mol⁻¹), as shown in Figure 2. The sample volumes of HDA in Figure 3 have also been measured at p = 0 after about 3 hours of isothermal annealing [10]. In these conditions, a sharp volume change of 0.16×10⁻⁶ m³·g⁻¹ occurs at 115 ± 0.5 K, confirming the existence of spontaneous and high enthalpy relaxation of about 1752 J·mol⁻¹ from HDA to LDA phases.
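A back-of-envelope check of the Laplace pressure and the quoted P_2 value, using only the numbers given in the text (γ and V_m2 are the extrapolated/assumed values stated above):

```python
# Young-Laplace pressure in a 0.55 nm pore and the resulting P2 coefficient.
gamma = 0.085        # J/m^2, surface tension extrapolated to 227.5 K
R = 0.55e-9          # m, pore radius
p = 2 * gamma / R    # ~3.1e8 Pa = 0.31 GPa

V_m2 = 16.5e-6       # m^3/mol, molar volume of Phase 2
H_m = 6000.0         # J/mol, melting heat
P2 = p * V_m2 / H_m  # ~0.85, matching the quoted 0.8505
print(p / 1e9, P2)
```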
Following this analysis, this transition temperature T_sg is the first observation of a glass Kauzmann temperature (T_K), because spontaneous and sharp enthalpy relaxation is only possible above T_K. Latent heats are still produced at various temperatures (T_sg) depending on Δε, and they correspond to partial relaxations of Phase 3. This type of transition has already been observed in other glasses obtained by vapour deposition on substrates maintained at the temperature T_sg using very slow deposition rates [2,4], and has been described by the same model [5]. This analysis is based on the existence of two main glass phases (Phase 3), the first one being the non-relaxed classical glass phase with its frozen enthalpy equal to −0.37037×ΔH_m, and the second one the ultrastable glass phase, which is expected to be fully relaxed with a maximum enthalpy reduction equal to −0.2923×ΔH_m (at approximately T_K ≈ 117 K).

Sharp, exothermic latent heats are still observed in water after decompression of VHDA at 77 K from various pressures; see Figure 4 [20]. VHDA under pressure has a much larger density than HDA. The glass transition at 136.6 K after decompression is not detected in these samples (Figure 4). The sharp, exothermic latent heats observed below 136 K decrease the density and give rise to ice which contains orientational disorder instead of fully-relaxed Phase 3 [49]. The latent heat at the crystallization temperature of 164 K seems to depend on the preceding exothermic heat. The absence of a glass transition at 136.6 K confirms that the sharp transitions of polyamorphous phases at p = 0 [20] lead to amorphous ice resulting from molecular reorientation processes [49]. The enthalpy coefficient along Line 2 in Figure 2 cannot be attained without amorphous ice formation. Sharp exothermic latent heats are observed around T_sg = 125, 126, 130, 132, and 134 K, and are predicted to be equal along Line 2 to 1000, 876, 562, 383, and 217 J·mol⁻¹, respectively. The latent heats at 130, 132, and 134 K are in rough agreement with those observed in Figure 4, whereas those at 125 and 126.5 K are smaller because the annealing time at these temperatures is too short during continuous heating. There are two regions of water crystallization (at approximately 230-250 K and 135-165 K), confirming the existence of an LLPT between these two regions in all the samples studied.

Supercooled water undergoes a first-order phase transition that separates fragile from strong states. Fragile liquids have values of ε_ls0 given in (12) [43], where a = 1 leads to a specific heat excess ΔC_p(T) of the supercooled melt at the glass transition equal to 1.5×ΔH_m/T_m [50-52]. The reduced temperature θ_0m is given by (13) and is a double solution for (6). New parameters ε_gs0 and θ_0g are fixed at T_g and below T_g in equations (14) and (15) to give a double solution for (4) with a = 1, because a < 1 leads to a too-high nucleation temperature (T_1) in Phase 1; ε_gs0 is maximized by (14) and (15).

The first-order transition under Laplace pressure of fragile-to-strong water in confined space occurs at T_LL = 227.5 K, θ_LL = −0.167, with T_m = 273.1 K for the melting temperature of superclusters percolating in the glass state, as assumed in equation (1) [22]. The two temperatures where Δε_lg in equation (10) is equal to zero cannot depend on the pressure, because there is no volume change there. In the fragile state of water, these two reduced temperatures are θ_LL = ±0.16705 and they are symmetrical with regard to T_m in Figure 5.
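A quick numerical check of the symmetry just noted, converting the two reduced temperatures θ_LL = ±0.16705 back to absolute temperatures:

```python
# The two temperatures where the enthalpy difference coefficient vanishes,
# symmetric about T_m.
T_m = 273.1
theta_LL = 0.16705
T_low  = T_m * (1 - theta_LL)   # ~227.5 K (-45.6 C): fragile-to-strong LLPT
T_high = T_m * (1 + theta_LL)   # ~318.7 K (+45.6 C): compressibility minimum
print(T_low, T_high - 273.15)
```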
For Δε_lg = 0 above T_m, there is a compressibility minimum of bulk water at 45.6 °C, because Phase 2 replaces Phase 1 without volume change at this temperature, as shown in Figure 5 [23,53,54]. Then, the first-order transition of fragile-to-strong liquid does not depend on the pressure at θ_LL = −0.167. The value a = 1 used in Figure 5 at θ_LL = −0.167 [22] is due to an LLPT [23,27-30]. This increase is not only observed at zero pressure, but also under a Laplace pressure of 0.31 ± 0.02 GPa in 1.1 nm pores, slightly increasing with decreasing temperature [22,55-57]. In Figure 5, the LLPT at 227.5 K and zero pressure is accompanied (during heating) by an exothermic latent heat of (−1.5 + 1.07102 + 0.42298)×ΔH_m = −0.006×ΔH_m associated with Phase 1 and Phase 3 transformations, because glass Phase 3 is transformed into liquid Phase 3 at T_g and continues to exist as a liquid phase above T_g. As expected [22-28], a critical point seems to exist for p = 0 in these conditions.

3 - First-order transformations LDA-HDA under pressure

High pressure applied to ice samples, followed by complete decompression, induces an LDA phase which has a density and, consequently, an enthalpy close to that of ice. The LDA phase is viewed as having an enthalpy difference of −0.3704×ΔH_m with that of the glass phase at all temperatures T < T_g. In the LDA-to-HDA transformation involving high pressures, an enthalpy excess (Δε) due to Phase 1 quenching is frozen, because the melting temperature (T_m) is strongly decreased and the sample is cooled during decompression from temperatures much higher than T_m. In Figure 6, compression experiments on this LDA phase at various temperatures transform it into an HDA phase at a well-defined pressure (p). This sharp transformation is also viewed as an HDA-to-LDA transformation, because this first-order transition is reversible at T_sg. The volume change ΔV in (10) does not depend on the pressure (p) and is equal to 0.2×10⁻⁶ m³·g⁻¹, as shown in Figure 6 [8]. All values of the various quantities are given in Table 1. The sharp enthalpy changes under pressure (p) are equal in magnitude to |P_2|×ΔH_m = 0.3704×ΔH_m and occur at θ = θ_sg, as given by equation (11). LDA is viewed as having the enthalpy of non-relaxed glass Phase 3 and an effective enthalpy excess Δε_eff, and it is expected to have an enthalpy difference −P_2 with HDA for T_sg < T < T_g. This LDA-HDA first-order transformation is subject to an enthalpy constraint setting that the total enthalpy increase at equilibrium cannot be larger than the maximum frozen enthalpy 0.3704×ΔH_m (produced at θ = −2/3). In Figure 7, the values of Δε_eff = Δε + P_1 given in Table 1 (obtained using (10) for θ = θ_sg and represented as a function of θ_sg) depend on pressure via P_1, which is the initial enthalpy change under pressure associated with the glass volume change below T_g, before the occurrence of the first-order transition. The values of Δε_eff at the temperature θ_sg are negative because P_1 is negative. The enthalpy excess Δε in the absence of pressure varies from 0.3704 at T = 77 K to zero at 136.6 K, as shown in Table 1. The enthalpy excess depends on the reduced temperature below θ_g, as has already been observed in hyperquenched glasses below T_g [58-60]. The values of P_1 are deduced from the difference Δε_eff − Δε. The melting temperatures (T_m) under pressure are assumed to be those of hexagonal ice [9].
They lead in water to the maximum change of the enthalpy coefficient Δε_lg = 0.3704 at the transformation temperature T_sg for 0.31 < p ≤ 0.6 GPa. The first-order transformation of LDA-HDA takes into account the entropy constraint, which could not be respected for an enthalpy relaxation (Table 1).

In Figure 8, the transformation under pressure starts from Line 3 and leads to HDA, including Δε_eff on Line 1, which is characterized by an enthalpy increase equal to the frozen enthalpy (0.3704×ΔH_m) below T_g = 136.6 K. The volume change (ΔV = 0.3704×ΔH_m/18/p = 0.206×10⁻⁶ m³·g⁻¹ for p = 0.6 GPa) is constant under various pressures (p) and equal to the experimental value presented in Figure 6. The enthalpy difference coefficient (P_2 = −0.3704) is the sum of Δε_eff and P_1. P_1 is negative and proportional to the applied pressure, in agreement with Figure 6. The slope dΔε_eff/dp corresponds to the measured value (dV/dp = 0.21 cm³·g⁻¹·GPa⁻¹) of the LDA phase [13, Figure 14]. The LDA-to-HDA transformation, occurring for p = 0.35 GPa in the interval 130-140 K, is reversible when the pressure is decreased down to p = 0.05 GPa [8]. This reversibility further proves its first-order character. The enthalpy change induced at T_sg is recovered near T_g ≈ 0.5×T_m after decompression down to 0.05 GPa. The enthalpy excess (Δε_eff×ΔH_m) is fully recovered at T = T_sg and p = 0, because the pressure changes the enthalpy from Line 1 to Line 3 in Figure 8 and decreases the volume by a constant quantity which is recovered after decompression.

Figure 8: Enthalpy excess coefficients Δε_eff versus pressure p and sharp enthalpy coefficient changes Δε_lg at θ = θ_sg. 1 - Line 1 is the HDA line represented by Δε_eff versus p (GPa). 2 - Line 2 represents the change Δε_lg = −0.3704 = P_2, which is equal to Δε_eff + P_1 given in Table 1; P_1 is proportional to p. 3 - Line 3 is the LDA line with a slope corresponding to dV/dp = 0.206 cm³·g⁻¹·GPa⁻¹. Calculated points roughly correspond to Mishima's measurements reproduced in Figure 6 at T = 77 K, for p = 0.6 GPa instead of 0.55 GPa, for p = 0.5 instead of 0.45 GPa, for p = 0.42 instead of 0.38 GPa, and for p = 0.35 GPa instead of 0.32 GPa [8].

A relaxation of the enthalpy excess Δε has been observed [22] for water confined in 1.1 nm pores submitted to Laplace pressure, and this is reproduced in Figure 9. "The systematic heat-evolution and heat-absorption effects for the rapidly and slowly cooled samples are characteristic of a glass transition, and two transitions are found" between the ranges 124-136 K and 163-172 K. The glass transition of all LDA phases above T_sg is given by (5), where Δε − P_2 = 0 and ε_gs0 is replaced by ε_gs0 + P_2 = 0.66667 + 0.3704 = 1.037, because P_2 results from the first-order transition at T_sg instead of relaxation. It is equal to θ_g = −0.3677 and T_g = 0.632×T_m. With T_m = 273.1 K, the T_g of confined water is expected to occur at 173 K. Δε = −Δε_lg disappears at θ_g = −0.5 (T_g = 136.6 K) when Δε_lg = 0. The relation Δε_eff = Δε + P_1 shows that Δε_eff = P_1 occurs at T ≈ 136.6 K. This equality occurs in Figure 9 for P_1 = −0.3704/2 = −0.185. The Laplace pressure at T ≈ 136 K is then equal to 0.30 GPa, in good agreement with 0.31 GPa at T_LL = 227.5 K. The glass transition calculated by equation (5) occurs at θ_g = −0.5 (T_g = 0.5×T_m) at low pressures in the absence of an LDA-HDA transition.
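A short sketch solving the quadratic form of equation (5) (as reconstructed above, with θ_0g = −1) to show how the shifted ε_gs0 reproduces the two quoted glass-transition temperatures; this is an assumption-based reconstruction, not the authors' code.

```python
import math

def theta_g_from_eq5(eps_gs0, d_eps=0.0, P2=0.0):
    # eps_gs0*theta^2 + 3*theta + 2 - eps_gs0 - d_eps - P2 = 0, root with '+' sign
    a, b, c = eps_gs0, 3.0, 2.0 - eps_gs0 - d_eps - P2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(theta_g_from_eq5(0.66667))           # -0.5   -> T_g = 0.5*T_m (136.6 K)
print(theta_g_from_eq5(0.66667 + 0.3704))  # -0.368 -> T_g = 0.632*T_m (~173 K for T_m = 273.1 K)
```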
The HDA phase has a larger enthalpy, and the new glass transition temperature of HDA is still given by (5), with T_g = 0.632×T_m depending on the melting temperature (T_m) under pressure. For p = 0.6 GPa, T_m = 154 K and T_g ≈ 97.4 K; for confined water in 1.1 nm pores, T_g = 173 K, which is in agreement with Figure 9. The first-order LDA-HDA transition at T_sg under pressure is accompanied by an entropy change equal to 0.3704×ΔH_m/T_sg = f×ΔH_m/T_m, where f is the fraction of the fusion entropy at the melting temperature under pressure which is recovered at the glass transition T_gS:

f = 0.3704×T_m/T_sg.  (16)

Values of f are given in Table 1. The entropy (ΔS) of HDA at T_gS in Table 1 is counted from T_gS to the melting temperature T_m using the specific heat deduced from dΔε_lg/dT, because the LLPT has disappeared above p = 0.31 GPa (see Section 3). Some glass transitions (under pressure, at θ_gS and T_gS, calculated using the entropy constraint) are given in Table 1, and they roughly equate to the values θ_g and T_g calculated using equation (5). The uncertainty on ΔS values is estimated to be 8.6% from the difference between ΔS = 7.59 J·K⁻¹·mol⁻¹ at θ_g and ΔS = 5.7 J·K⁻¹·mol⁻¹ for p = 0.6 GPa. The first-order transition of HDA to LDA induces an enthalpy decrease equal to −0.37037×ΔH_m and then a very stable glass state up to T_g = 136.6 K. In addition, a spontaneous enthalpy relaxation of this state is observed at, or above, T_K in order to attain the enthalpy of the ultrastable state after isothermal and complete relaxation of the frozen enthalpy at T_sg = T_K. This complete relaxation gives rise to orientational disorder in ice instead of an ultrastable glass phase [49].

Recent studies have reported on in-situ structural characterization of LDA after decompression and relaxation between 96 K and 160 K by synchrotron X-ray diffraction [61]. An intermediate crystalline phase at 100 K, prior to complete amorphization at 133 K, is observed. These results show that LDA exists under various forms depending on the relaxation temperature, because any phase having a fusion entropy smaller than ΔH_m/T_m = 6000/273.1 = 22 J·K⁻¹·mol⁻¹ can be condensed at a temperature equal to or larger than its own Kauzmann temperature. Another publication classifies HDA as a "derailed" state along the ice Ih to high-density ice IV pathway [62]. These two papers show that the same volume changes that characterise LDA and HDA lead to various phases, including "derailed" states, depending on relaxation time and temperature before attaining ice. Nevertheless, relaxed LDA has an enthalpy still smaller than that already frozen below T_g. Its enthalpy is so close to that of ice that its vitreous state can be transformed, by relaxation, through various "derailed" states on the pathway leading to the formation of ice [49].

4 - The water phase diagram and the critical points under pressure

There is no first-order transition anymore at a critical point. By applying equations (2) and (3) and assuming Δε = 0 and P_1 − P_2 = 0, Lines 1 and 2 at a critical point in Figure 5 are shifted by P = P_1 = P_2 under pressure. In Figure 10, the LLPT line, θ_LL = −0.167 (T_LL = 0.833×T_m), extends from P = −0.5000 to P = 0.8505. A reduced temperature is used because it reduces the number of figures and may apply to the melting temperature of any ice phase. The critical points are determined assuming that glass Phase 3 continues to exist as a liquid phase when heated above θ_g.
A complementary volume change, corresponding to the HDA-VHDA transformation, is observed at 125 K under higher pressures (approximately p = 0.95 GPa) [11, Figure 1]; it is frozen after decompression at 77 K and equal to 0.0855×10⁻⁶ m³·g⁻¹. This transition is due to superheating of Phase 3, which disappears at the second homogeneous nucleation temperature, where θ_n+ = Δε_lg and T_n+ = 1.302×T_m. It is accompanied by an enthalpy increase equal to 0.302×ΔH_m = 1812 J·mol⁻¹ in the strong liquid and a volume change ΔV = 1812/18/p ≈ 0.1×10⁻⁶ m³·g⁻¹ (which agrees with [11]). Equation (17) gives the value of θ_n+ for all glass phases as a function of their initial enthalpy coefficient, where ε_ls0 = 1.14286, ε_gs0 = 0.66667, θ_0g = −1, θ_0m = −2/3, and P = −0.3704 for LDA. The melting temperature (T_m) of ice Ih under 0.95 GPa is deduced to be equal to 125/1.302 = 96 K, which is in agreement with Mishima's measurements [9].

The existence of a melting temperature above T_m due to Phase 3 superheating suggests that any liquid Phase 3 is "ordered" above T_g and T_m. The existence of an "ordered liquid" state has already been suggested above T_g and above T_m in Zr41.2Ti13.8Cu12.5Ni10Be22.5 [63,64]. The temperature T_n+ is observed from 1090 to 1150 K, and equation (17) predicts 1116 K [1]. Another glass-forming melt (Zr58.5Cu15.6Ni12.8Al10.3Nb2.8) is ordered below 850 K. Its glass transition temperature is 700 K and T_m = 1125 K. Using equation (12) and a = 1 because ΔC_p(T_g) = 1.5×ΔH_m/T_m, one finds θ_g = −0.378, ε_ls0 = θ_g + 2 = 1.622, and 1.5×θ_1 = θ_g; the homogeneous nucleation temperature T_1 in Phase 1 is calculated to be equal to 842 K. A specific heat peak is observed in Figure 11 at this temperature, which shows that liquid Phase 1 is ordered below its homogeneous nucleation temperature [65]. The reference for the specific heat data is [66].

There is no first-order transition anymore above P = 0.8505, because the homogeneous nucleation temperatures of strong Phase 1 and Phase 2, calculated using equations (5) and (7), reappear above θ_LL, as shown in Figure 10. Another point without first-order transition occurs for P_1 = 0.006, because the sum of the latent heats of Phase 1 and Phase 3 at θ_LL is nearly equal to zero, as seen in Figure 5. These two points occur at fixed values of p_2/ρ_2 and p_1/ρ_1 = 2×10⁴, where ρ is the density in kg·m⁻³ and p the pressure in Pascal. The pressure p_1 is equal to 18.3 MPa, where ρ_1 = 915, and p_2 = 0.31 GPa, where ρ_2 = 1093, which is in rough agreement with other calculations [23,27,29,30] and with density measurements under pressure [67-69]. Liquid Phase 3 exists above T_g, and this explains the presence of a point at a low pressure of approximately 18.3 MPa where the first-order transition disappears without being the end of the first-order transition line. The corresponding pressure slightly depends on the initial choice of T_g, but it is approximately equal to P = 0 for T_g = 135 K instead of 136.6 K. The specific heat increase at zero pressure below T_m proves that the LLPT is always present for P < 0.006 and exists at negative pressures down to P = −0.500. Phase 3 disappears with the glass transition for P = −0.500, when Line 1 and Line 2 in Figure 5 are shifted by −0.500. This third point is critical because it corresponds to the other extremity of the first-order transition line, and it confirms that the stability limit of the two metastable water phases occurs for p = −175 MPa, assuming a density ρ = 950 kg·m⁻³ [70].
This last critical point also corresponds in Figure 5 to the highest exothermic enthalpy under pressure and thus to the maximum density at negative pressure [70,71]. Assuming zero pressure, equations (2), (3), and (10) indicate that the first-order transformation during heating could be endothermic between P = 0.006 and P = 0.8505 and exothermic for −0.5 < P < 0.006, adding the Phase 1 and Phase 3 contributions to the latent heat (see Figure 5).

Conclusions

The thermodynamic parameters of two water phases (Phase 1 and Phase 2), separated by an enthalpy difference depending on θ² = (T − T_m)²/T_m², have been determined knowing only the formation temperature of a strong glass in Phase 3 (T_g = 136.6 K), the first-order LLPT at −45.6 °C in confined water under pressure, the compressibility minimum at +45.6 °C, the ice melting heat ΔH_m = 6000 J·mol⁻¹, and the melting temperature of 273.1 K.

The LDA phase of strong glass contains an enthalpy excess below T_g = 136.6 K resulting from quenching. Consequently, below T_g = 136.6 K, a sharp exothermic latent heat is observed through relaxation on heating after total decompression at 77 K, and it can be predicted to occur at temperatures (T_sg) in agreement with experimental results. The maximum relaxed enthalpy cannot be higher than its value at the Kauzmann temperature, even if the frozen enthalpy is equal to −0.3704×ΔH_m at θ = −2/3 without entropy constraint. The enthalpy excess present in the bulk glass at the transformation temperature leads to partially-amorphous ice instead of fully-relaxed ultrastable glass, because the LDA density is too close to that of ice. LDA exists under various forms, because any ice having a fusion entropy smaller than 22 J·K⁻¹·mol⁻¹ is crystallized at a temperature equal to or larger than its own Kauzmann temperature. A sharp volume increase from HDA to LDA has been measured at 115.5 K. This transformation temperature corresponds to the Kauzmann temperature T_K = T_sg ≈ 115.5 K. This is the first observation of the Kauzmann temperature of a glass.

Supercooled water is a fragile liquid above a liquid-to-liquid phase transition (LLPT) at T_LL = 0.833×T_m and is transformed into a strong liquid below T_LL. The first-order character of the LLPT disappears for three pressures, equal to approximately −175 MPa, 18.3 MPa, and 310 MPa. Glass Phase 3 disappears for p = −175 MPa because there is no glass transition anymore below this negative pressure. For the first time, it has been shown that glass Phase 3 is transformed into a new liquid phase above T_g and that the two liquids separated below T_LL are liquid Phase 1 and liquid Phase 3 for −0.175 GPa < p < +0.31 GPa. Along the LLPT line, the first-order transition could be exothermic on heating from −175 MPa to 18.3 MPa and endothermic from 18.3 MPa to 310 MPa.

Double glass transitions are expected under pressure at T_sg and T_g when the glass enthalpy is still enhanced by an enthalpy excess. The glass transition occurs at T_g = 0.5×T_m at low pressure (p < 0.31 GPa) and at 0.632×T_m for HDA under high pressure (0.31 < p < 0.6 GPa). The first-order LDA-HDA phase transitions at T_sg under pressure can be predicted, leading to constant volume and enthalpy changes. The predictions correspond to the formation of a new glass Phase 3 with an enthalpy increase equal to the maximum frozen enthalpy (0.3704×ΔH_m) available at T = 91 K (θ = −2/3).
This enthalpy change is no longer limited by its value at the Kauzmann temperature, because this glass phase is induced by a first-order transition; it has a maximum entropy reduction at T_sg which is equal to the available entropy below its own glass transition. The entropy and enthalpy changes at T_sg are expected to be recovered in the "no man's land" at this new T_g. Phase 3 does not disappear but continues to exist as an "ordered" liquid phase above T_g. Ordered liquid Phase 3 is superheated above T_m and disappears at the liquid homogeneous nucleation temperature T_n+. This transition is accompanied by a sharp volume decrease under pressure. VHDA is identified as being formed due to the melting of this ordered liquid Phase 3.

All these theoretical findings, using classical nucleation theory completed by an enthalpy difference between two liquids, are fully compatible with the experimental results (without introducing any complementary parameter to ensure the fit). The existence of an ordered liquid above T_g and T_m, suggested by other authors, can now be confirmed without knowing the nature of its microscopic order at the atomic scale. Ordered phases have to exist in all glass-forming melts, giving rise to various glass phases. This work was based on the prediction of homogeneous nucleation temperatures T_n− of various liquid and glass phases, and it is suggested that these are the formation temperatures of new ordered phases of superclusters followed, after subsequent cooling, by the percolation threshold of dynamical fractal structures above T_g. All these new "ordered phases" still have superheating temperatures at the second homogeneous nucleation temperature T_n+ above T_m.
2019-04-09T13:06:45.728Z
2017-07-01T00:00:00.000
{ "year": 2017, "sha1": "390cd5c81af6d94fbb2eaa0da9173b64128e3e8e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1707.01442", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "390cd5c81af6d94fbb2eaa0da9173b64128e3e8e", "s2fieldsofstudy": [ "Physics", "Chemistry" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
92792848
pes2o/s2orc
v3-fos-license
Effects of varenicline and cognitive bias modification on neural response to smoking-related cues: a randomised control trial

Drug-related cognitive biases have been positively associated with drug-craving and increased likelihood of relapse. Cognitive bias modification paradigms have been developed to attenuate cognitive biases, but there have been few studies that examined neural responses to these paradigms. This study compared neural responses following CBM and explored whether CBM effects were potentiated by varenicline administration. This was a double-blind placebo-controlled study with two between-subject factors of drug (varenicline, placebo) and CBM (attend towards smoking cues, train away from smoking cues, control training) that recruited daily (≥ 10 cigarettes per day) non-treatment-seeking smokers. Participants (n = 67, 53% female) were randomised to one week of drug administration (varenicline or placebo) before attending a study session at which they were randomised to CBM condition and underwent an fMRI scan where they were presented with smoking and neutral cues. Neural response to smoking (vs. neutral) cues, cognitive bias, craving, and mood were assessed. There was no evidence of CBM effects on any outcomes. There was evidence of effects of varenicline on craving, with greater reductions in craving in the week preceding the study session in the varenicline group (p = 0.04, ηp² = .06). There was also evidence of a drug by CBM interaction for neural responses (z = 3.78, p < 0.001). Compared to placebo, varenicline was associated with greater activation in the right temporal middle gyrus in the CBM control condition, compared to an opposite effect in the CBM "attend towards" condition. These data suggest that CBM does not modify cognitive bias, subjective craving and mood, or neural response to smoking cues. There was also no evidence that CBM effects were potentiated by varenicline.

Consequently, reduction in cognitive bias is a potential target for therapeutic intervention. There is evidence that it is possible to reduce cognitive biases using computer-based cognitive bias modification (CBM) paradigms that "train" individuals to allocate attention away from disorder-relevant cues. CBM has been shown to reduce cognitive biases and has also been associated with reduction in other symptoms such as low mood (Baert, De Raedt, Schacht, & Koster, 2010). Attwood and colleagues (Attwood, O'Sullivan, Leonards, Mackintosh, & Munafo, 2008) reported decreased cognitive bias in a group of smokers following one session of stimulus-avoidance CBM using a modified dot probe task. Compared to a group who had been trained to attend to smoking cues, there was evidence that the avoid group also showed attenuated craving in response to in vivo smoking cues in a subsequent cue exposure test (male participants only). A subsequent study in tobacco smokers found similar decreases in cognitive bias following CBM, but did not observe generalisation of these effects to other relevant behaviours (e.g., cigarette craving) or novel (untrained) stimuli (Field, ...).

There has been growing interest in the development of combination drug-behavioural therapies, in which a drug is used to augment the outcomes of a behavioural intervention (Swerdlow, 2012). This may offer a solution to the low efficacy and reliability of CET effects, if a suitable pharmacological agent can be identified.
The smoking cessation pharmacotherapy varenicline acts as a partial agonist of the α4β2 nicotinic acetylcholine receptor and aids cessation by reducing cigarette craving and withdrawal symptoms. However, it has also been associated with a reduction in cue-...

The current study replicated earlier work by examining the effects of CBM on behavioural measures of cognitive bias (visual dot probe and modified Stroop), and extended the work in two important ways. First, we examined whether 7-day pre-treatment with varenicline enhanced the effects of CET on smoking cue reactivity and attentional bias. Second, using fMRI, we examined the neural responses to smoking cues following treatment. Neuroimaging studies suggest that drug-related cognitive biases are the result of a failure of cognitive regulatory systems to increase control in the presence of salient cues that increase processing in the reward and emotional centres of the brain (e.g., striatum, amygdala) (Hester & Luijten, 2013).

... presented. Before and after each block, a crosshair was presented for 5 s. Participants were then asked to rate cigarette craving on an 8-point scale ("none at all" to "extreme"). The scale was presented for 10 s, followed by a crosshair for another 10 s. Thus, the total interblock interval was 25 s. The sequence of events was controlled using E-Prime version 2 software (Psychology Software Tools Inc., Pittsburgh, PA), and total task time was approximately 10 min.

For error data, three participants were identified as outliers in the pre-CBM condition and one participant was identified as an outlier in the post-CBM condition. These data were removed from the main analysis. After data removal, error data were not normally distributed and a square root transformation was applied to these data. ... craving from session one (pre-drug) to session two (post-drug) in both drug groups, but this effect was larger in the varenicline group (see Table 2). For cigarette craving VAS ...

Taken together, these findings support a benefit of varenicline on tonic craving and neural response to smoking cues (which may be driven by the craving effects). While the effects of varenicline may be small, they are meaningful given that the dosing regime delivered in the study is substantially lower than the clinically prescribed dose (i.e., 1 week compared to a standard 12-week course ...).

There are some limitations of this study that should be considered when interpreting these findings. First, our sample size was small for the analysis of interactions. Our planned recruitment of 72 participants was achieved, but not all participants were tested to completion, and our final sample was lower (n = 67 for subjective and cognitive data; n = 64 for fMRI data). We also had a computer malfunction for one of the conditions that was not identified until data were extracted. We had to replace a number of participants in one CBM condition (avoid), and therefore these individuals were tested outside of the randomisation sequence. We do not, however, expect that this had a substantial effect on outcomes, as these individuals were tested in close time proximity to the rest of the sample. Furthermore, the researchers collecting data were not aware of the reason for additional recruitment, and therefore double-blinding was maintained.
Third, our study recruited non-treatment-seeking smokers, and it is plausible that effects of CBM may be stronger in individuals seeking treatment.

This study investigated neural responses to smoking cues following varenicline and CBM treatment. There was little evidence of neural effects of either drug or CBM. However, there was evidence of reductions in craving among smokers who completed one week of varenicline treatment. Drug by CBM interactions were exploratory due to small sample sizes, but we observed an interaction on right temporal gyrus activity. Specifically, varenicline appeared to attenuate cue-related activity in the right temporal …
2019-04-03T13:09:02.361Z
2018-12-20T00:00:00.000
{ "year": 2018, "sha1": "ddd963d530b7e0590e44263591715838aa7b41d9", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2018/12/20/480566.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "98c990330d0065e7d8967824583df731fa8599bd", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13765844
pes2o/s2orc
v3-fos-license
Methylmercury Uptake into BeWo Cells Depends on LAT2-4F2hc, a System L Amino Acid Transporter

The organic mercury compound methylmercury (MeHg) is able to target the fetal brain. However, the uptake of the toxicant into placental cells is incompletely understood. MeHg strongly binds to thiol-S containing molecules such as cysteine. This MeHg-L-cysteine exhibits some structural similarity to methionine. System L plays a crucial role in placental transport of essential amino acids such as leucine and methionine and thus has been assumed to also transport MeHg-L-cysteine across the placenta. The uptake of methylmercury and tritiated leucine and methionine into the choriocarcinoma cell line BeWo was examined using transwell assay and small interfering (si)RNA mediated gene knockdown. Upon the downregulation of large neutral amino acids transporter (LAT)2 and 4F2 cell-surface antigen heavy chain (4F2hc), respectively, the levels of [3H]leucine in BeWo cells are significantly reduced compared to controls treated with non-targeting siRNA (p < 0.05). The uptake of [3H]methionine was reduced upon LAT2 down-regulation, as well as methylmercury uptake after 4F2hc silencing (p < 0.05, respectively). These findings suggest an important role of system L in the placental uptake of the metal. Comparing the cellular accumulation of mercury, leucine, and methionine, it can be assumed that (1) MeHg is transported through system L amino acid transporters and (2) system L is responsible for the uptake of amino acids and MeHg primarily at the apical membrane of the trophoblast. The findings together can explain why mercury, in contrast to other heavy metals such as lead or cadmium, is efficiently transported to fetal blood.

Introduction

One essential function of the human placenta is to accomplish the exchange of nutrients, gases, and metabolites between the mother and the fetus. The placental tissue is a barrier separating the maternal and fetal blood stream. Any substance that crosses the maternal-fetal interface from the maternal to the fetal side has to pass the outer syncytiotrophoblast (STB), the underlying cytotrophoblast (CTB), and the fetal endothelial cells (FECs). The initially complete cytotrophoblast layer becomes discontinuous as pregnancy progresses, resulting in just two continuous layers (STB and FECs) in the term placenta.

Pre-Experiments

The BeWo cell culture protocol was optimized to study the uptake of MeHg, [3H]methionine, and [3H]leucine into placental cells in transwell plates (Figure 1A,B). Immunoblots (LAT1 and 4F2hc) and quantitative polymerase chain reaction (qPCR) (LAT2) confirmed efficient siRNA mediated knockdown under these conditions (Figure 1C). It has to be noted that, upon LAT1 downregulation, 4F2hc expression is also markedly reduced. Commercial LAT2 antibodies (Table 1) were proven unable to detect the protein [20].

First we conducted a time course experiment to compare the uptake of MeHg and the amino acids; BeWo cells were exposed to two MeHg doses (0.9 µM, 2.0 µM) and to 1 µCi/mL tritium-labeled methionine and leucine, respectively, for 10 min, 30 min, and 60 min (Figure 2A,B). The 0.9 µM dose was selected as a multiple of 0.03 µM MeHg, which is equivalent to about 6 µg/L, representing a physiological concentration [8]. As expected, mercury levels were twofold higher in cells exposed to 2.0 µM MeHg than in those exposed to 0.90 µM MeHg. Compared to the rather continuous uptake of amino acids, the uptake of methylmercury, particularly at the 2 µM dosage, occurred in a non-linear manner. It was strongest during the first ten minutes, followed by a steady uptake for a further 20 min, and then uptake increased significantly again.
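The contrast between "continuous" amino acid uptake and non-linear MeHg uptake can be made concrete by comparing per-interval uptake rates. A minimal sketch with hypothetical readings (the numbers below are illustrative stand-ins, not the study's data):

```python
import numpy as np

# Hypothetical cumulative uptake readings (arbitrary units) at 0, 10, 30 and
# 60 min, standing in for the MeHg and amino acid time courses described above.
t = np.array([0.0, 10.0, 30.0, 60.0])            # minutes
mehg = np.array([0.0, 1.8, 2.4, 4.1])            # fast-slow-fast (non-linear)
leucine = np.array([0.0, 0.9, 2.6, 5.2])         # roughly constant rate

for name, y in [("MeHg (2 uM)", mehg), ("[3H]leucine", leucine)]:
    rates = np.diff(y) / np.diff(t)              # mean uptake rate per interval
    cv = rates.std() / rates.mean()              # high CV flags non-linear uptake
    print(f"{name}: interval rates = {np.round(rates, 3)}, CV = {cv:.2f}")
```

A near-zero coefficient of variation across intervals corresponds to the steady amino acid accumulation, while the MeHg profile produces a large CV, matching the fast-slow-fast pattern described above.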
The incubation time for all following experiments was set at 60 min. At that time point, the cells had accumulated detectable concentrations of all substrates (Figure 2A,B). As shown in Figure 2C,D, BeWo cells exposed to 2 µM MeHg (i.e., 180 µg/L) for one hour accumulate mercury to non-cytotoxic levels as far as indicated by unchanged cell numbers (Figure 2E).

Forskolin-Induced BeWo Cell Fusion Increases LAT1 and LAT2 Expression

The Forskolin-induced fusion of BeWo cells was used to analyze changes of system L expression during differentiation (Figure 3A). Both LAT1 and LAT2 expression significantly increased upon 48 h of Forskolin treatment in relation to dimethyl sulfoxide (DMSO) treated controls, an effect confirmed by immunoblotting for LAT1 (Figure 3B).

LAT2 and 4F2hc Downregulation Reduces Mercury Uptake into BeWo Cells

Adding MeHg to apical compartments upon LAT2 and 4F2hc silencing resulted in significantly decreased mercury content of the BeWo cells (76% and 58%, respectively) in relation to the controls (Figure 4A). No such effect could be detected when methylmercury was added to the basal compartment (data not shown). The basal to apical permeability determined by Lucifer Yellow paracellular transport was 5.2 ± 1.7% (n = 8) and was approximately twice as high as that from apical to basal (3.4 ± 1.3%, n = 8). The ratio thus, as expected, correlated to the ratio of basal to apical volumes of media (2:1).
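A brief note on why these directional leakage percentages are expected to track the compartment volumes. Assuming the readout reflects the receiver-compartment concentration relative to the donor stock (an assumption about the assay readout, not stated explicitly above), a passive paracellular leak with direction-independent permeability P, insert area A and incubation time t gives:

```latex
% Sink-condition approximation for short incubations:
\frac{C_\mathrm{receiver}}{C_\mathrm{donor}} \approx \frac{P A t}{V_\mathrm{receiver}}
\quad\Longrightarrow\quad
\frac{(\mathrm{basal}\to\mathrm{apical})}{(\mathrm{apical}\to\mathrm{basal})}
= \frac{V_\mathrm{basal}}{V_\mathrm{apical}} = \frac{1.0\ \mathrm{mL}}{0.5\ \mathrm{mL}} = 2
```

Under this reading, the same argument with donor and receiver swapped accounts for the ≈1:2 ratio reported below for the mannitol permeability controls.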
LAT2 and 4F2hc Downregulation Reduces Methionine and Leucine Uptake into BeWo Cells

LAT2 and 4F2hc downregulation resulted in the significantly reduced uptake of leucine (46% and 71%, respectively) and methionine (61% and 74%, respectively) when amino acids were added to the apical chamber (Figure 4B,C). No such effect was seen when the amino acids were added to the basal compartment (data not shown). In LAT1 downregulated BeWo cells, a trend for lower leucine uptake was observed. The permeability determined by paracellular mannitol transport was 2.1 ± 0.5% (n = 6) in experiments examining apical to basal leucine transport. The basal to apical permeability was 4.5 ± 0.9%. With regard to methionine transport, apical to basal permeability was 2.2 ± 0.4% (n = 6), and basal to apical permeability was 5.0 ± 1.3% (n = 6). The ratio of permeability thus, as expected, correlated with the ratio from apical to basal volumes of media (1:2).

Discussion

The concept of a placenta barrier suggests that a placental cell, first and foremost the STB, is able to distinguish between essential nutrients that have to be transported to the fetal blood stream and unwanted substances that should not reach the fetal circulation. It is, however, evident that the toxicant mercury in the form of MeHg-L-cysteine is recognized by system L when expressed at the blood brain barrier or in Xenopus laevis eggs (e.g., [13,14]). The question arose whether the toxicant is transported in the same way as amino acids across the human placenta. While placental amino acid transport is comparatively well understood [11,21], our knowledge on placental mercury transport still is incomplete. The aim of the present study was to address the role of placental system L amino acid transporters in MeHg uptake into BeWo cells, a trophoblast transport model endogenously expressing system L. It has to be noted that BeWo cells are mostly mononuclear (if not stimulated to fuse in vitro) and thereby model the undifferentiated trophoblast rather than the syncytiotrophoblast. As human primary trophoblast cells start to differentiate rapidly after plating and form syncytia in a discontinuous manner [22], they are rarely used in transwell studies. However, in a recent report, a validated model of a confluent human primary trophoblast monolayer has been proposed [23]. Previous findings [8,9,13,14,24-26] suggest that MeHg transport across barriers depends on cysteine, is stereo-selective (MeHg is transported in presence of L-cysteine but not in presence of D-cysteine), and is carrier-mediated by system L and system b0,+.
In vitro demethylation to mercuric mercury is implausible as, in humans, MeHg is slowly metabolized to inorganic mercury, predominantly by the intestinal microflora at a rate of about 1% of the body burden per day and to some extent also in phagocytic cells. It is therefore to be expected that most of the MeHg added to cell culture medium is present as monovalent cation (CH3Hg+) rapidly bound to ligands due to the high affinity of mercury ions to sulfhydryl group-containing molecules. Although it is likely that a substantial part of MeHg is bound to cysteine in the extracellular space as well as to glutathione in the cytosol, the respective amounts of MeHg compounds present in the extra- and intracellular compartments have not been quantified so far. It has to be noted that the amino acid composition of the serum (FCS) we added to cell culture medium is not provided by the manufacturer. In order to demonstrate the inhibition of MeHg transport after the inactivation of system L subunits, we studied the transport of methionine, leucine, and methylmercury in parallel. Methionine was included because MeHg-L-cysteine structurally mimics the amino acid part of the molecule [10], while leucine specifically reflects system L activity as the amino acid is mainly transported by system L [11]. In this work, we provide evidence that MeHg is transported across BeWo cells through system L amino acid transporters.
Dose and Time Dependent Uptake into BeWo Cells

BeWo cells accumulate mercury to levels directly proportional to MeHg dosages. The mercury levels do not reach equilibrium during the first hour of exposure. Primary human trophoblast cells reach a steady state in mercury accumulation after about four hours [8]. Both methionine and leucine reach a steady state at around 30 to 60 min. The findings are in accordance with the time course of histidine uptake through LAT1 [27]. Simmons-Willis et al. [14] suggested MeHg-cysteine to be a better substrate for system L than endogenous amino acids, as they observed higher Vmax values for MeHg-L-cysteine than for methionine transport through LAT1 and LAT2. Nonetheless, this finding cannot explain why BeWo cells still accumulate mercury while amino acid levels are already in a steady state. System L activity in combination with other amino acid uniporters/exchangers obviously tightly regulates intracellular methionine and leucine levels, while MeHg, once in the cell, dissociates from cysteine to bind to other intracellular ligands, e.g., glutathione or metallothionein [28], and thus is no longer under the control of amino acid transporters.

LAT1, LAT2, and 4F2hc Downregulation Does Not Affect BeWo Cell Number

The validation of siRNA-mediated gene knockdown by immunoblotting showed that the silencing of LAT1 results in the downregulation of 4F2hc. LAT2 knockdown could be confirmed on the mRNA level as commercial LAT2 antibodies (Table 1) were shown to be unable to detect the target protein [20]. We observed BeWo cell numbers to remain unaffected by the downregulation of any of the system L subunits (data not shown). In addition, no effects on cell morphology such as shrinking could be observed by visual inspection (inverted light microscope). Amino acid supply is crucial for cell growth and proliferation [29]. Gene targeting of Slc3a2 (4F2hc) in conventional knockout mice is embryonically lethal as it is obligatory for murine embryogenesis [30]. 4F2hc was shown to play a role in tumorigenesis in renal cancer cell lines [31] and in the skin homeostasis of Slc3a2 conditional knockout mice [32]. A global homozygous knockout of Slc7a5 (LAT1) in mice was also embryonically lethal. The heterozygous Slc7a5 knockout animals, however, had no overt phenotype, suggesting that its function in mTOR-S6K signaling is sufficiently compensated by, for instance, LAT2 [33]. A loss of LAT1 results in tumor growth inhibition [34]. In contrast to 4F2hc and LAT1, the Slc7a8 (LAT2) knockout mouse did not apparently differ from the wild type mouse, apart from a mild aminoaciduria [35]. Overall, we conclude that our experimental model (i.e., a transfection period of five to seven days; Figure 1A) does not mimic the long-term effects of system L subunit inhibition.

Forskolin-Induced BeWo Cell Fusion Increases LAT1 and LAT2 Expression

Our observation that Forskolin induces the up-regulation of LAT1 is in accordance with previous reports in BeWo cells [36]. Elevated levels of LAT1 might be essential for the formation of the syncytiotrophoblast since a recent study has demonstrated impairments in trophoblastic fusion in LAT1 knockdown mice as well as in LAT1 deficient BeWo cells [37].

LAT2 and 4F2hc Downregulation Reduces Uptake of Methylmercury, Leucine and Methionine into BeWo Cells

BeWo cells accumulate significantly less mercury (76% and 58%) upon LAT2 and 4F2hc silencing relative to controls.
This finding is, in principle, in accordance with other reports showing the transporter subunits to be involved in MeHg uptake into rat brain [13], C6 rat glioma cells [38], B35 rat neurons [16], rat placenta [17], Xenopus laevis oocytes [14], and Chinese hamster ovary cells [15]. In our previous study on BeWo cells cultivated in conventional dishes, we observed a similar reduction of mercury uptake upon LAT2 silencing (75%) but no such effect upon 4F2hc silencing. Moreover, we found the strongest effect on cellular mercury when LAT1 was down-regulated [8]. It remains unclear whether these discrepancies emerge from the different methods of cell culturing (conventional dishes versus transwell), leading to differences in cell morphology and polarity [39], or from the different MeHg treatments (0.90 µM in our previous work versus 2 µM in the present study). BeWo cells responded to the down-regulation of LAT2 and 4F2hc with significantly reduced uptake of methionine (61% and 74%) and leucine (46% and 71%), whereas LAT1 silencing had no apparent effect (94% and 84%). The latter observation is in accordance with a previous report from Gaccioli et al. [40] in human primary trophoblast cells. The less pronounced effect of LAT2 knockdown on leucine uptake in human primary trophoblast cells (uptake reduced to 87% relative to controls) compared to BeWo cells (reduced to 46%; Figure 4B) might be explained by the circumstance that human primary trophoblasts are hard to transfect. In transwell experiments with BeWo cells, LAT1 knockdown had no significant effect on the transport of methylmercury, leucine, and methionine, although we observed a trend for lowered uptake here as well (Figure 4). In X. laevis oocytes, LAT1 was shown to transport the substrate faster than LAT2 (Vmax of 286 vs. 75) but also to have a lower affinity to MeHg-L-cysteine than LAT2 (Km of 98 µM vs. 64 µM) [14]. To our knowledge, no other studies exist in which mercury and amino acid uptake have been directly compared. In lung cancer cells, Dann et al. [34] observed a significant reduction of methionine levels (to about a third relative to controls) upon the downregulation of LAT1. Nicklin et al. [29] found both LAT1 and 4F2hc silencing to exert the same effect on leucine transport (reduction to a third) into HeLa cells.

Uptake of MeHg, Leucine and Methionine in Relation to Transporter Localization

Most of the so far available data suggest that system L transporters, LAT1, LAT2, and 4F2hc, respectively, localize primarily to the apical side of the STB [41-43]. Our data based on RNAi validated antibodies [20] showed the heavy chain 4F2hc to be localized at both STB plasma membranes (apical and basolateral). Moreover, we found the light chains, LAT1 and particularly LAT2, localized in intracellular vesicular structures of the STB [8]. Our data indicate that LAT2-4F2hc exerts its function predominantly at the apical side of trophoblast cells, as the uptake of mercury, leucine, and methionine remained unaffected by system L inactivation when substrates were added to the basal chamber.

Transwell Studies

BeWo cells were seeded onto permeable Transwell inserts (12-well polycarbonate membrane with 0.4 µm pore size, Corning Inc., Corning, NY, USA) that had been coated with human placental collagen (50 µg/cm²; Bornstein and Traub Type IV, Sigma-Aldrich Corporation, St. Louis, MO, USA) according to the protocol of Bode et al. [22] at a density of 2.5 × 10⁴ cells/well.
The cells were transiently transfected 24 h to 48 h post seeding with non-targeting and specific siRNA targeting SLC7A5, SLC7A8, and SLC3A2 (encoding LAT1, LAT2, and 4F2hc) (GE Dharmacon, Lafayette, CO, USA) using Lipofectamine RNAiMax (Life Technologies) as described by Rosner et al. [44], with the minor modification of employing only 1/4 of the original transfection reagent amount. Thereafter the cells were cultivated until a confluent monolayer was formed (around eight days after seeding). BeWo cells were treated with 2 µM MeHg (aqueous CH3HgCl) (Alfa Aesar, Haverhill, MA, USA) for one hour, added to medium at the apical (0.5 mL) or basal (1 mL) side of the transwell. The paracellular permeability of each well was determined concomitantly to MeHg transport by adding 100 µM Lucifer Yellow (CH dilithium salt, Sigma-Aldrich) to the target compartment, while the opposing compartment was filled with cell culture medium only. Lucifer Yellow's fluorescence was measured in black 96-well plates (Corning) in a microplate reader (BioTek Instruments, Winooski, VT, USA) using a 485 ± 20 nm excitation and 528 ± 20 nm emission filter. In the same way as for MeHg, BeWo cells were treated with 1 µCi/mL tritium-labeled methionine and leucine, respectively.

Forskolin Treatment

BeWo cells were cultured until 50% confluency on 60 mm dishes (Corning). At this point, the cells were either incubated with Forskolin (20 mM) or DMSO as a control. Cells were harvested after 24 h, 48 h, and 72 h. Changes in gene expression were analysed by quantitative PCR (qPCR) and immunoblotting.

RNA Isolation, cDNA Synthesis and Quantitative PCR

Total RNA was isolated using TRI Reagent® (Sigma), according to the manufacturer's instructions. RNA was reverse transcribed with a GoScript Reverse Transcription System (Promega, Madison, WI, USA) using random hexamer primers. Gene expression was analyzed using a TaqMan Expression System (Applied Biosystems, Foster City, CA, USA) in an Applied Biosystems StepOnePlus™ Real-Time PCR System, according to the manufacturer's protocol. The cDNA was diluted 1:11, and 2 µL was used as a template in a 15 µL reaction. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and TATA-box binding protein (TBP) were used as reference genes. The employed primers were Hs00794796_m1 (SLC7A8), Hs99999905_m1 (GAPDH), and Hs00427620_m1 (TBP). In Forskolin experiments, we used the primers Hs00185826_m1 (SLC7A5), Hs00247916_m1 (LAT2), and Hs00374243_m1 (SLC3A2), and as reference gene we used Hs0082473_m1 (Ubiquitin C).

Protein Extraction and Immunoblotting

The cells were lysed in RIPA (radioimmunoprecipitation assay) buffer (50 mM Tris, pH 7.6, 150 mM NaCl, 1% Triton, 0.1% SDS, 0.5% sodium deoxycholate), supplemented with 2 mg/mL aprotinin, 0.3 mg/mL benzamidine chloride, 2 mg/mL leupeptin, and 10 mg/mL trypsin inhibitor (Sigma). The protein samples were separated using SDS-PAGE and transferred to nitrocellulose membranes. Blots were blocked for 1 h in 5% nonfat dry milk in Tris-buffered saline containing 0.1% Tween 20 (TBST), followed by incubation in 5% bovine serum albumin (BSA)/TBST containing the primary antibody overnight at 4 °C. Thereafter, blots were washed and incubated with corresponding secondary horseradish peroxidase (HRP)-conjugated antibodies. The enhanced chemiluminescence method (Pierce™ ECL western blotting substrate, Thermo Fisher Scientific, Waltham, MA, USA) was used to visualize the signals. For a list of the employed primary and secondary antibodies, see Table 1.
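The qPCR protocol above reports knockdown relative to the GAPDH and TBP reference genes but does not spell out the quantification formula. A minimal sketch of the commonly used 2^-ΔΔCt method with two reference genes, which is an assumption on our part rather than the study's documented pipeline; all Ct values below are hypothetical:

```python
import numpy as np

def rel_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Relative expression by the 2^-ddCt method.

    ct_refs / ct_refs_ctrl hold the reference-gene Ct values (here GAPDH
    and TBP) for the knockdown sample and the non-targeting control.
    """
    d_ct = ct_target - np.mean(ct_refs)            # normalize to references
    d_ct_ctrl = ct_target_ctrl - np.mean(ct_refs_ctrl)
    return 2.0 ** -(d_ct - d_ct_ctrl)              # fold change vs. control

# Hypothetical Ct values for SLC7A8 (LAT2) after siRNA vs. non-targeting control
fold = rel_expression(ct_target=27.5, ct_refs=[19.0, 24.0],
                      ct_target_ctrl=25.0, ct_refs_ctrl=[19.1, 24.2])
print(f"LAT2 expression relative to control: {fold:.2f}")  # ~0.16 -> ~84% knockdown
```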
Analysis of Mercury

The samples and reference material were acid-digested with nitric acid (69%; Suprapur®; Carl Roth, Karlsruhe, Germany) in a microwave oven (MARS6, CEM Corporation, Matthews, NC, USA). The samples, stabilized with HCl, were stored at 4 °C for up to three days and diluted in a ratio of 1:2.5 before they were analyzed for total mercury content by cold vapour atomic fluorescence spectroscopy (CV-AFS) (Mercur Plus, Analytik Jena AG, Jena, Germany). Quality control was achieved by measuring blank test solutions (limit of detection was 0.024 µg/L) and reference materials (Seronorm Trace Elements Urine L-2, 210705, LOT 1011645). The mercury levels of the reference material (30.6 ± 7.2 µg/L; n = 19) lay well within the certified range (23.8-55.8 µg/L). All samples were measured in duplicate by the working curve method (RSD < 15%).

Statistics and Software

Data represent mean values ± SD (standard deviation). Regression lines and coefficients of determination were made with MS Excel. ANOVA was applied for the comparison of group differences, followed by a Bonferroni test to correct for multiple testing. We used IBM SPSS Statistics 24 (IBM, Armonk, NY, USA) and set the critical significance level at α = 0.05.

Conclusions

The present study is the first in which the uptake of methylmercury, leucine, and methionine was examined upon the knockdown of the system L subunits LAT1, LAT2, and 4F2hc in parallel and in a setting as close as possible to the in vivo situation. The direction and magnitude of the effects are comparable. Altogether, the findings clearly indicate that LAT2-4F2hc is a significant contributor to methylmercury uptake into placental cells. The findings support the assumption that methylmercury (in the extracellular environment most likely present as MeHg-L-cysteine) is accidentally taken up into the human cytotrophoblast because the compound resembles essential amino acids. The 'mimicry' can explain why mercury, in contrast to other heavy metals such as lead or cadmium, is efficiently transported to the fetal blood.
2017-08-13T15:43:51.830Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "32bece25f61a388455f55e2d05012197132166bf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/18/8/1730/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "32bece25f61a388455f55e2d05012197132166bf", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
259857899
pes2o/s2orc
v3-fos-license
Growth factors and mechano-regulated reciprocal crosstalk with extracellular matrix tune the keratocyte–fibroblast/myofibroblast transition

Improper healing of the cornea after injury, infections or surgery can lead to corneal scar formation, which is associated with the transition of resident corneal keratocytes into activated fibroblasts and myofibroblasts (K–F/M). Myofibroblasts can create an extracellular matrix (ECM) niche in which fibrosis is promoted and perpetuated, resulting in progressive tissue opacification and vision loss. As a reversion back to quiescent keratocytes is essential to restore corneal transparency after injury, we characterized how growth factors with demonstrated profibrotic effects (PDGF, FGF, FBS, TGFβ1) induce the K–F/M transition, and whether their withdrawal can revert it. Indeed, the upregulated expression of αSMA and the associated changes in cytoskeletal architecture correlated with increases in cell contractility, fibronectin (Fn) and collagen matrix density and Fn fiber strain, as revealed by 2D cell culture, nanopillar cellular force mapping and a FRET-labeled Fn tension probe. Substrate mechanosensing drove a more complete K–F/M transition reversal following growth factor withdrawal on nanopillar arrays than on planar glass substrates. Using decellularized ECM scaffolds, we demonstrated that the K–F/M transition was inhibited in keratocytes reseeded onto myofibroblast-assembled, and/or collagen-1-rich ECM. This supports the presence of a myofibroblast-derived ECM niche that contains cues favoring tissue homeostasis rather than fibrosis.

Proper corneal wound healing requires a sequence of well-tuned biochemical and biophysical events to properly regenerate the damaged tissue 1,2. The transparency of the cornea is tightly linked to its extracellular matrix (ECM) architecture, which is assembled and maintained by resident keratocytes, i.e. specialized mesenchymal cells that minimize light scattering, form a network and have a characteristic dendritic shape. Native keratocytes in the unwounded cornea display low contractility and do not express filamentous actin (F-actin) stress fibers, nor do they assemble a notable Fn matrix 3. The transition of resident keratocytes into activated wound fibroblasts and myofibroblasts is induced by growth factors released at wound sites (most notably Transforming Growth Factor β1 (TGFβ1) and Platelet-Derived Growth Factor (PDGF)), the extracellular matrix (ECM) composition of the wound bed (e.g. ED-A fibronectin (Fn)), and alterations of wound mechanics (e.g. tissue tension) 4. The transition of resident keratocytes into activated wound fibroblasts or alpha smooth muscle actin (αSMA) expressing myofibroblasts in response to injury is referred to as the keratocyte-fibroblast/myofibroblast (K-F/M) transition. K-F/M is marked by alterations in ECM production, cytoskeletal architecture, and αSMA expression to enhance the cell traction forces needed to contract and finally close a wound 1,2,5,6. After wound closure, a reversion back to the keratocyte phenotype is essential to restore corneal transparency. A wounded epithelium is an important source of various K-F/M transition-inducing growth factors, including PDGF, fibroblast growth factor (FGF), and TGFβ1 6. The concentrations of PDGF, FGF, and TGFβ1 in the wound environment decrease again once epithelial wound healing is completed 6.
This leads to a reduced K-F/M transition and increased myofibroblast apoptosis or phenotype reversal, facilitating regenerative stromal remodeling processes 6,7. Myofibroblast disappearance from wound sites is necessary to prevent progressive scar formation. However, the ECM deposited by myofibroblasts during wound healing persists in scar tissue environments and affects resident cells long after the acute wound healing phase has subsided. The (mis)instructive role of fibrotic ECM cues can therefore persist much longer than the influence of auto- and paracrine signals 8-10. As such, even after the successful removal of scar tissue myofibroblasts, the presence of scar tissue ECM can create a niche in which fibrosis is promoted and perpetuated. This pathological ECM environment can push newly recruited cells towards the generation of new scar tissue 8-10, much like the fibrotic cancer-associated ECM triggers the transition of normal tissue fibroblasts to the myofibroblast-like cancer-associated fibroblast phenotype 11. Fibrosis has been studied extensively and is relatively well understood, especially regarding the mechanism of action of its biochemical components. However, significant knowledge gaps exist, specifically regarding the multitude of biophysical aspects that regulate the reciprocal ECM-cell interactions that drive tissue (re)modeling, thereby contributing either to fibrosis, or to the disappearance of myofibroblasts from healing wound sites 12.

Central to the healing of wounds is cell proliferation and the assembly of new ECM. Fibronectin is the first provisional ECM that is actively assembled by either platelets 13, or activated fibroblasts and myofibroblasts, in response to corneal injury 14. The amount and organization of Fn present in a wound is correlated to the stage of wound healing 1,15. Fn provides both biochemical and mechanical cues for adherent cells at the early stages of wound healing 16 and serves as a template for the subsequent deposition of other ECM molecules, particularly of type I collagen 16-18. As cells pull on Fn fibers, Fn domains can be stretched and partially unfolded by cell-generated forces, which can expose some binding sites that are buried in the native fold, while turning off others 18-21. For example, Fn fiber stretching exposes cryptic Fn-Fn self-assembly sites 22, which accelerates Fn fibrillogenesis, cross-linking and fiber bundling 19,22,23. In contrast, Fn fiber stretching decreases the binding affinity to collagen I, as the unstretched N-terminus of Fn provides a multivalent template for collagen-1 nucleation and fiber assembly 18. Fn domain unfolding can also change the affinity of certain cell surface receptors 24,25. The mechanobiology of the ECM can thus be dynamically regulated by alterations in cell contractility, allowing the ECM to function as a dynamic signaling reservoir that provides biochemical and biophysical guidance to the resident cells 19. As such, ECM-induced cell responses are altered by wound healing myofibroblast-mediated changes in the global architecture and stiffness, and local composition and organization (e.g. Fn fiber stretching), of the ECM 9,10,19,26. However, little is known about whether Fn fiber tension might correlate with or even regulate the K-F/M transition, or its reversal. Treatment with wound healing-associated growth factors like PDGF, FGF, and TGFβ1 initiates the K-F/M phenotype transition and stimulates Fn fibrillogenesis, matrix remodeling and contraction 5,27,28.
Because cellular forces can stretch Fn 19,21, we asked here to what extent the treatment with and deprivation of wound healing-associated growth factors affects the ability of corneal keratocytes to generate forces, assemble a Fn matrix and stretch the Fn fibrils within it. Gaining insights into the mechanobiological functional regulation of ECM during wound healing is particularly important since Fn fiber stretching directly regulates further Fn fibrillogenesis and the subsequent initiation of collagen-1 assembly 18,19, and myofibroblast differentiation 29, thereby decisively influencing wound healing outcomes. To ask whether the level of Fn fiber stretching and collagen fiber content within the surrounding matrix might influence myofibroblast phenotype switching, and to address the abovementioned questions, a unique combination of biochemical and physical assays was exploited to illuminate mechanoregulatory factors in corneal wound healing. Primary rabbit keratocytes were cultured in growth factor conditioned and deprived environments to tune keratocyte phenotypes in vitro that are representative of the initial and late phases of wound healing 27. To ask how phenotypic changes relate to the ability to create traction forces, a nanopillar assay 30 was used to probe cellular traction forces with subcellular resolution. To probe Fn fiber stretching in the ECM of conditioned keratocytes, fluorescence resonance energy transfer (FRET)-labeled Fn (Fn-FRET) was used as a tension probe to evaluate the in situ molecular conformation of fibrillar Fn within the cell assembled ECM 31. Finally, to study …

Figure 1. Fibronectin and Collagen-1 ECM assembly is increased for growth factor conditioned keratocytes. (a) Brief overview of the keratocyte-fibroblast/myofibroblast (K-F/M) phenotype transition process in vitro and experimental timeline: one day of cell attachment, four days of growth factor exposure to various growth factors (IGF-1, PDGF, FGF, TGFβ1) or FBS in the presence of 50 µg/ml unlabeled Fn, fixation, Fn and collagen-1 staining and imaging. (b, c) Effect of exposure to various growth factors on Fn and collagen-1 fibrillogenesis. Limited, fragmentary, pericellular Fn fibril assembly in native control and IGF-1-conditioned keratocytes. Significantly denser fibrillar Fn network assembly by PDGF, FGF and TGFβ1-conditioned keratocytes (*:p < 0.05-0.0001), with the densest fibrillar Fn ECM assembled by FBS-conditioned keratocytes (**:p < 0.05-0.0001). TGFβ1-conditioned keratocytes deposited significantly more collagen-1 (***:p < 0.0001), resulting in a 1.5-3 times thicker sample compared to any of the other phenotypes tested (b, c). The x-y views in (b) are maximum intensity z-projections of the entire 8-30 µm high image stacks, the x-z views are maximum intensity y-projections (100/512 slices) of the white boxed-in regions of interest. Green = fibronectin, red = collagen-1, blue = DAPI. Scale bars: 25 µm. The box plots in (c) demonstrate Fn (green) and collagen-1 (red) fluorescence intensities for all cell phenotypes. Boxes signify medians, 25th and 75th percentiles. Whiskers represent the measurement range. The x-axes of the small graphs indicate sample thickness in µm. All y-axes indicate raw fluorescence intensities (gray values). FBS and TGFβ1 data were not plotted on the same scale as the data for the control, IGF-1, PDGF and FGF phenotypes.
Statistical comparisons between phenotypes were performed via one-way ANOVA with Tukey's multiple comparisons test (Collagen-1) and Kruskal-Wallis with Dunn's multiple comparisons test (Fn), with significance set at p < 0.05 for all comparisons. The various phenotypes have been color coded in the relevant figures and graphs throughout the manuscript: native keratocyte control = black, IGF-1 = grey, PDGF = orange, FGF = red, FBS = green, TGFβ1 = blue.

FBS and TGFβ1 conditioned keratocytes display increased contractility with altered distribution of adhesive forces across the cell body, which is reversed after FBS and TGFβ1 deprivation. A remodeling of the contractile actomyosin cytoskeleton is crucial for cell phenotype transitions (Supplementary Figs. S1 and S5) 28,35,36. To quantify how this relates to changes in cell contractility during the K-F/M transition and reversal, Fn coated SU-8 nanopillar arrays were used to directly measure cellular contractility on a single cell level in our K-F/M transition model (Fig. 2a) 30 (a standard deflection-to-force conversion is sketched after the Figure 2 legend below). Quantification of the mean forces per nanopillar exerted by IGF-1, PDGF or FGF conditioned keratocytes revealed that they did not exceed the native keratocyte baseline (Fig. 2g). In contrast, FBS conditioned fibroblasts generated significantly higher traction forces on the nanopillar substrates compared to native keratocytes (Fig. 2c,g,j). Forces generated by TGFβ1 conditioned myofibroblasts were significantly higher than for any of the other phenotypes tested (Fig. 2e,g,j). FBS and TGFβ1 conditioned keratocytes not only generated higher traction forces per nanopillar (Fig. 2g,j), but also increased significantly in cell size (Fig. 2j, see also 33), resulting in much higher overall force generation per cell. In parallel with their increased contractility, actin fiber assembly was significantly upregulated in FBS and TGFβ1 conditioned keratocytes on Fn-coated nanopillar substrates, but not with IGF-1, PDGF or FGF conditioning (Fig. 2b,c,e, Supplementary Fig. S6). Growth factor conditioning not only had a major impact on cell shape, but also on the locations of maximal pillar displacements (Fig. 2b-f, Supplementary Fig. S6). While the traction forces were highest at the cell periphery for keratocytes, IGF and PDGF conditioned cells, FBS or TGFβ1 activation led to the additional …

Figure 2. TGFβ1 and FBS conditioning cause an increase in cell contractility and redistribution of cellular forces, which is reversed via TGFβ1 and FBS deprivation. (a) Experimental timeline: growth factor conditioned and deprived keratocytes were seeded onto Fn-coated nanopillar substrates, fixated three hours after seeding, then phalloidin stained and imaged. Confocal microscopic images were obtained to evaluate the deflection of the nanopillars underneath the cells by recording images at the pillar's base and top and by analyzing pillar displacement with particle tracking software. (b-f) Actin cytoskeleton morphology of control, FBS conditioned, FBS deprived, TGFβ1 conditioned and TGFβ1 deprived keratocytes seeded on nanopillar substrates. Scale bars: 5 μm. Corresponding colorimetric distribution maps of force induced pillar displacement across the cell body are juxtaposed. Scale bars: 20 μm. Displacement of single nanopillars was topographically mapped with the level of displacement indicated by colors ranging from blue to red (0 to 0.5 μm displacement). (g) Scatter plot of nanopillar displacement and traction forces per nanopillar generated by growth factor conditioned keratocytes.
Displacement of and forces exerted on single nanopillars were averaged across whole cells. FBS (*:p < 0.0001) and TGFβ1 (**:p < 0.0001) conditioning significantly increased cellular contractility compared to other phenotypes. Plots were constructed from data averaged from 18 cells per phenotype measured in three separate experiments. Bars signify the means and whiskers the standard deviation from the mean, reflecting force differences between individual cells. Statistical comparisons via one-way ANOVA and Tukey's multiple comparisons test with significance set at p < 0.05 for all comparisons. (h) As (g), but for growth factor deprived keratocytes. Note residual increased contractility of FBS deprived keratocytes (***:p < 0.01-0.0001).

We previously demonstrated that the high perinuclear pillar displacement originates from actin stress fibers mostly oriented along the main cell axis and spanning the apical side of the cell nucleus, also called the 'actin cap' 30. Actin stress fibers were only observed in FBS conditioned keratocytes on nanopillars, but not upon TGFβ1 stimulation in our study (Supplementary Fig. S8). Instead, a dendritic cell shape resembling the myofibroblast morphology in compliant collagen gels, and a dense actin network, rich in nodular actin structures colocalized with regions of high pillar displacement, were observed within TGFβ1 conditioned keratocytes on nanopillars (Figs. 2e,i, Supplementary Figs. S7-S9) 37. Since cellular force generation changes during the early cell attachment and spreading phases 38, we asked whether significant changes in cellular force generation occurred in our model between two and three hours after cell seeding. The traction forces obtained during dynamic force measurements on nanopillar arrays (Fig. 3c) were identical to the forces measured in experiments with fixed cells (Fig. 2g), with the highest forces measured in myofibroblasts. Subsequently, we evaluated whether growth factor stimulated phenotype changes could be reversed by growth factor deprivation in 2D in vitro cell culture conditions. A disassembly of actin stress fibers and a partial reversion towards the native stellate keratocyte morphology was indeed observed following a 4-day period of PDGF or FGF deprivation on planar substrates (Supplementary Fig. S1). Significant actin stress fiber disassembly also occurred in FBS and TGFβ1 deprived keratocytes, but cell polarization and thin basal stress fibers remained visible in most cells cultured on planar substrates (Supplementary Fig. S1). αSMA containing stress fiber disassembly was complete after FBS and TGFβ1 deprivation, and cell metabolic activity decreased to the pre-growth factor conditioning levels of native keratocytes following growth factor deprivation (Supplementary Fig. S4). The generated traction forces, cytoskeletal F-actin morphology, and location of maximal pillar displacement by activated fibroblasts and myofibroblasts (FBS or TGFβ1-treated) on nanopillar array substrates reverted to those of native keratocytes following growth factor deprivation (Fig. 2d,f,h,k).
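For readers unfamiliar with pillar-based traction force microscopy: the conversion from measured pillar deflection to force is not detailed above (the study's own calibration follows its ref. 30), but such assays conventionally model each pillar as a linear elastic cantilever of circular cross-section. Under that standard assumption:

```latex
% Euler-Bernoulli cantilever model for a pillar of radius r and height L:
F = k\,\delta,
\qquad
k = \frac{3EI}{L^{3}} = \frac{3\pi E r^{4}}{4L^{3}},
\qquad
I = \frac{\pi r^{4}}{4}
```

Here δ is the tracked tip deflection and E the Young's modulus of the pillar material (SU-8 in this study); the pillar dimensions are not stated in the excerpt. With this linear model, the sub-micrometer deflections reported in Figs. 2 and 3 map directly onto the nN-range forces quoted there.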
FBS and TGFβ1 conditioned keratocytes cause significant stretching of Fn fibers. To ask how differences in Fn and collagen deposition and cellular contractility between FGF, PDGF, FBS or TGFβ1 conditioned keratocytes impact Fn fiber tension, our well validated Fluorescence Resonance Energy Transfer (FRET) assay using Fn labeled with multiple donor and acceptor fluorophores (FRET-Fn) was exploited. This is a sensitive method to probe a large range of conformational changes in Fn fibers 31. To prevent intermolecular FRET, the FRET-Fn probe was added in trace amounts to the culture medium (Fig. 4a) using previously published protocols 31. FRET revealed that FBS and especially TGFβ1 conditioned keratocytes stretched the Fn fibers within the ECM much more than PDGF and FGF conditioned keratocytes (Fig. 4c-h). We know from previous studies that cell-regulated Fn fiber stretching is associated with partial protein unfolding, which can activate or destroy molecular binding epitopes 19. Significant Fn fiber stretching and thus partial protein unfolding was observed in the ECM assembled by FBS and especially TGFβ1 conditioned keratocytes. This finding is in agreement with the enhanced cell contractility of FBS and TGFβ1 conditioned keratocytes as quantified on nanopillar substrates (Fig. 2g). In fact, RhoA activation has been shown to upregulate cell traction forces, as well as Fn fiber tension 19. When asking whether the tensional state of the Fn ECM could be reverted, a 4-day period of PDGF and FGF deprivation, subsequent to the initial PDGF and FGF stimulation, revealed a near complete disassembly of the Fn ECM. A significantly decreased Fn ECM density precluded a reliable evaluation of FRET intensity ratios after FBS and TGFβ1 deprivation in vitro. As a result, we could not analyze the Fn fiber tension in growth factor deprived keratocyte cultures. Intracellular Fn-containing vesicles with a fluorescence at the wavelength range of donor fluorophores were observed within TGFβ1-deprived cells (Supplementary Fig. S10), which is suggestive of Fn fiber degradation and internalization 39.

Figure 3. Dynamic traction force measurements in native, FBS and TGFβ1 conditioned keratocytes reveal contractile forces that are stably maintained over a 30-min timeframe. Force generation measurements on fixed cells reflect the cellular contractile state at a single time point, which was three hours after cell seeding in our single cell contractility assays (Fig. 2a). Cellular force generation changes during the early cell attachment and spreading phases 38. We therefore asked whether significant changes in cellular force generation occurred in our model between two and three hours after cell seeding. (a) Native, FBS and TGFβ1 conditioned keratocytes were live-membrane (DiL) stained and seeded onto Fn-coated nanopillar substrates. Two cells per phenotype were imaged within three hours after seeding for a period of 30 minutes on an incubated microscope stage. This timeframe was chosen to match the timeframe for force generation measurements on fixed cells and to evaluate traction force stability after the initial cell attachment and spreading phases 38. Nanopillar displacement was evaluated using particle tracking software on confocal microscopic images. (b) Cell outlines of DiL-stained native, FBS and TGFβ1 conditioned keratocytes seeded on nanopillar substrates. Scale bars: 10 μm. (c) Plots of displacement of and forces exerted onto single nanopillars averaged across whole cells. Minute-by-minute force plots (left graph) and 30-min whole cell force averages (right graph: two cells per phenotype indicated on the x-axis). The symbols indicate the means and the whiskers the standard deviation from the mean, reflecting force differences between individual nanopillars (left graph) and force differences over time (right graph). Force differences between individual nanopillars are large, as also seen on the pillar displacement histograms in Fig. 2j and the force maps in Fig.
2b-f, where areas with high pillar deflection and areas without pillar deflection are identified in all three phenotypes. The traction forces (nN) observed during dynamic force measurement resembled the forces measured in experiments with fixed cells, with the highest forces measured in myofibroblasts. Nanopillar displacement ranged between 0.05 and 0.10 μm for keratocytes, between 0.10 and 0.15 μm for fibroblasts and between 0.15 and 0.20 μm for myofibroblasts both in live and fixed conditions (Figs. 2g and 3c).

Inhibition of cell generated forces via latrunculin-B-induced actin cytoskeleton disruption led to partial refolding of Fn as probed by FRET within matrix fibrils in experiments conducted with human foreskin fibroblast (HFF) assembled ECM (Supplementary Fig. S11). However, the myofibroblast-assembled ECM maintained a higher Fn fiber strain than the fibroblast-assembled ECM upon latrunculin-B-induced force inhibition, demonstrating that a residual strain was preserved within the matrix, perhaps due to enhanced ECM fiber crosslinking (Supplementary Fig. S11). This could be due to the ECM crosslinking enzyme transglutaminase 2 (TG2), as TGFβ1 is a direct stimulator of the transcription of TG2 40. TG2 has various targets amongst ECM fibrils, including Fn, fibrillin and several collagen types, and can effectively cross-link them, thus stiffening the ECM and protecting the ECM from proteolytic degradation 41. By virtue of its targets, TG2 likely is an important stabilizer of both the early and late wound healing matrix. We thus stained for TG2 and observed that the expression of TG2 was indeed clearly upregulated in TGFβ1 conditioned keratocytes, whereas cells exposed to the other growth factors showed minimal TG2 staining (Supplementary Fig. S12). Matrix crosslinking by TG2 is therefore a likely cause for the observed Fn strain preservation in myofibroblast-deposited ECM in our study 41. Residual ECM Fn fiber strain was also preserved within the myofibroblast-assembled ECM after matrix decellularization (Supplementary Fig. S11).

Collagen-dominated and myofibroblast-derived ECM scaffolds reduce K-F/M upon TGFβ activation. We observed that the ECM assembled by various keratocyte phenotypes displayed different Fn fiber tensions (Fig. 4) and collagen content (Fig. 1), and that Fn fiber strain was preserved within the myofibroblast-assembled ECM after matrix decellularization (Supplementary Fig. S11). We therefore asked whether a decellularized fibroblast or myofibroblast derived matrix might constitute a 'bad neighbourhood' ECM with a regulatory role in the K-F/M process. We also asked how matrix collagen might impact the niche properties. In these experiments, HFFs were cultured in the presence or absence of TGFβ1 and/or L-ascorbic acid (Vitamin C) during ECM assembly. The Fn and collagen content were quantified in a subset of the assembled scaffolds. Supplementation of TGFβ1 significantly increased Fn assembly (Fig. 5e, Supplementary Fig. S13) and Fn stretching within matrix fibers in the assembled ECM scaffolds (Fig. 5b-d), similar to the situation in the ECM assembled by growth factor-treated keratocytes (Figs. 1, 4). Collagen fibril deposition was significantly increased following L-ascorbic acid supplementation (Fig. 6b).
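The scaffold comparisons described next are read out as the fraction of αSMA-positive keratocytes per condition (quantified from >200 cells per scaffold type and compared with unpaired t-tests, per the figure legends). A minimal sketch of such a comparison; the per-field-of-view fractions below are hypothetical, chosen only to resemble the ~85% vs ~52% group means reported for Fig. 5f,g:

```python
import numpy as np
from scipy import stats

# Hypothetical per-field-of-view fractions of aSMA-positive keratocytes
on_native_hff_ecm = np.array([0.88, 0.83, 0.86, 0.81, 0.87])  # native HFF scaffold
on_tgfb1_hff_ecm = np.array([0.55, 0.49, 0.56, 0.50, 0.51])   # TGFb1-treated HFF scaffold

t_stat, p_value = stats.ttest_ind(on_native_hff_ecm, on_tgfb1_hff_ecm)
print(f"difference in means = {on_native_hff_ecm.mean() - on_tgfb1_hff_ecm.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.2g}")
```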
In the first experiment, low-collagen ECM scaffolds assembled by native HFFs were compared to low-collagen ECM scaffolds assembled by TGFβ1-treated HFFs (Fig. 5a, Supplementary Fig. S13). To investigate ECM scaffolds with low collagen content, the first experiment was run without L-ascorbic acid supplementation, but with the supplementation of FRET-Fn during the 4-day culture period prior to decellularization. ECM quantification before and FRET analysis after decellularization confirmed that the ECM assembled by TGFβ1-treated HFFs contained much more stretched Fn within matrix fibers compared to native HFF derived ECM (Fig. 5b-e). In the second experiment, collagen-1-rich ECM scaffolds (L-ascorbic acid supplementation during ECM assembly) were compared to low-collagen-1 ECM scaffolds (L-ascorbic acid deprivation during ECM assembly), all assembled by native HFFs (Fig. 6). In the third experiment, low-collagen ECM scaffolds assembled by L-ascorbic acid-deprived native HFFs were compared to collagen-1-rich ECM scaffolds assembled by L-ascorbic acid and TGFβ1-supplemented HFFs (Supplementary Fig. S14). After scaffold decellularization, keratocytes were seeded onto the ECM scaffolds and 5 ng/ml TGFβ1 was added for four days to stimulate myofibroblast differentiation. Importantly, the proportion of TGFβ1 conditioned keratocytes immunocytologically expressing αSMA-positive actin stress fibers was significantly lower on TGFβ1-treated HFF-derived scaffolds and on collagen-1-rich scaffolds (Figs. 5f,g, 6b, Supplementary Fig. S14).

Figure 4. Upregulated keratocyte contractility leads to enhanced fibronectin matrix fiber stretching as probed by FRET. (a) Experimental timeline: as described in Fig. 1, but with the addition of FRET-Fn (10% amine/cysteine double, donor-acceptor fluorophore labeled Fn and 90% unlabeled Fn) to the culture medium to be incorporated into the ECM assembled during the growth factor conditioning period 19,31. (b) Principle of Fluorescence Resonance Energy Transfer (FRET). Multiple donor and acceptor fluorophores label single Fn fibers to probe a large range of conformational Fn changes (loss of tertiary and secondary protein structure folding) using well established protocols. Energy transfer efficiency between donor and acceptor fluorophores decreases as the donor-acceptor distance increases during Fn protein unfolding, measured as a decrease in Fn-FRET ratio: acceptor (IA) divided by donor (ID) channel fluorescence emission intensity. ECM assembly by native control keratocytes or IGF-1 conditioned keratocytes was not detectable in this assay.
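The ratiometric readout described in the Figure 4 legend (acceptor over donor channel intensity, computed pixel-wise on fibril-containing pixels) can be sketched as follows. This is an illustrative sketch, not the study's actual analysis pipeline (which follows its refs. 19,31); the background threshold and the synthetic stand-in images are assumptions:

```python
import numpy as np

def fret_ratio_map(i_acceptor, i_donor, donor_threshold=50.0):
    """Pixel-wise FRET ratio IA/ID on pixels with meaningful donor signal.

    Lower ratios indicate larger donor-acceptor distances, i.e. more
    stretched / partially unfolded fibronectin.
    """
    ia = np.asarray(i_acceptor, dtype=float)
    idn = np.asarray(i_donor, dtype=float)
    ratio = np.full(idn.shape, np.nan)
    mask = idn > donor_threshold               # illustrative background cutoff
    ratio[mask] = ia[mask] / idn[mask]
    return ratio

# Illustrative stand-ins for the two confocal channels (random data, not images)
rng = np.random.default_rng(0)
donor = rng.uniform(0.0, 255.0, size=(64, 64))
acceptor = 0.7 * donor + rng.normal(0.0, 5.0, size=(64, 64))
ratios = fret_ratio_map(acceptor, donor)
print(f"median IA/ID over fibril mask: {np.nanmedian(ratios):.2f}")
```

Histograms of such per-pixel ratios, benchmarked against the chemical denaturation calibration points quoted in the Figure 5 legend below, are what allow in situ Fn conformation to be inferred.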
Discussion

As the cornea is a living material that needs to stay transparent under healthy conditions, yet needs to be repairable following injury, the challenge nature had to solve is how to close a corneal wound site, and subsequently remodel the altered ECM to restore transparency. Improper healing of the cornea after injury or infections can lead to corneal fibrosis, which causes enhanced light scattering, resulting in vision impairment or even vision loss 1,42-44. We thus asked how the up- and downregulation of growth factors, simulated by external IGF-1, PDGF, FGF, FBS or TGFβ1 supplementation and withdrawal, affects the keratocyte's ability to assemble and model the surrounding ECM, and how biophysical alterations of the cellular environment, in concert with growth factor availability, can coregulate a reversible cell phenotype switch.

We found that each of these wound healing-associated growth factors induced a distinct set of cell morphologies and behaviors, as summarized in Fig. 7. In contrast to growth factors upregulated in wound sites, we did not observe differences between IGF-1-treated and control keratocytes (Fig. 7), which fits the role of IGF-1 in the maintenance and repair of the normal corneal keratocyte network and ECM 5,27,32. Epithelial damage increases stromal levels of PDGF 6, and PDGF-primed corneal keratocytes are a proliferative, low contractile, metabolically active cell phenotype, displaying collective, contact-guided migration, and the assembly of a relaxed, fibrillar Fn matrix along their migration tracks 2,45,46 (Fig. 7). PDGF-primed corneal keratocytes therefore seem ideally suited to initiate wound repair by repopulating low stiffness stromal wound areas after keratocyte apoptosis, without causing major matrix remodeling 46,47 (Fig. 8). In contrast to the low contractile PDGF-primed phenotype, FBS and TGFβ1-conditioned keratocytes deposited the densest Fn and collagen matrix, and caused significant contraction, fibronectin fiber stretching and reorganization within their surrounding matrix 2,5 (Fig. 7). These observations are in agreement with previous reports of accelerated Fn fibril formation and matrix assembly as a result of Fn fiber stretching and the exposure of cryptic Fn-Fn self-assembly sites 19,22. Fn fiber unfolding is also speculated to expose the cryptic Toll-like receptor (TLR) 4 activating site on Fn's ED-A domain, a Fn splice variant associated with fibrosis 19,48. Subsequent TLR activation drives the transitioning towards myofibroblasts, which show upregulated TGFβ, tenascin-C, Fn and collagen-1 gene expression, and consequently enhanced ECM assembly 48. Cell generated force-induced Fn fiber stretching can thus promote early Fn fibrillogenesis 22, but also the subsequent assembly of a collagen matrix 48. The decreased enzymatic digestion of collagen fibers under tension 50 further highlights the importance of cell contractility for matrix assembly and stabilization. Vice versa, mechanobiological cues, including highly stretched Fn fibers, were identified as drivers and stabilizers of the myofibroblast phenotype in the growth front of de novo grown microtissues 35,51. Serum protein-primed (here: FBS), highly metabolically active, proliferative, contractile and Fn assembling fibroblasts could thus be responsible for increased fibrotic remodeling of the corneal stroma in areas with significant inflammation-associated blood vessel ingrowth (Fig. 8). Finally, myofibroblasts would be responsible for the deposition of a contracted, dense, crosslinked collagen-1-rich matrix (Fig. 8). While these processes are essential for the closure of corneal laceration or perforation wounds, they can create adverse effects in terms of ECM architecture and transparency in situations where cell repopulation, but not wound closure, is needed (e.g. refractive surgery, corneal cross-linking). Since keratocytes in their native niche are exposed to a nanostructured rather than smooth/planar microenvironment, it is particularly notable that FGF and TGFβ1 primed keratocytes in 2D cell culture adopted a markedly different morphology on nanopillar substrates compared to planar glass in our study.

Figure 5. αSMA expression by TGFβ1 conditioned keratocytes is decreased on decellularized myofibroblast-derived ECM scaffolds low in fibrillar collagen-1.
Whereas FGF-primed keratocytes maintained the dendritic morphology of control keratocytes on nanopillar arrays, TGFβ1 priming induced an enlarged, dendritic cell shape with a dense actin network, accentuated by nodular actin condensations (Fig. 7, Supplementary Figs. S6-S9). Depending on pillar aspect ratio, nanopillar substrates can be perceived as soft due to cell force-induced pillar bending 52. One explanation for the observed cell morphology difference between planar and nanopillar substrates is therefore that contractile cells can displace the nanopillars upon force generation, thus sensing a 'soft' substrate, whereas their adhesion sites are not displaced on planar substrates of equally rigid material, causing the cells to perceive such surfaces as rigid. In support of this notion, similar behaviour was observed in previous work demonstrating a preservation of the dendritic morphology by FGF-primed keratocytes 53, and of the enlarged, dendritic cell morphology by TGFβ1-primed keratocytes 37, in uncompressed soft 3D collagen gels. On the other hand, compressed stiff 3D gels promoted the classical FGF-induced polarization with presence of actin stress fibers 53, and an enlarged, stress fiber-rich myofibroblast morphology in TGFβ1-primed keratocytes 37, like on planar glass in our study (Fig. 7).

Figure 5. αSMA expression by TGFβ1 conditioned keratocytes is decreased on decellularized myofibroblast-derived ECM scaffolds low in fibrillar collagen-1. (a) Experimental timeline: ECM assembled by human foreskin fibroblasts cultured in L-ascorbic acid/Vitamin C-free and serum-free culture medium, with (TGFβ1-treated HFFs) or without (native HFFs) TGFβ1 supplementation for four days. After cell adhesion, medium was replaced with identical medium containing FRET-Fn (see Fig. 4a) or 5 µg/ml Alexa-488 labeled Fn + 45 µg/ml unlabeled exogenous plasma Fn. ECM quantification was performed before, and FRET analysis after, decellularization. Keratocytes were seeded onto the remaining decellularized ECM scaffolds and exposed to 5 ng/ml TGFβ1 for 4 days. αSMA incorporation into actin stress fibers was compared in TGFβ1 conditioned keratocytes cultured on native HFF- and TGFβ1-treated HFF-assembled ECM scaffolds. (b-d) FRET evaluation prior to keratocyte seeding. (b) Color-coded IA/ID ratiometric images for native (violet) and TGFβ1-treated (purple) HFF-derived scaffolds. Scale bars: 50 μm. (c) Histograms of donor-acceptor intensity ratio distributions from the same ECM scaffolds. Histograms were derived from one representative field of view from one experiment in each group. Solution denaturation values for dimeric Fn-DA: 0 M GdnHCl (0.95, red); and for monomeric Fn-DA: 1 M GdnHCl (0.62, green) and 4 M GdnHCl (0.44, blue). (d) Unfolding of Fn within matrix fibrils was significantly greater in ECM scaffolds derived from TGFβ1-treated (purple) HFFs compared to native (violet) HFFs (*: p < 0.0001). Scatter plots were constructed from data averaged from five random fields of view from one experiment comparing both groups. (e) ECM quantification was performed as described for Fig. 1c. TGFβ1 supplementation significantly increased Fn (**: p < 0.0001), but not collagen assembly. (f, g) αSMA incorporation into actin stress fibers was observed in 85% of TGFβ1 conditioned keratocytes on native HFF-assembled ECM scaffolds (violet), compared to 52% on TGFβ1-treated HFF-assembled ECM scaffolds (purple) (***: p < 0.0001). Scale bars: 100 μm. Measurements from > 200 cells per scaffold type were included in the bar chart: bars signify the means and whiskers the standard deviation from the mean. Statistical comparisons via one-way ANOVA with Sidak's multiple comparisons test (Fn/Collagen-1) and unpaired t-tests (αSMA), significance set at p < 0.05 for all comparisons.
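The αSMA comparisons in Figures 5 and 6 rest on straightforward two-sample statistics. As a reading aid, here is a minimal sketch of an unpaired t-test on per-field percentages of αSMA-positive cells, of the kind named in the legend above; all numbers are illustrative placeholders, not the measured data.

```python
# Hedged sketch: unpaired t-test on hypothetical per-field αSMA percentages.
from scipy import stats

# Placeholder percentages of αSMA-positive keratocytes per field of view
native_hff_scaffold = [86.0, 83.5, 87.2, 84.1, 85.6]  # roughly 85% positive
tgfb1_hff_scaffold = [51.0, 53.8, 50.2, 54.5, 52.1]   # roughly 52% positive

t_stat, p_value = stats.ttest_ind(native_hff_scaffold, tgfb1_hff_scaffold)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")  # significant at p < 0.05
```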
Figure 6. Keratocytes seeded onto decellularized collagen-1 fiber-rich ECM scaffolds show downregulated αSMA expression despite TGFβ1 stimulation. (a) Experimental timeline: human foreskin fibroblasts were cultured in serum-free culture medium with (Fn + Coll-1) or without (Fn) L-ascorbic acid added, to allow a 4-day period of ECM assembly. A subset of ECM scaffolds was imaged after collagen-1 immunostaining. The rest of the scaffolds underwent decellularization and further processing as detailed in Fig. 5a. (b) Left column and box plot: native (violet) and L-ascorbic acid supplemented (purple) HFFs both assembled a Fn-rich ECM on account of the supplied plasma Fn in the culture medium. ECM quantification was performed as described for Fig. 1c. L-ascorbic acid supplementation did not further increase Fn assembly, but significantly increased collagen assembly (*: p < 0.005). Scale bars: 50 μm. Middle column and bar chart: αSMA incorporation into stress fibers was observed in 85% of TGFβ1 conditioned keratocytes on native HFF-assembled ECM scaffolds (violet: Fn), compared to 44% on L-ascorbic acid supplemented HFF-assembled ECM scaffolds (purple: Fn + Coll-1) (**: p < 0.0001).

Furthermore, TGFβ1 conditioned keratocytes cultured on soft or patterned 2D substrates have been shown to reduce αSMA stress fiber expression and contractility compared to cells on rigid or planar 2D substrates 54,55. FGF and TGFβ1 thus clearly elicit a conditional response, which depends on mechanosensory feedback from the substrate or pericellular matrix to the cells. This response is likely initiated by differences in topography, ligand availability/density, and/or 'observed' viscoelasticity 56,57. The K-F/M transition reversal following growth factor withdrawal was more complete on nanopillar arrays than on planar glass substrates in our study (Fig. 7). These results underline the importance of substrate mechanosensing, and of a normalization of stromal growth factor concentrations following epithelial wound healing 6,7, for myofibroblast phenotype reversal. The presence of scar tissue deposited by myofibroblasts in vivo has been proposed to create a 'bad neighbourhood'-type niche in which fibrosis is promoted and perpetuated, even after the matrix-depositing cells have disappeared 9. In our study, we found that the dense collagen matrix with highly stretched Fn fibers deposited by myofibroblasts was maintained even after cell removal (Figs. 5b-d, 7, Supplementary Fig. S11). Contrary to the abovementioned 'bad neighbourhood' hypothesis, we observed a reduction in the transition of keratocytes into αSMA expressing myofibroblasts when cultured on these myofibroblast-derived, collagen- and stretched Fn fiber-rich, decellularized ECM scaffolds. This observation even held true in the presence of exogenously supplemented TGFβ1 (Figs. 5, 6, Supplementary Fig. S14). Combining our 2D and 3D 35,51 data suggests two things: first, the composition of the ECM overruled soluble factor signaling to exert a defining influence on the (K-)F/M transition in our cell culture models.
The biochemically and biophysically complex ECM within these 2D and 3D environments clearly played a more dominant instructive role on myofibroblast transition than exogenously supplemented TGFβ1, in agreement with studies that used decellularized cancer-associated stroma ECM 11 or decellularized fibroblast-derived microtissues 51. This is remarkable, since TGFβ1 has historically been viewed as an essential ingredient for myofibroblast transition 37. Second, scar tissue ECM does not necessarily create a 'bad neighborhood'-type niche in which fibrosis is promoted. Instead, the fibrotic, myofibroblast-derived ECM may contain cues that favor tissue regeneration over sustained fibrotic scarring. Our data (Fig. 6, Supplementary Fig. S14) 35,51 suggest that a low Fn/collagen ratio inhibits fibroblast to myofibroblast transition.

Regarding the potential clinical relevance, key features of the wound healing and tissue regeneration phases following in vivo laser ablation surgery of the anterior corneal stroma in rabbits 1 underline a mechanosensory ECM feedback-induced myofibroblast phenotype reversal. Corneal haze, αSMA and Fn expression, and the presence of enlarged, actin stress fiber-rich myofibroblasts peaked at 21 days after laser ablation surgery. Collagen-1 expression was also elevated at this timepoint, but only few collagen fibers were detected via SHG within a rather featureless stroma 1. This situation is most reminiscent of the myofibroblast-promoting environment on planar glass substrates (Fig. 7), in stiff 3D collagen gels 37, and in the 3D microtissue growth front 35,51. Between days 60 and 180 after laser ablation surgery, the complex collagen fiber topography of the normal corneal stroma reappeared, with corneal haze, Fn, αSMA and the typical myofibroblast morphology progressively disappearing. At the same time, the reorganizing (myo)fibroblasts increasingly expressed bright punctate, actin-rich structures that were associated and aligned with the collagen fibers 1. This situation is most reminiscent of the myofibroblast-inhibiting environment on nanopillar substrates (Fig. 7), in soft 3D collagen gels 37, and in the 3D microtissue core region 35,51. In a similar in vivo laser ablation surgery study in rabbits, the wounded corneal stroma assumed a maximum bulk tissue stiffness at 7 days after laser treatment 58. Stromal stiffness was still high at 21 days after laser treatment, when αSMA expression was at its maximum. Stromal stiffness progressively decreased at later measurement timepoints at 42, 70 and 400 days after laser treatment, when αSMA expression, histologically graded fibrosis, and clinical corneal haze decreased as well 58.

Figure 8. Cell generated force-induced Fn fiber stretching exposes cryptic Fn-Fn self-assembly sites 19,22, accelerating Fn fibrillogenesis, cross-linking and fiber bundling, and stabilizing the early Fn matrix 19,22,23. Fn fiber unfolding also exposes a cryptic Toll-like receptor (TLR) 4 activating site on Fn's ED-A domain, resulting in TLR activation, and subsequent TGFβ, tenascin-C (TNC), Fn, and collagen-1 gene expression 48. Cell generated force-induced Fn fiber stretching can thus promote early Fn fibrillogenesis 22, and the subsequent assembly of a collagen matrix 48. Increased solid tissue stresses and tissue tension, caused by cell proliferation, contractility, and matrix assembly, can also drive profibrotic gene expression (incl. αSMA, collagen, TNC), and thus myofibroblast transition and the deposition of a contracted, dense, crosslinked collagen-1-rich matrix (d-f), in a self-amplifying process 49. Finally, decreasing PDGF and TGFβ1 concentrations following epithelial and basement membrane healing, together with normalizing ECM properties, including tissue-specific matrix topography, stiffness, and collagen fiber content, likely facilitate the disappearance of myofibroblasts from wound sites. Thus, an ECM niche supportive of homeostasis and regenerative remodeling is created. We have integrated our experimental results (bold) with published information in the literature (italics): for citations, please see main text.
An interpretation of the combined results from these in vivo wound healing studies 1,58, as well as 2D and 3D cell culture models 35,37,51, thus supports the presence of an ECM niche supportive of homeostasis and regenerative remodeling. This niche is most likely characterized by a normalized, tissue physiology-specific matrix topography, stiffness, and collagen fiber content, overruling soluble factor signaling and facilitating the disappearance of myofibroblasts from wound sites. In summary, the growth factor deprivation- and ECM-driven downregulation of myofibroblast differentiation observed in our study provides a mechanobiological explanation for the disappearance of (myo)fibroblasts during the maturation phase of wound healing (Fig. 8). Such myofibroblast disappearance should facilitate regenerative matrix remodeling at wound sites and in scars 12. Remodeling processes in the corneal stroma are active for months or years and determine whether opacities persist or regress 44. Understanding the basic mechanisms governing cell-ECM crosstalk is becoming increasingly important with the vast increase in popularity of novel clinical and tissue engineering tools that mechanically and biochemically modify the cellular microenvironment and potentially affect cell fate. As such, our findings reflect basic, clinically relevant, and potentially targetable mechanisms related to tissue fibrosis.

Methods
Detailed experimental protocols are available as supplementary information.

Primary rabbit keratocyte isolation and cell culture. Eyes used for isolation of primary corneal keratocytes were obtained from healthy rabbits at a local abattoir. The keratocytes were isolated and cultured on collagen-coated, plasma-treated polystyrene culture dishes (tissue culture plastic) prior to passaging onto the specific substrates used in the different experiments. Exposure to specific growth factors (IGF-I: 10 ng/ml; PDGF-BB: 50 ng/ml; FGF-2: 10 ng/ml; TGFβ1: 5 ng/ml) or 10% fetal bovine serum (FBS) supplemented to the serum-free medium, using previously described methods 59, took place either on the initial collagen-coated culture dishes during culture expansion, or after passaging onto the final substrate. Trypsin used for passaging keratocytes was neutralized with soybean trypsin inhibitor. The growth factor concentrations represent the lowest concentrations with a maximal effect on cell morphology and F-actin organization; these concentrations were adopted from previous studies 28,59. For interventional experiments, cells were seeded onto glass cell culture substrates with adsorbed unlabeled Fn.
Primary human foreskin fibroblast (HFF) cell culture. Primary human foreskin fibroblasts (HFF) were cultured using previously described protocols 60.

Immunofluorescent staining. Compartment-specific fluorescent dyes used: phalloidin-Alexa Fluor 488 and 568 (1:100-200 dilution in 1× PBS +/− 1-3% BSA for 2 h) as cellular F-actin and actin stress fiber markers; 4',6-diamidino-2-phenylindole (DAPI) (1:1000 dilution in 1× PBS for 10-15 min) to counterstain cell nuclei. Routine fixation, permeabilization, blocking and staining protocols were used. All antibodies and compartment-specific fluorescent dyes were added after fixation and permeabilization, with the exception of the primary anti-Coll-1 antibody and the primary anti-Fn antibodies, which were added to the live cell culture at 37 °C, and after fixation but prior to permeabilization, respectively. Finally, samples were left in 1× PBS until immunofluorescent imaging with a Zeiss Axiovert 200M epifluorescence microscope, or an Olympus FV-1000 or Leica SP5 confocal microscope.

Image analysis for ECM quantification. To ensure robust results, quantification of immunofluorescence intensities in z-stack confocal microscopy data was carried out using a custom-built FIJI macro, which can be accessed on GitHub (https://github.com/BennSynergy/FIJI-macro_zStackQuant.git, DOI: https://doi.org/10.5281/zenodo.7978897) 61. It was noted that the z-position of the z-stack slices exhibiting maximum fluorescence intensity varied significantly between the collagen-1 and Fn channels in TGFβ1-treated samples (see Supplementary Fig. S15). Consequently, quantification of Fn and collagen-1 fluorescence intensities was performed in a 3-slice substack surrounding the z-stack slice with peak fluorescence intensity in the collagen-1 channel (a minimal sketch of this selection logic is given below). This strategy was adopted since the selection of the maximum z-position from either the collagen-1 or the Fn channel for fluorescence intensity quantification did not alter the relative differences observed among the interventional groups.

MTT assay. An MTT assay was used as an indicator of cell proliferation in growth factor conditioned keratocytes at culture day 5 and in growth factor deprived keratocytes at culture day 9, and was performed according to the manufacturer's instructions (Cell Proliferation Kit I (MTT), Cat # 11465007001, Roche). Absorbance of the medium was measured with a Tecan M200 plate reader. Absorbance values were normalized to that of the control keratocytes in serum-free medium and reported in Fig. S4.

Real-time PCR evaluation of keratocyte and myofibroblast markers. Cells were lysed and RNA was isolated (Nucleospin RNA-II, Cat # 740955.50, Macherey Nagel AG, Oensingen, Switzerland), and cDNA was produced (Taqman® Reverse Transcription Reagent, Cat # N808-0234, Applied Biosystems) using the manufacturers' protocols. A spectrophotometer was used to determine RNA yield (Nanodrop; Thermo Scientific, Wilmington). Gene expression was evaluated by real-time PCR using SYBR green reagents (Sensimix SYBR kit, Bioline) and validated real-time PCR primers for rabbit keratocyte and myofibroblast markers (Table 1, supplementary information), as previously published 33. Relative quantification was performed by the ΔΔCT method, with β-actin used as the normalizing housekeeping gene (a sketch of the ΔΔCT calculation is also given below).
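The substack selection logic described in the image analysis section can be summarized in a few lines. The following is a minimal sketch in Python/NumPy rather than a FIJI macro; the array layout and the use of mean slice intensity are assumptions, and the published FIJI macro on GitHub remains the reference implementation.

```python
# Hedged sketch of the 3-slice substack quantification strategy.
import numpy as np

def substack_mean_intensity(stack_coll1, stack_fn):
    """stack_* : (z, y, x) arrays for the collagen-1 and Fn channels."""
    # Mean intensity of every slice in the collagen-1 channel
    per_slice = stack_coll1.mean(axis=(1, 2))
    peak = int(np.argmax(per_slice))                    # peak-intensity slice
    lo, hi = max(peak - 1, 0), min(peak + 2, stack_coll1.shape[0])
    # Quantify both channels in the same 3-slice substack around the peak
    return {
        "collagen1": float(stack_coll1[lo:hi].mean()),
        "fibronectin": float(stack_fn[lo:hi].mean()),
    }

# Example with random data standing in for a confocal z-stack
rng = np.random.default_rng(0)
demo = rng.random((20, 64, 64))
print(substack_mean_intensity(demo, demo))
```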
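For the real-time PCR analysis, the ΔΔCT calculation with β-actin normalization reduces to a one-line formula. A minimal sketch follows; all CT values are illustrative placeholders, not measured data.

```python
# Hedged sketch of ΔΔCT relative quantification (β-actin as housekeeping gene).
def relative_expression(ct_gene, ct_actin, ct_gene_ctrl, ct_actin_ctrl):
    delta_ct_sample = ct_gene - ct_actin              # normalize to β-actin
    delta_ct_control = ct_gene_ctrl - ct_actin_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)                     # fold change vs control

# Hypothetical CT values for a myofibroblast marker in treated vs control cells
print(relative_expression(ct_gene=22.0, ct_actin=17.0,
                          ct_gene_ctrl=25.0, ct_actin_ctrl=17.5))  # ~5.7-fold
```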
Nanopillar array fabrication and cell traction force measurement. Nanopillar platforms were fabricated exploiting nanosphere lithography followed by a molding process, using previously described protocols 30. The photoresist SU-8 nanopillars measured 0.25 μm in diameter and 1.5 μm in height, with a 0.8 μm pillar center-to-pillar center distance. The spring constant of a representative SU-8 nanopillar was measured by atomic force microscopy (AFM) by deflecting single nanopillars with the AFM cantilever, and was used to calculate the cell-generated horizontal traction forces on the pillar substrate (a minimal sketch of this calculation is given after this section). These nanopillars, with passivated pillar sides and Fn-coated pillar tops, were biocompatible and allowed a natural spreading of cells on top of the nanopillars. Cells were fixed 3 h after seeding, then phalloidin stained and imaged on a Leica SP5 confocal microscope. This timeframe was chosen since we aimed to measure force generation after the initial cell-to-substrate attachment phase 38, but prior to any significant ECM deposition. Since force generation measurements on fixed cells reflect the cellular contractile state at a single point in time, and cellular force generation changes over time as adhesion complexes mature 38, we were also interested in force generation measurements over time. To visualize cell edges in live-cell imaging experiments, the cells were incubated with a fluorescent membrane dye (Vybrant/DiI, Invitrogen, 1:200 dilution in culture medium) in suspension prior to seeding. Within 3 h after seeding, each cell was imaged for 30 min on an incubated microscope stage. The pillar displacement underneath the cells in the xy direction was quantified by comparing two sets of images with focal planes at the pillar base and top, respectively. Pillar displacement was analyzed with particle tracking software (Diatrack 3.03, Powerful Particle Tracking, Semasopht; and the Fiji template matching plugin for drift correction), and the traction forces by which the cells displaced the nanopillars were calculated.
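Since each pillar behaves approximately as a Hookean spring, the traction force calculation described above reduces to F = k·d, with k the AFM-calibrated spring constant and d the tracked lateral tip displacement. A minimal sketch follows; the spring constant used here is a placeholder, not the calibrated value from this study.

```python
# Hedged sketch: traction force from nanopillar tip displacement (F = k * d).
import math

K_PILLAR = 20e-3  # N/m, hypothetical AFM-calibrated spring constant

def traction_force_nN(dx_um, dy_um):
    """Horizontal force (nN) from pillar tip displacement in micrometres."""
    d = math.hypot(dx_um, dy_um) * 1e-6   # displacement magnitude in metres
    return K_PILLAR * d * 1e9             # newtons converted to nanonewtons

# A 100 nm deflection of a 20 mN/m pillar corresponds to 2 nN
print(traction_force_nN(0.1, 0.0))
```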
Scanning electron microscopy. Cells on nanopillar arrays were imaged using a Zeiss ULTRA 55 scanning electron microscope after fixation, critical point drying, and gold sputter-coating, using standard protocols 30.

Direct stochastic optical reconstruction microscopy (dSTORM). For dSTORM imaging, keratocytes preconditioned with 5 ng/ml TGFβ1 for 4 days were seeded onto Fn-coated coverslips. After fixation, permeabilization, blocking and staining with Alexa Fluor 647 phalloidin, samples were imaged using a home-built set-up for single-molecule localization microscopy, as previously described 62.

Fn isolation and labeling. Fn was isolated from human plasma (Zurcher Blutspendedienst SRK, Switzerland) by affinity chromatography as previously described 31. Double labeling of plasma Fn with Alexa Fluor® 488 as donor on amines and Alexa Fluor® 546 as acceptor on free sulfhydryls was performed as previously described 31.

Preparation of cell-derived ECM scaffolds. Tissue equivalents, often collagen gels seeded with fibroblasts, were used to investigate ECM stress regulatory principles in most previous studies 2,5,28. Although collagen fibrils can self-assemble in vitro, their assembly and proper organization in vivo are regulated by many additional binding partners, including cellular fibronectin and integrins 17. Furthermore, resident tissue fibroblasts control the supramolecular fibril organization within, and the three-dimensional structure of, the collagen matrix 63. Cultured human foreskin fibroblast (HFF)-assembled ECM scaffolds were therefore used here to provide a more physiologically relevant 3D cell culture environment to evaluate the influence of the ECM on K-F/M transition. HFFs (50,000 cells/cm2) were seeded onto Fn-coated surfaces and allowed to adhere for 30 min. The culture medium was then replaced by cell type-specific medium containing FRET-labeled Fn or Alexa 488 singly labeled Fn. Cells were cultured for 4 days, with a medium change after 48 h, prior to imaging.

Image acquisition and analysis for fluorescence resonance energy transfer (FRET). Fluorescence resonance energy transfer (FRET) analysis was performed as previously described 31, using Matlab (http://www.mathworks.com/) with a self-programmed script (script available as supplementary information). Using an Olympus FV-1000 scanning laser confocal microscope, all FRET images were acquired from living cell samples, except for the FRET images from HFF-derived ECM scaffolds for keratocyte reseeding and decellularization experiments, which were acquired from decellularized samples. FRET IA/ID ratios were calibrated to different Fn conformations in PBS and in GdnHCl solutions of various strengths. Dimeric and fully folded Fn in PBS showed strong energy transfer, whereas monomeric and significantly unfolded Fn-FRET in 4 M GdnHCl showed dramatically decreased energy transfer. According to previous studies on Fn conformations in solution 31, the IA/ID value of monomeric Fn-FRET in 1 M GdnHCl was used to indicate the very first onset of loss of secondary structure (a minimal sketch of the ratio computation follows below).
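The core of the ratiometric FRET analysis is a pixel-wise acceptor/donor division restricted to fiber-containing pixels, which can then be compared against the GdnHCl calibration points quoted above. The following is a minimal Python sketch under assumed background and signal thresholds; the self-programmed Matlab script in the supplementary information is the reference implementation.

```python
# Hedged sketch: pixel-wise IA/ID ratio map for FRET-Fn fibers.
import numpy as np

def fret_ratio_map(i_acceptor, i_donor, background=10.0, min_signal=30.0):
    ia = i_acceptor.astype(float) - background    # background-subtract both channels
    idn = i_donor.astype(float) - background
    mask = (ia > min_signal) & (idn > min_signal) # keep fiber pixels only
    ratio = np.full(ia.shape, np.nan)
    ratio[mask] = ia[mask] / idn[mask]            # IA/ID per pixel
    return ratio

# Example with random data standing in for acceptor/donor channel images;
# values can be compared to calibration points (~0.95 folded, 0.62 at 1 M,
# 0.44 at 4 M GdnHCl).
rng = np.random.default_rng(1)
demo_a = rng.uniform(0, 200, (64, 64))
demo_d = rng.uniform(0, 200, (64, 64))
print(np.nanmean(fret_ratio_map(demo_a, demo_d)))  # mean IA/ID over fibers
```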
Minimally Invasive Aesthetic Treatment of the Face and Neck Using Combinations of a PCL-Based Collagen Stimulator, PLLA/PLGA Suspension Sutures, and Cross-Linked Hyaluronic Acid

Background: Combinations of minimally invasive procedures (MIPs) are often used in aesthetic treatments and are increasingly considered as the new standard of care. Three agents with specific properties are available in this perspective: a polycaprolactone (PCL)-based collagen stimulator, a poly-L-lactic acid (PLLA)- and poly-glycolic acid (PLGA)-based resorbable suspension suture with a 3D-cone technology, and a cross-linked hyaluronic acid (HA). Objective: To develop the first practice guidelines on rejuvenation treatment of the face and the neck using combinations of these agents, whether associated or not with other widely used MIPs such as botulinum neurotoxins or energy-based devices. Methods: A multi-disciplinary, multi-national board of plastic surgeons and dermatologists convened to develop guidelines using a predefined consensus method. Consensus was defined as a ≥83% agreement rate between participants. Results: Practice guidelines and algorithms, describing optimal procedure sequence and spacing, are proposed for the treatment of the upper, mid- and lower face and the neck, combining the PCL collagen stimulator, the PLLA/PLGA suspension sutures, and the cross-linked HA, whether associated or not with other MIPs. Conclusion: These new guidelines provide general support to optimal management strategies. Individual treatment plans should be adapted according to the physician's individual competence and the patient's preferences.

Introduction
Since the appearance of the skin and face are considered important factors of wellbeing and health, the number of aesthetic procedures performed worldwide is continuously increasing. 1,2 For instance, according to the American Society of Plastic Surgeons, 17.7 million surgical and minimally invasive cosmetic procedures were performed in the United States (US) in 2018. 3 In this context, the use of minimally invasive procedures (MIPs) grew strongly, with a +228% growth rate between 2000 and 2018 in the US, 3 and MIPs now represent nearly 90% of aesthetic interventions. 4 They aim to attain optimal results with minimal invasiveness, faster recovery, reduced scarring, limited stress, and better patient satisfaction. 5 They include a wide range of injectable agents, devices, and techniques, each being performed in precise indications. The most often used injectable agents are the Clostridium botulinum-derived botulinum neurotoxins (BoNTx), which induce a temporary relaxation of muscles, 6 and hyaluronic acid-based (HA) biodegradable soft-tissue fillers. 7,8 Other biodegradable fillers, based on calcium hydroxylapatite (CaHA), polycaprolactone (PCL) or poly-L-lactic acid (PLLA), possess additional bio-stimulatory properties. 9-11 The term "energy-based devices" (EBDs) encompasses different purposes and devices, ie, tightening (micro-modelling) vs resurfacing techniques. Commonly used EBDs in face and neck rejuvenation are radiofrequency (RF) for skin tightening and collagen contraction, skin resurfacing lasers, and high-intensity focused ultrasound (HIFU) for wrinkle reduction and skin tightening. 6,12-15 While mechanical liposuction and chemical lipolysis are used for fat reduction, intense pulsed light (IPL) is used for improving skin colour and texture.
Combination Treatments: The New Standard of Care
MIPs are increasingly utilised in combination protocols to improve outcomes. 16 In 2014, nearly half of all aesthetic patients in the US who requested MIPs received multiple procedures. 4 Indeed, combination treatments offer an optimal response to the multifactorial process of facial ageing, which involves structural changes in all anatomical layers (bone, muscles, ligaments, adipose tissue, and skin) and dynamic interactions among these tissues. 2,17,18 Consequently, the modern concept of natural and harmonious rejuvenation is based on a comprehensive, three-dimensional, multi-layered approach, combining multiple agents and techniques to attain multiple goals such as relaxation, volumisation, volume repositioning, reshaping, resurfacing, or tightening, depending on specific patient needs. 6,17,19 Diverse multimodal approaches have been assessed for face and neck rejuvenation in clinical studies (Table 1). The studies have generally concluded that combination treatments display additive or even synergistic effects, leading to better and longer-lasting results compared to single agent- or single technique-based protocols, with no clinical evidence of increased adverse event (AE) rates or severity. 6,16,20-22 Therefore, combined treatments are now considered the new standard of care. 18

PCL Collagen Stimulator, PLLA/PLGA Suspension Sutures, Cross-Linked HA
Three distinct agents have been proposed by a single company (Sinclair Pharmaceuticals, London, UK) for minimally invasive rejuvenation treatments: a biodegradable collagen stimulator (Ellansé®), a resorbable suspension suture with a 3D-cone technology (Silhouette Soft®), and a cross-linked HA (Perfectha®). The PCL collagen stimulator is used for volume restoration, reshaping, and skin quality improvement. Three versions are available (Ellansé-S®, -M®, -L®), providing a duration of effect from at least 18 months up to 3 years (Table 2), 24,25 as the degradation time of the PCL microspheres depends on the initial polymer chain length. 26-28 Clinical studies, including prospective randomized controlled trials (RCTs), have shown product safety and efficacy in the treatment of nasolabial folds, 29,30 forehead augmentation, 31 hand rejuvenation, 32 and complete facial rejuvenation. 33 Consensus guidelines stated that this PCL collagen stimulator offers noteworthy advantages over PLLA-based fillers, as the results are immediately visible, and over HA- and CaHA-based fillers, due to better stability and duration of the results. 34 A worldwide post-market survey (2012-2019) found a 0.0562% AE rate, confirming excellent safety in daily practice. 35

The suspension sutures are made of PLLA biodegradable monofilaments that support resorbable 3D cones made of a copolymer of PLLA and poly-glycolic acid (PLGA). These unique features lead to a dual effect: an immediate repositioning of the sagging tissue and, thanks to the collagen stimulation, a gradual and sustained tissue regeneration. The sutures are utilised in the treatment of mild to moderate skin sagging on the mid-face, lower face, full face, and neck, and in eyebrow repositioning. Three product references are available (8 cones, 12 cones, and 16 cones) to address different areas and degrees of skin laxity. Clinical studies have shown the efficacy of these threads as well as their long-lasting effect and safety, with the observed AEs being mild to moderate and easily manageable. 36-38 A post-market survey (2012-2019) found a very limited 0.0231% overall AE rate. 39
A recent US-based expert consensus concluded that treatment with absorbable facial suspension sutures, when performed properly, is associated with minor and infrequent AEs, offering a beneficial clinical alternative to traditional facial rejuvenation techniques. 40

The third product range includes resorbable, high molecular weight and high purity HA gels of non-animal origin, which are cross-linked with butanediol diglycidyl ether. Five different versions are available. All the products of the range have the same HA concentration (20 mg/mL), but their particle sizes and rheological properties vary: the bigger the particles, the greater the volumizing effect and the longer-lasting the results. Thus, each gel is different and designed to best meet specific needs, depending on the area to be treated, the volume needed, and the depth of injection required. As a consequence, these agents are utilised in a wide range of rejuvenation treatments, from superficial or deep line filling to volume creation and contour shaping. 41,42 Clinical studies have shown their efficacy, lasting for six to 18 months, as well as a high level of patient and physician satisfaction. 43-46 Devoid of inflammatory effects, 47 these cross-linked HA gels are safe in daily clinical use, as evidenced by a post-market survey (2012-2019) that found a 0.0239% AE rate out of 2.8 million syringes sold worldwide. 48

In daily practice, the PCL collagen stimulator, the PLLA/PLGA suspension sutures, and the cross-linked HA are most often combined in multimodal rejuvenation protocols. As no specific recommendations existed to guide this frequent practice, the objective of the present work was to provide physicians with guidelines on the optimal use of these agents in combination in face and neck rejuvenation treatments.

Methods
The guidelines were developed by a multi-disciplinary, multi-national board of plastic surgeons and dermatologists representing a worldwide perspective. As an initial step, each participant was asked to independently indicate personal preferences concerning the best combined rejuvenation treatment of predefined target areas (neck, lower face, mid-face, and upper face) using a common standardized questionnaire. Participants were also asked to analyse frequent aesthetic problems separately within each area (eg, for the lower face: loss of jawline contour, loss of submental cervical angle, and so on) in patients with mild or moderate to severe signs of aging. Individual preferences were compiled in anonymised summary tables, which were presented for discussion at a consensus meeting. All options were submitted to plenary votes to identify formal consensual statements according to the following criteria (illustrated in the sketch below):
1. Agreement of six out of six experts: strong consensus;
2. Agreement of five out of six experts (83% agreement): consensus;
3. Agreement of four or fewer out of six experts: absence of consensus.
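For illustration only, the predefined decision rule above can be encoded in a few lines; this toy sketch is a reading aid, not part of the published method.

```python
# Toy encoding of the consensus criteria for a six-member board.
def consensus_level(votes_in_favour, board_size=6):
    if votes_in_favour == board_size:
        return "strong consensus"                # 6/6 agreement
    if votes_in_favour / board_size >= 5 / 6:    # >= 83% agreement
        return "consensus"
    return "absence of consensus"                # <= 4/6 agreement

for votes in (6, 5, 4):
    print(votes, "/6 ->", consensus_level(votes))
```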
Results and Recommendations
The board commented on the initially proposed analytical approach, objecting that the adjectives "mild", "moderate", and "severe" are vague and subjective: a patient may well be sorted into different categories by different physicians. Thus, this approach cannot be used unless it is based on a validated and widely accepted (visual) scale. Moreover, differences in patient management according to severity are often only a matter of the number of sessions and product amounts, not of different strategies.

The focus on individual aesthetic unit problems within the same face area appeared superfluous and irrelevant: in daily practice, most patients are treated with combination protocols for more than one mutually correlated aesthetic problem within the same area. However, a series of key orientations to categorize the usual problems and manage combination treatments were consensually agreed.

Guidelines on Common Key-Principles
It is not advisable to perform multiple procedures on the same area during the same session, because scant data are available on possible interactions, making it difficult to accurately incriminate the responsible agent/procedure in case of an emergent AE. However, the board members acknowledged that they do not always follow this rule, assuming thereby increased personal responsibility, and that no complications may arise from combining different treatment modalities in different anatomical areas during the same session.

As a rule, the rejuvenation protocol should successively aim for two main objectives: volume adjustment (reduction, replacement/augmentation or creation) in the first place, and tissue repositioning afterwards. A frequently needed optional final step aims at improving the overall result by performing "touch up" and skin quality improvement procedures. Regarding volume replacement, the board favours the PCL collagen stimulator, as it offers an additional long-term rejuvenating effect by stimulating neocollagenesis, but HA fillers can also be used, as per the treating physician's preference. Volume (fat) reduction can be performed by any physician's preferred usual technique (laser-assisted lipolysis, chemical lipolysis solution, liposuction, HIFU). Most board members recommend performing fat reduction before PLLA/PLGA suture insertion (respecting an interval of six to eight weeks after injection lipolysis, and a 12-week interval after liposuction or cryolipolysis); one participant prefers applying injection lipolysis two weeks after suture insertion. The PLLA/PLGA suspension sutures should be placed according to the currently recommended straight patterns, strictly avoiding the "U" and the angle ("L") patterns. The number of sutures implanted in each target area should be sufficient to induce an optimal effect and patient satisfaction. BoNTx injections are always recommended for eyebrow and neck rejuvenation, where they should be performed two weeks before the PLLA/PLGA suture insertion, since complete muscular relaxation allows better cone encapsulation and a more stable effect (see the "blanket statement" in Table 3). Their use is optional in other areas, where they are often injected before or during the same session as volume replacement. The cross-linked HA fillers or the PCL collagen stimulator may also be injected as an elective last step designed to improve the final result thanks to a "touch up" effect, ie, fine-line/wrinkle correction, skin quality improvement and beautification (eg, lip enhancement/augmentation, additional volume augmentation, and so on).

EBD Use
Some EBD tightening techniques (eg, RF, US) may interfere with the fillers and the PLLA/PLGA sutures and impair subsequent collagen production: it is thus recommended to perform them in separate sessions. They should preferably be performed before (six to eight weeks) the insertion of the PLLA/PLGA sutures: the key reason is that this period is needed to start collagen remodeling, resulting in better support for the PLGA cones of the suspension sutures.
If EBD tightening techniques are performed after suture insertion or filler injection, the interval should be longer (eight to 12 weeks) to avoid the risk of suture breakage or filler meltdown. When applied, EBD resurfacing techniques should preferably follow the PLLA/PLGA suspension suture insertion (after a two- to four-week interval). If resurfacing is performed before, the skin should be properly re-epithelized (healed) and free of any resurfacing-associated complication or AE (bacterial or viral infection, delayed healing areas) before suture insertion. EBD techniques may be used at any of the above-cited locations in addition to other, more area-specific treatments (the spacing rules stated in these guidelines are collected in the sketch at the end of this section).

Algorithms According to Treatment Areas
Area-specific treatment algorithms were developed according to the key principles described above, accounting for the usual area-specific rejuvenation priorities. To keep this report adequately concise, the proposed treatment sequences and spacings are specified in the relevant figures, while the text highlights only some conceivable alternatives or noticeable comments.

Upper Face
The overall upper-face rejuvenation procedure (sequence and spacing) was agreed with a strong consensus level. It starts with systematic BoNTx injections, associated with volume replacement. The sequence then differs depending on whether further tissue repositioning (eyebrow elevation) is based on either cross-linked HA or PCL collagen stimulator injections (Figure 1A) or on 8-cone PLLA/PLGA suture insertion (Figure 1B). The final touch-up/refinement session is optional.

Mid-Face
Mid-face rejuvenation treatment starts with volume replacement, followed by tissue repositioning (Figure 2); agreement level: strong consensus. The third "touch up" step is optional and should not be performed before the results of the previous steps have stabilized.

Lower Face
The overall lower-face rejuvenation procedure, starting with volume adjustment (reduction or replacement/augmentation), is described in Figure 3. A minor discrepancy existed between board members regarding the preferred sequence of injection lipolysis and PLLA/PLGA suture insertion: starting with injection lipolysis was the consensually adopted choice, but one participant preferred the reverse sequence (overall agreement level: consensus). The BoNTx-based muscular relaxation ("Nefertiti lift" 49,50) is optional and, when needed, should be separated by a two-week interval from the next step (tissue repositioning, suture insertion). The last (fourth) "touch up" step is also optional.

Neck
The two- to four-step neck rejuvenation treatment may start with either an optional volume reduction, an optional EBD tightening, or a systematic BoNTx-based platysma relaxation (Figure 4); overall agreement level: consensus. When performed, the EBD tightening should be separated by a long enough interval from the PLLA/PLGA suture insertion (tissue repositioning). Finally, an optional fifth step (dermal filler injections according to the instructions for use, two weeks after suture insertion) may be needed to perfect the results.

Optional touch up treatment: should be injected four to six weeks after initial volume replacement/augmentation when using the PCL collagen stimulator, or two weeks after the initial filler treatment when using the cross-linked HA.
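As a reading aid, the minimum spacing rules stated in these guidelines can be collected into a single lookup of (earlier procedure, later procedure) pairs. The sketch below encodes only the intervals explicitly given in the text, using the shorter bound of each range; it is an illustration, not clinical advice.

```python
# Hedged sketch: minimum inter-procedure intervals (weeks) stated above.
MIN_INTERVAL_WEEKS = {
    ("BoNTx", "PLLA/PLGA sutures"): 2,                   # relaxation before sutures
    ("injection lipolysis", "PLLA/PLGA sutures"): 6,     # 6-8 weeks recommended
    ("liposuction/cryolipolysis", "PLLA/PLGA sutures"): 12,
    ("EBD tightening", "PLLA/PLGA sutures"): 6,          # 6-8 weeks before sutures
    ("PLLA/PLGA sutures", "EBD tightening"): 8,          # 8-12 weeks if reversed
    ("PLLA/PLGA sutures", "EBD resurfacing"): 2,         # 2-4 weeks after sutures
    ("PLLA/PLGA sutures", "dermal filler touch up"): 2,  # neck, optional fifth step
    ("PCL collagen stimulator", "touch up"): 4,          # 4-6 weeks after PCL
    ("cross-linked HA", "touch up"): 2,
}

def earliest_week(first, second):
    """Minimum recommended spacing, or None if not discussed in the text."""
    return MIN_INTERVAL_WEEKS.get((first, second))

print(earliest_week("BoNTx", "PLLA/PLGA sutures"))  # -> 2
```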
Discussion
We propose here the first recommendations on the multimodal rejuvenation treatment of the face and neck involving three specific agents, ie, the PCL collagen stimulator, the PLLA/PLGA suspension sutures, and the cross-linked HA. Such recommendations are needed because previously published consensuses have focused on the separate use of the PCL collagen stimulator 34 or the PLLA/PLGA suspension sutures only, 40 while combination treatments are commonly used in daily practice and may lead to serious problems if inappropriate sequencing or spacing is followed (eg, unfavourable interaction between EBD techniques and the PLLA/PLGA sutures). We believe that our recommendations are reliable as they were designed by a multidisciplinary group of experienced physicians, are based on a formal consensus method, and only propose attitudes supported by a high agreement rate. However, the US-based board member could not directly comment on the fillers that are not approved by the Food and Drug Administration (FDA), but extrapolated recommendations from the use of FDA-approved fillers. Finally, our recommendations account for a wide range of other commonly used agents and techniques, rendering them probably relevant and helpful in daily practice.

We have deliberately designed our consensus as general guidelines on the best management strategy and not as detailed recommendations on precise problems. This is because the addressed practice field encompasses a nearly infinite number of individual problems. Like all other authors, we acknowledge that such guidelines are never enough: they provide only general support, not an individualized treatment plan. 6 The optimal treatment plan always results from a physician's individual knowledge (anatomy, aging physiology, product characteristics), training (injection or insertion techniques), general clinical competence, responsibility, and wisdom, combined with the patient's values and preferences. 17,34

Comparable consensus guidelines have also most often specified their recommendations according to separate areas (eg, upper, mid-, and lower face, neck), the type of injected agents or EBDs, and the treatment sequence and timing (Table 4). 6,18,51 The importance of spacing different treatments (at least one to two weeks) on the same area was generally highlighted, to allow the resolution of local side effects and to reliably assess efficacy results and potential AEs. 6,51 While some injectable agents (BoNTx, HA, CaHA) can safely be used on the same day and in any sequence, EBDs (MFU-V) should be delivered on a separate occasion, preferably before filler injection. 6,18,51

The main limitations of our guidelines pertain to the drawbacks associated with the expert consensus method and the lack of population specificity. Indeed, it has been emphasized that ethical guidelines should be evidence-based, ie, derived from randomized controlled trials (RCTs) and meta-analyses of RCTs, which bear a low risk of bias. 1 However, RCTs are very rare in the rejuvenation and beautification domain, especially regarding multimodal management. 52 Thus, as a rule, guidelines on combination treatments have been based on expert consensus, as are ours. 6,16,18,51,53,54 Compared to these sources, we have used fairly stringent consensus criteria. It has been acknowledged that expert advice may provide valuable guidance for a multi-modal approach to aesthetic treatment. 55
Parts of the available guidelines are focused on specific subpopulations, according to patients' ethnic origin, gender, or age, 6,18,19,53,54,56 as the achievement of optimal outcomes results from a patient-centred treatment plan that accounts for facial morphotype as well as personal and cultural aesthetic ideals. 18,19 Indeed, facial morphology and age-related changes differ across ethnic groups, 54 which results in distinct treatment goals and priorities or components of combination treatments. 6,18 However, the loss of volume occurs in all ethnicities, explaining why volumisation is always a crucial step in rejuvenation treatment. 19 The qualitative and quantitative differences in treatment are limited for early intervention/enhancement and restoration, most combination strategies being similar or only slightly different in Asian and Caucasian patients. 53 In addition, our board convened members from diverse geographic areas and cultural backgrounds, and the proposed protocols account for the main variations in ethnic aesthetic problems and ideals. However, our guidelines always need adaptation to specific contexts and individual needs.

Conclusion
These new practice guidelines will probably prove helpful for practitioners by advising the optimal management strategy in the multimodal rejuvenation treatment of different face areas when combining the PCL collagen stimulator, the PLLA/PLGA suspension sutures, and the cross-linked HA, whether associated or not with other frequently used MIPs. Individual treatment plans should always be adapted according to the physician's individual competence and the patient's preferences and needs.
Caspase-11 Activation in Response to Bacterial Secretion Systems that Access the Host Cytosol

Inflammasome activation is important for antimicrobial defense because it induces cell death and regulates the secretion of IL-1 family cytokines, which play a critical role in inflammatory responses. The inflammasome activates caspase-1 to process and secrete IL-1β. However, the mechanisms governing IL-1α release are less clear. Recently, a non-canonical inflammasome was described that activates caspase-11 and mediates pyroptosis and release of IL-1α and IL-1β. Caspase-11 activation in response to Gram-negative bacteria requires Toll-like receptor 4 (TLR4) and TIR-domain-containing adaptor-inducing interferon-β (TRIF)-dependent interferon production. Whether additional bacterial signals trigger caspase-11 activation is unknown. Many bacterial pathogens use specialized secretion systems to translocate effector proteins into the cytosol of host cells. These secretion systems can also deliver flagellin into the cytosol, which triggers caspase-1 activation and pyroptosis. However, even in the absence of flagellin, these secretion systems induce inflammasome activation and the release of IL-1α and IL-1β, but the inflammasome pathways that mediate this response are unclear. We observe rapid IL-1α and IL-1β release and cell death in response to the type IV or type III secretion systems of Legionella pneumophila and Yersinia pseudotuberculosis. Unlike IL-1β, IL-1α secretion does not require caspase-1. Instead, caspase-11 activation is required for both IL-1α secretion and cell death in response to the activity of these secretion systems. Interestingly, whereas caspase-11 promotes IL-1β release in response to the type IV secretion system through the NLRP3/ASC inflammasome, caspase-11-dependent release of IL-1α is independent of both the NAIP5/NLRC4 and NLRP3/ASC inflammasomes as well as TRIF and type I interferon signaling. Furthermore, we find both overlapping and non-redundant roles for IL-1α and IL-1β in mediating neutrophil recruitment and bacterial clearance in response to pulmonary infection by L. pneumophila. Our findings demonstrate that virulent, but not avirulent, bacteria trigger a rapid caspase-11-dependent innate immune response important for host defense.

Caspase-11 participates in the activation of a non-canonical inflammasome that induces cell death and the secretion of IL-1α and IL-1β in response to Gram-negative pathogens, such as Escherichia coli and Vibrio cholerae, and to particular toxins, such as the cholera toxin B subunit [26][27][28][29]. This non-canonical, caspase-11-dependent response to Gram-negative bacteria is independent of virulence-associated secretion systems that deliver bacterial molecules into the host cytosol and requires LPS-induced TLR4 signaling through the adaptor TIR-domain-containing adaptor-inducing interferon-β (TRIF) and TRIF-dependent type I interferon (IFN) production. Type I IFN signaling through the type I IFN receptor (IFNAR) is required for caspase-11 upregulation and activation, but how type I IFN mediates activation of caspase-11 is not well-defined [27][28][29]. Caspase-11 contributes to NLRP3-dependent activation of caspase-1 and subsequent caspase-1-dependent IL-1β secretion and cell death. Caspase-11 also facilitates an NLRP3- and caspase-1-independent pathway that results in cell death and release of IL-1α [26][27][28][29]. This caspase-11-dependent, caspase-1-independent pathway is responsible for LPS-induced septic shock in vivo [26,30].
Although caspase-11 is activated in response to signals from Gram-negative pathogens and certain pore-forming toxins, whether caspase-11 contributes to inflammasome activation in response to virulence-associated secretion systems that deliver bacterial ligands into the host cytosol is unknown. Bacterial pathogens use evolutionarily conserved secretion systems, such as type III or type IV secretion systems (T3SS or T4SS), to translocate effector proteins into the cytosol of host cells [31,32]. In addition to bona fide virulence factors, these secretion systems also translocate bacterial molecules such as flagellin or structural components of the secretion machinery itself, which results in inflammasome activation [14,16,[33][34][35][36]. Legionella pneumophila, an opportunistic pathogen that causes a severe pneumonia known as Legionnaires' disease [37,38], utilizes its dot/icm-encoded T4SS as a virulence factor to translocate bacterial effector proteins into the host cell cytosol and establish a replicative vacuole [39][40][41][42][43][44][45][46]. L. pneumophila induces T4SS-dependent inflammasome activation through two genetically distinct pathways [47]. T4SS-mediated translocation of flagellin into the cytosol triggers caspase-1 activation and pyroptosis through the NLR NAIP5 in conjunction with another NLR, NLRC4 [16,36,[47][48][49][50]. Caspase-1 activation is also triggered independently of the NLRC4/flagellin pathway through the adaptor protein ASC, but the bacterial factor that is recognized and the upstream proteins that regulate this pathway remain unknown [47,51]. However, although ASC is necessary for robust secretion of IL-1β in response to L. pneumophila as well as a number of pathogens, such as Salmonella or Yersinia species which employ T3SSs, ASC is dispensable for induction of pyroptosis that is rapidly triggered in response to these infections. We therefore considered the possibility that in addition to its role in delayed inflammasome activation in response to Gram-negative bacteria, caspase-11 might participate in rapid cell death and release of IL-1α in response to the presence of bacterial pathogens that access the host cell cytosol by means of type IV and type III secretion systems. Here, we demonstrate that IL-1α and IL-1β are rapidly released in response to bacterial T4SS activity independently of bacterial flagellin. In this system, we find IL-1β secretion requires caspase-1, but caspase-1 is dispensable for cell death and IL-1α release in response to a functional L. pneumophila T4SS. Instead, caspase-11 is required for both IL-1α release and cell death in response to L. pneumophila T4SS activity. Consistent with recent findings, caspase-11 contributes to optimal NLRP3-mediated caspase-1 activation and IL-1β secretion in response to L. pneumophila. However, caspase-11-dependent IL-1α release and cell death in L. pneumophila-infected cells are independent of the NAIP5/NLRC4 and NLRP3/ASC inflammasomes. In contrast to the role of TRIF and IFNAR in the response against Gram-negative bacteria, caspase-11 activation and cytokine release in response to the T4SS of L. pneumophila are independent of both TRIF and IFNAR signaling. We further demonstrate that T3SS activity of the unrelated pathogen Yersinia pseudotuberculosis induces a similarly rapid caspase-11-dependent response that also leads to cell death and release of IL-1α and IL-1β. Finally, we find that both IL-1α and IL-1β are critical in vivo for neutrophil recruitment and bacterial clearance.
Overall, our data show that caspase-11 is poised to respond robustly to a conserved feature of pathogenic bacteria, bacterial access to the host cytosol through specialized secretion systems. This establishes caspase-11 as a critical regulator of immune system-mediated discrimination of pathogenic and nonpathogenic bacteria.

Author Summary
The inflammasome, a multiprotein complex, is critical for host defense against bacterial infection. The inflammasome activates the host protease caspase-1 to process and secrete IL-1β. Another caspase, caspase-11, can cause cell death and IL-1α release. The bacterial signals that trigger caspase-11 activation are poorly understood. A common feature of many bacterial pathogens is the ability to inject virulence factors and other bacterial molecules into the host cell cytosol by means of a variety of virulence-associated secretion systems. These secretion systems can introduce bacterial flagellin into the host cytosol, which leads to caspase-1 activation and cell death. However, many bacteria lack or down-regulate flagellin yet still activate the inflammasome. Here, we show that the type IV secretion system of Legionella pneumophila and the type III secretion system of Yersinia pseudotuberculosis rapidly trigger caspase-11 activation in a flagellin-independent manner. Caspase-11 activation mediates two separate inflammasome pathways: one leading to IL-1β processing and secretion, and one leading to cell death and IL-1α release. Furthermore, we find these caspase-11-regulated cytokines are critical for neutrophil recruitment to the site of infection and clearance of non-flagellated Legionella in vivo. Overall, our findings show that virulent bacteria activate a rapid caspase-11-dependent immune response that plays a critical role in host defense.

Results
LPS priming induces rapid IL-1α and IL-1β secretion in response to L. pneumophila T4SS activity
L. pneumophila infection induces IL-1α and IL-1β secretion that requires T4SS activity [47,52]. IL-1β secretion is regulated by a flagellin-dependent NAIP5/NLRC4 inflammasome and a poorly defined ASC inflammasome that both activate caspase-1 [47,51]. The mechanisms underlying IL-1α secretion are less clear, but IL-1α secretion is still robustly induced by flagellin-deficient L. pneumophila, which do not activate the NAIP5/NLRC4 inflammasome [52]. Recent studies have described a non-canonical inflammasome triggered in response to Gram-negative bacteria. This non-canonical inflammasome requires lipopolysaccharide (LPS) for the upregulation and activation of caspase-11 and subsequent IL-1α and IL-1β release [26][27][28][29]. Whether caspase-11 is also activated in response to bacteria that use specialized secretion systems to translocate bacterial molecules into the host cytosol is unknown. We thus hypothesized that LPS priming would upregulate caspase-11, pro-IL-1α, and pro-IL-1β and allow for more robust and rapid IL-1α and IL-1β secretion in response to T4SS activity. To test this, we first compared IL-1α and IL-1β release in unprimed and LPS-primed bone marrow-derived macrophages (BMDMs). As shown previously [48,52], unprimed BMDMs secrete robust levels of IL-1α and IL-1β by 20 hours postinfection with wild-type L. pneumophila (WT Lp) (Figure 1A). Slightly attenuated levels of secreted IL-1α and IL-1β are observed with flagellin-deficient L. pneumophila (ΔflaA Lp), which do not activate the NAIP5/NLRC4 inflammasome [17,18]. Secretion of both cytokines is significantly diminished during infection with L.
pneumophila lacking DotA, an essential component of the T4SS (ΔdotA Lp), and is significantly diminished in caspase-1/caspase-11-deficient (Casp1−/−Casp11−/−) macrophages as well (Figure 1A). The diminished IL-1 secretion induced by ΔdotA Lp is not due to a lack of pro-IL-1 production, as ΔdotA Lp and WT Lp induce robust levels of pro-IL-1β (Figure S1A). At 4 hours postinfection, unprimed macrophages do not secrete IL-1 (Figure 1B). However, LPS-primed cells rapidly secrete IL-1α and IL-1β, and this secretion is abrogated in Casp1−/−Casp11−/− macrophages (Figure 1B). Secretion of IL-18, another IL-1 family cytokine, also requires T4SS activity and is eliminated in Casp1−/−Casp11−/− cells (Figure S1B). Comparable levels of the caspase-1/caspase-11-independent cytokines IL-12 and TNF-α are secreted in the absence and presence of LPS priming (Figure S1C-D). These data suggest that LPS priming upregulates a factor required for rapid IL-1α and IL-1β release in response to L. pneumophila T4SS activity.

Caspase-1 catalytic activity is required for IL-1β but not IL-1α secretion
Secretion of IL-1β in response to both canonical and non-canonical inflammasome activation requires caspase-1 [26,53,54]. In contrast, IL-1α release downstream of the non-canonical inflammasome depends on caspase-11, and does not require caspase-1 [26]. To test if the catalytic activity of caspase-1 is required for IL-1α secretion in response to L. pneumophila, we inhibited caspase-1 catalytic activity with the pharmacological inhibitor YVAD-cmk (YVAD). Consistent with previous studies [53], IL-1β secretion in response to L. pneumophila is substantially inhibited by YVAD. However, YVAD has no effect on IL-1α secretion, indicating that IL-1α release in response to L. pneumophila does not require caspase-1 catalytic activity (Figure 1C), as has been shown for other inflammasome activators [55]. Given that IL-1α secretion occurs more rapidly upon LPS priming, is abrogated in Casp1−/−Casp11−/− macrophages, and does not require caspase-1 catalytic activity, we considered the possibility that caspase-11 might participate in inflammasome activation during L. pneumophila infection.

Caspase-11 contributes to inflammasome activation in response to flagellin-deficient L. pneumophila
To test the genetic requirement for caspase-11 in the inflammasome response to L. pneumophila, we infected BMDMs from either caspase-1-deficient (Casp1−/−) or caspase-11-deficient (Casp11−/−) mice. In the absence of flagellin, caspase-11 is required for IL-1α secretion, whereas it is not essential for IL-1β secretion but contributes to maximal secretion (Figure 2A). These data suggest that caspase-11 is activated in response to L. pneumophila infection independently of flagellin. Indeed, there is robust processing and secretion of caspase-11 in response to WT and ΔflaA Lp (Figure S2).

Figure 2 legend excerpt: Cell death (% cytotoxicity) was measured by LDH release into the supernatants relative to Triton X-100-lysed cells. Graphs show the mean ± SEM of triplicate wells. (C) Levels of processed caspase-1 (casp-1 p10) in the supernatants, and of full-length caspase-1 (pro-casp-1) and β-actin in the cell lysates, were determined by immunoblot analysis. Data are representative of three independent experiments. *** is p < 0.001 by two-way ANOVA with Bonferroni post-test, ** is p < 0.01 by two-way ANOVA with Bonferroni post-test, and * is p < 0.05 by unpaired t-test. NS is not significant. doi:10.1371/journal.ppat.1003400.g002
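The cytotoxicity readout named in the legend above is a simple normalization of released LDH to the detergent-lysed maximum. A minimal sketch follows; the absorbance values are illustrative placeholders, not measured data.

```python
# Hedged sketch: % cytotoxicity from LDH release, relative to Triton X-100 lysis.
def percent_cytotoxicity(a_sample, a_background, a_total_lysis):
    """LDH absorbance in infected wells, normalized after background subtraction."""
    return 100.0 * (a_sample - a_background) / (a_total_lysis - a_background)

# Hypothetical OD readings: medium-only, infected well, Triton-lysed well
print(percent_cytotoxicity(a_sample=0.62, a_background=0.10, a_total_lysis=1.40))
# -> 40.0 (% cytotoxicity)
```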
and ΔflaA Lp (Figure S2). In accordance with previous findings [26,53], caspase-1 is absolutely required for IL-1β secretion. In contrast, we observe robust IL-1α release even in the absence of caspase-1. Both IL-1α and IL-1β release in response to ΔflaA Lp are caspase-11-dependent in both primed and unprimed macrophages (Figures 2 and S3A–B), making L. pneumophila distinct from other Gram-negative bacteria that require priming to induce robust caspase-11 upregulation and activation [27]. Thus, while caspase-11 contributes to maximal caspase-1-dependent IL-1β secretion, it is both necessary and sufficient for IL-1α release in response to flagellin-deficient L. pneumophila.

Cell death in B6 BMDMs is partially flagellin-dependent, but is flagellin-independent in Casp1−/− BMDMs (Figure 2B). Importantly, cell death in response to flagellin-deficient L. pneumophila requires caspase-11, thus correlating caspase-11-dependent cell death with IL-1α release from host cells. In contrast, and consistent with previous findings [26], LPS+ATP induces canonical caspase-1-dependent pyroptosis and secretion of IL-1α and IL-1β that is independent of caspase-11. Because caspase-1 must be processed to mediate IL-1β secretion [53], we examined whether caspase-1 processing is decreased in the absence of caspase-11, which could account for the decreased IL-1β secretion in response to ΔflaA Lp. Caspase-1 processing is slightly attenuated but not abrogated in response to ΔflaA Lp in Casp11−/− macrophages, consistent with the slight decrease in IL-1β secretion (Figures 2C and S3C). Thus, flagellin-deficient L. pneumophila trigger a canonical caspase-1-dependent inflammasome as well as a non-canonical caspase-11-dependent inflammasome.

We next sought to determine whether IL-1α is also released independently of ASC and NLRC4 during in vivo infection. Because flagellin-deficient L. pneumophila do not activate the NLRC4 inflammasome [16,17,47], infecting Asc−/− mice with ΔflaA Lp eliminates both the ASC and NLRC4 inflammasome pathways. Importantly, the level of IL-1β in the bronchoalveolar lavage fluid (BALF) 24 hours post-infection is significantly attenuated in Asc−/− mice infected with ΔflaA Lp (Figure 3C). In contrast, the level of IL-1α in the BALF is unaffected even in the absence of both the ASC and NLRC4 inflammasomes. Both IL-1α and IL-1β release are significantly diminished in caspase-1/caspase-11-deficient mice (Figure S5). Collectively, our data indicate that L. pneumophila triggers caspase-11 activation and IL-1α release independently of the ASC and NLRC4 inflammasomes during both in vitro and in vivo infection.

Non-canonical inflammasome responses to L. pneumophila occur independently of TRIF and IFNAR

Recent data demonstrate that caspase-11 activation in response to a wide variety of Gram-negative bacteria requires TLR4 signaling through its adaptor TRIF and subsequent type I IFN production [27–29]. To determine if L. pneumophila engages a similar TRIF and type I IFN receptor (IFNAR)-dependent pathway for caspase-11 activation, we infected TRIF-deficient (Trif−/−) and IFNAR-deficient (Ifnar−/−) BMDMs. Unlike the response to E. coli, L. pneumophila infection of unprimed macrophages triggered robust cell death and secretion of IL-1α and IL-1β that was independent of IFNAR and TRIF (Figure 5A–B).
Consistently, priming with the TLR1/2 agonist Pam3CSK4, which results in TRIF- and IFNAR-dependent cytokine secretion and cell death in response to E. coli [27], still induced cell death and cytokine secretion in TRIF- and IFNAR-deficient cells in response to L. pneumophila (Figure S8A–B). These data suggest that during L. pneumophila infection, caspase-11 is upregulated and activated independently of TRIF and IFNAR signaling. Indeed, caspase-11 is still robustly processed and secreted independently of IFNAR and TRIF (Figures 5C and S9). Notably, substantially upregulated levels of pro-caspase-11 are not observed in the lysates of cells infected with WT or ΔflaA Lp because both the pro and cleaved forms of caspase-11 are rapidly secreted into the cell supernatant upon infection (Figures 5C and S9). Accordingly, lysates from IFNAR- and TRIF-deficient macrophages infected with L. pneumophila express comparable levels of pro-caspase-11 to wild-type macrophages, whereas TRIF and IFNAR do contribute to upregulation of pro-caspase-11 in response to E. coli (Figure S10A–C). When the macrophages are primed with LPS prior to infection, there is a moderate contribution of TRIF and IFNAR signaling to inflammasome activation, consistent with the observation that LPS stimulates the TLR4-TRIF-IFNAR axis involved in caspase-11 upregulation (Figure S8C–D).

Because the caspase-11-dependent response to L. pneumophila is TRIF-independent, we investigated whether the TLR signaling adaptor MyD88 contributes to caspase-11 upregulation. When immortalized macrophages deficient for both MyD88 and TRIF (iMyd88−/−Trif−/−) were infected, caspase-11 upregulation was abrogated in response to both WT and ΔflaA Lp (Figure S11A–B), and we were unable to detect caspase-11 activation (data not shown). Thus, although TRIF is not required for caspase-11 activation, a TLR-dependent signal is likely required, as the loss of both MyD88 and TRIF eliminates caspase-11 upregulation and activation.

Caspase-11 mediates inflammasome activation in response to Yersinia pseudotuberculosis type III secretion system activity

Because caspase-11 activation in response to L. pneumophila expressing a functional T4SS is so rapid and robust, we sought to test whether this robust caspase-11-dependent inflammasome activation might be a general response to the activity of specialized secretion systems that allow for bacterial access to the host cytosol. The Yersinia pseudotuberculosis type III secretion system (T3SS) induces inflammasome activation independently of bacterial flagellin and the known secreted effector proteins, and this inflammasome activation is important for bacterial clearance [57]. Since wild-type Yersinia induces cell death that is independent of both caspase-1 and -11 and requires the secreted effector YopJ [57,58], we instead infected Casp1−/−Casp11−/−, Casp1−/−, and Casp11−/− BMDMs with a strain of Y. pseudotuberculosis that expresses a T3SS but lacks the six known secreted effectors (Δ6 Yp). Similarly to L. pneumophila infection, both IL-1α and IL-1β release in response to Δ6 Yp are caspase-11-dependent (Figure 6A). Again, caspase-1 is absolutely required for IL-1β secretion, whereas IL-1α is released independently of caspase-1. Secretion of IL-12, an inflammasome-independent cytokine, is unaffected (Figure S12). Cell death in response to Δ6 Yp is both caspase-1- and caspase-11-dependent, with a more dramatic reduction in death in Casp11−/− BMDMs (Figure 6B). Furthermore, Y.
pseudotuberculosis-induced release of both IL-1α and IL-1β requires the presence of a functional T3SS, as Y. pseudotuberculosis unable to form a functional T3SS pore in the host cell plasma membrane (ΔyopB Yp) do not induce secretion of either cytokine. These data indicate a general role for caspase-11 in the induction of rapid cell death and robust release of IL-1α and IL-1β in response to bacterial secretion systems that are capable of accessing the host cell cytosol, but may be independent of the activities of specific virulence factors per se.

IL-1α and IL-1β control bacterial burden and neutrophil recruitment in vivo

As caspase-11 contributes to flagellin-independent IL-1α and IL-1β release from infected macrophages in vitro and IL-1α and IL-1β secretion is flagellin-independent in vivo, we wanted to determine the contribution of IL-1α and IL-1β to host defense against L. pneumophila in vivo. IL-1α and IL-1β both bind the IL-1 receptor (IL-1R), which signals through the MyD88 adaptor protein [59–61]. As MyD88 is critical for control of L. pneumophila replication during in vivo infection but deletion of an individual MyD88-dependent TLR or a combination of TLRs does not recapitulate MyD88 deficiency, it is likely that other MyD88-dependent receptors, including the IL-1R, may play a role [62–66]. IL-1R signaling contributes to chemokine production by non-hematopoietic cells during infection with wild-type, flagellin-expressing L. pneumophila [67]. However, the role of IL-1R signaling during infection with flagellin-deficient L. pneumophila, which do not activate the NAIP5/NLRC4 inflammasome, has not been investigated. We therefore infected B6 and IL-1R-deficient (Il1r1−/−) mice intranasally with ΔflaA Lp and measured bacterial burden in the lung over the course of seven days. Though both B6 and Il1r1−/− mice received similar initial bacterial burdens, Il1r1−/− mice show a defect in bacterial clearance as early as 24 hours post-infection (Figure 7A). Bacterial burden remains elevated in the absence of IL-1R signaling, with the Il1r1−/− mice still exhibiting a log increase in bacterial load at 120 hours post-infection. Since IL-1R signaling is important for neutrophil recruitment [68], we examined whether Il1r1−/− mice have a defect in neutrophil recruitment to the pulmonary airway during L. pneumophila infection. Indeed, Il1r1−/− mice exhibit a significant decrease in neutrophil recruitment to the airway 24 hours post-infection, possibly contributing to their inability to efficiently clear the pathogen (Figure 7B–C).

The IL-1R signals in response to both IL-1α and IL-1β; however, these cytokines can play non-redundant roles in antibacterial defense [69]. To determine the relative contributions of IL-1α and IL-1β to neutrophil recruitment and bacterial clearance during L. pneumophila infection, we utilized neutralizing antibodies to selectively block either IL-1α or IL-1β prior to infection. Specific cytokine neutralization in the BALF could be observed 24 hours post-infection (Figure S13). Critically, IL-1α neutralization alone significantly diminishes the percentage of neutrophils recruited to the BALF at 24 hours post-infection and results in a half-log increase in bacterial CFUs, in marked contrast to isotype control antibody or neutralization of IL-1β, which on its own did not have a significant effect (Figure 7D–F).
However, neutralization of both IL-1α and IL-1β fully recapitulates the magnitude of neutrophil reduction and defect in bacterial clearance observed in the Il1r1−/− mice. Collectively, these data indicate that although there are some overlapping roles for these cytokines during L. pneumophila infection, IL-1α plays a distinct role from IL-1β in driving neutrophil recruitment to the airway and mediating bacterial clearance.

Discussion

Inflammasomes respond robustly to conserved features of pathogenic microbes, such as pore-forming toxins or specialized secretion systems that access the host cytosol. Inflammasomes therefore play a central role in enabling the immune system to discriminate between virulent and avirulent bacteria [70]. Recent reports show a role for caspase-11 in regulating the activation of a non-canonical inflammasome that promotes cell death as well as IL-1α and IL-1β secretion. This non-canonical inflammasome responds to both pathogenic and non-pathogenic Gram-negative bacteria independently of specialized secretion systems that translocate bacterial molecules into the host cytosol [26–29]. This pathway involves the TRIF- and IFNAR-dependent upregulation and activation of caspase-11 and occurs with relatively delayed kinetics in comparison to the response to pathogenic bacteria. Intriguingly, we find that the activity of the L. pneumophila Dot/Icm T4SS leads to rapid and robust caspase-11 activation independently of the TRIF-IFNAR axis, and this activation triggers rapid cell death and release of both IL-1α and IL-1β (Figure 8). We extend these results to show that the evolutionarily distinct T3SS of another pathogen, Y. pseudotuberculosis, also rapidly triggers caspase-11-dependent responses. Collectively, our findings demonstrate that caspase-11 is critical for inflammasome activation in response to the secretion systems of virulent bacteria that enable bacterial molecules to access the host cell cytosol and demonstrate that IL-1α and IL-1β together play a crucial protective role during acute infection in vivo.

[Figure 8. Caspase-11 controls multiple pathways of inflammasome activation in response to bacterial secretion systems that access the host cytosol. Three distinct inflammasome pathways are induced upon interaction of virulent bacteria with host cells. Translocation of flagellin into the host cytosol by specialized secretion systems triggers a NAIP5/NLRC4/caspase-1 inflammasome that leads to cell death, IL-1α, and IL-1β release. Virulent bacteria induce two separate pathways of caspase-11-dependent inflammasome activation through a two-signal model. First, TLR stimulation by PAMPs (signal one) leads to upregulation of pro-IL-1α, pro-IL-1β, NLRP3, and pro-caspase-11. Next, cytosolic detection of virulence activity, namely type III or type IV secretion (signal two), leads to caspase-11 processing and activation. Active caspase-11 contributes to NLRP3-mediated inflammasome activation and caspase-1-dependent IL-1β secretion. Caspase-11 also mediates caspase-1-independent cell death and IL-1α release through a pathway that is independent of the NLRP3/ASC and NAIP5/NLRC4 inflammasomes and involves an unknown host sensor. doi:10.1371/journal.ppat.1003400.g008]
We demonstrate that in response to the activity of bacterial secretion systems that enable cytosolic access, caspase-11 contributes to NLRP3-mediated inflammasome activation and caspase-1-dependent IL-1β secretion and to a second, ASC- and NLRC4-independent pathway that does not require caspase-1 and leads to cell death as well as robust IL-1α release. These L. pneumophila-induced pathways are similar to recent findings with a number of Gram-negative bacterial pathogens, including C. rodentium, E. coli, and S. typhimurium [26–29]. However, we observe rapid and robust T4SS-dependent activation of these two caspase-11-mediated pathways by L. pneumophila, whereas the response to Gram-negative bacteria lacking specialized secretion systems occurs less robustly and with much slower kinetics. Intriguingly, we observe a similarly rapid caspase-11-dependent induction of cell death and IL-1 release in response to the structurally and evolutionarily unrelated T3SS of Y. pseudotuberculosis. Importantly, this pathway is independent of host sensing of flagellin, as it is triggered by flagellin-deficient L. pneumophila, and Y. pseudotuberculosis downregulates flagellin expression when the T3SS is expressed [71]. Thus, our data suggest that the caspase-11 inflammasome is poised to respond robustly and rapidly to the activity of bacterial secretion systems that are capable of delivering microbial products to the host cell cytosol and may enable the host to respond to pathogens that evade flagellin-dependent responses. This could have significance for understanding the role of caspase-11 activation at mucosal sites colonized by large numbers of commensal bacteria. At mucosal barriers, it would be expected that the non-canonical inflammasome pathway would not be robustly activated by commensal bacteria but could respond rapidly to the presence of bacterial secretion systems that enable pathogen access to the host cytosol.

Our findings are consistent with recent observations that the L. pneumophila Dot/Icm T4SS triggers the caspase-11-dependent non-canonical inflammasome [72], as well as the finding that bacteria that enter the cytosol, either due to failure to maintain the integrity of their replicative vacuoles or through natural entry into the cytoplasm, also trigger rapid caspase-11 activation [73]. Thus, the pathway that leads to caspase-11 activation appears to be particularly sensitive to pathogens that 'violate the sanctity of the cytosol' [74], either through the activity of specialized secretion systems that translocate bacterial molecules into the cytosol or through their direct entry into the host cell cytosol. Whether other pathogens that replicate within the cytosol, such as Listeria or Shigella, or cytosolic viruses possess mechanisms to evade this pathway remains to be determined.

L. pneumophila T4SS-mediated activation of caspase-11 differs from the other pathways of non-canonical inflammasome activation in several ways. First, L. pneumophila-mediated activation of caspase-11 does not require TRIF or IFNAR signaling. We observe a moderate dependence on TRIF and IFNAR signaling when macrophages are primed with LPS prior to infection, consistent with LPS-dependent upregulation of caspase-11 expression through the TLR4-TRIF-IFNAR axis [27–29]. However, in the absence of LPS priming, TRIF and IFNAR signaling are dispensable for L. pneumophila-dependent caspase-11 activation.
In this context, it is likely that MyD88 compensates for the absence of TRIF, as cells deficient for both MyD88 and TRIF failed to activate caspase-11 in response to L. pneumophila. Thus, although the TLR4-TRIF-IFNAR axis is required for caspase-11 activation in response to Gram-negative bacteria, a MyD88-dependent signal is sufficient for caspase-11 activation in response to pathogens that utilize virulence-associated secretion systems to translocate bacterial molecules into the host cytosol. It is possible that different signals are capable of activating caspase-11 through distinct pathways, but these pathways occur with distinct kinetics because they may indicate distinct levels of pathogenicity. Thus, while caspase-11 is robustly upregulated by LPS priming, this upregulation alone is insufficient for rapid activation in response to bacteria that lack specialized secretion systems, as ΔdotA or ΔyopB bacteria do not induce rapid cell death even in primed cells. Collectively, these data indicate a two-signal model for rapid caspase-11 activation during infection with virulent bacteria, where bacterial PAMPs induce caspase-11 upregulation, but rapid caspase-11 activation requires a second, secretion system-dependent signal (Figure 8).

The specific secretion system-dependent signals responsible for caspase-11 activation are currently unknown. While rapid activation of caspase-11 requires the presence of a functional type III or type IV secretion system or cytosolic access of the bacteria, whether the signal is an as-yet-undefined translocated bacterial molecule or a cellular response to the pore-forming activity of these systems remains to be determined. The delayed NLRP3- and caspase-11-dependent response to Gram-negative bacteria suggests that, in addition to LPS-induced upregulation of inflammasome components, bacterial mRNA provides an additional signal for activating the NLRP3 inflammasome [18,75], although the role of caspase-11 in this response has not been formally demonstrated. Activity of the type III or IV secretion systems may bypass the need for bacterial mRNA. Alternatively, these secretion systems may translocate bacterial RNA [70,76,77], and the rapid caspase-11-dependent response they induce could be due to more rapid delivery of bacterial mRNA into the host cell cytosol. Furthermore, the host factors required for activation of the NLRP3-independent, caspase-11-dependent inflammasome also remain to be identified. As this pathway is independent of flagellin sensing, NLRP3, ASC, and NLRC4, an unknown upstream sensor and/or adaptor may be involved in caspase-11 activation in response to a translocated bacterial substrate or an endogenous signal induced by infection. This sensor may also be upregulated by type I IFN signaling itself [27–29].

Our data show that IL-1α release during L. pneumophila infection is controlled by two independent pathways, one involving the flagellin-dependent NAIP5/NLRC4 and caspase-1-dependent inflammasome and a second pathway involving the NLRP3-independent, caspase-11-dependent inflammasome (Figure 8). Though we demonstrate that IL-1α release has an important biological consequence in vivo for neutrophil recruitment and bacterial clearance, it is unclear if IL-1α release is regulated by unconventional secretion, as is the case for IL-1β [78]. As both pathways that control IL-1α release also lead to cell death, our data are consistent with a model in which IL-1α is an endogenous alarmin that is released during cell death [79].
Interestingly, caspase-11 also contributes to control of flagellin-expressing L. pneumophila by serving as a component of an NLRC4-dependent inflammasome that promotes trafficking of the L. pneumophila-containing vacuole to lysosomes [80]. Thus, caspase-11 may function in multiple ways to control L. pneumophila infection. Importantly, we find that IL-1α, IL-1β, and IL-1R signaling play an important role in the control of L. pneumophila infection through efficient neutrophil recruitment to the airway. IL-1α and IL-1β play both distinct and overlapping roles in mediating neutrophil recruitment and controlling bacterial replication, as depletion of IL-1α alone showed a more pronounced defect in neutrophil recruitment and bacterial clearance than depletion of IL-1β alone, but loss of both cytokines resulted in a further reduction of neutrophil recruitment and an increased defect in bacterial clearance. Further analysis is required to define the relative contributions of the various caspase-11-mediated effector functions to the control of L. pneumophila replication in vivo.

In conclusion, these studies demonstrate that T3SS and T4SS activities trigger rapid and robust activation of caspase-11. This activation contributes to maximal NLRP3-dependent IL-1β secretion as well as to NLRP3-independent IL-1α release and host cell death. The downstream effector functions of these pathways are important for host defense against L. pneumophila in vivo, as IL-1α and IL-1β promote neutrophil recruitment to L. pneumophila-infected lungs and control pulmonary bacterial replication. Our results highlight the contribution of caspase-11 to rapid inflammasome activation and discrimination between pathogenic and nonpathogenic bacteria.

Materials and Methods

Ethics statement

This study was carried out in strict accordance with the federal regulations set forth in the Animal Welfare Act (AWA), the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health, and the guidelines of the University of Pennsylvania Institutional Animal Care and Use Committee. The protocols were approved by the Institutional Animal Care and Use Committee at the University of Pennsylvania (protocols #803465 and #803459).

Bacterial strains

Legionella pneumophila serogroup 1 strains were used in all experiments. Macrophages were infected with Lp02 (thyA), a thymidine auxotroph derived from strain Lp01 [40], or the ΔdotA [81] and ΔflaA [16] isogenic mutant strains. For in vivo studies, mice were infected with the Lp02 ΔflaA or the JR32 [82] ΔflaA isogenic mutant strain where indicated. For in vitro and in vivo studies, L. pneumophila were cultured on charcoal yeast extract agar for 48 hours at 37°C prior to infection. Escherichia coli strain BL21 was cultured in LB broth for 16 hours at 37°C prior to infection. The Yersinia pseudotuberculosis strains used were IP2666 ΔyopHOJEMK (Δ6) [58] and ΔyopB [83]. Yersinia were grown overnight with aeration in 2×YT broth at 26°C. The bacteria were diluted into fresh 2×YT containing 20 mM sodium oxalate and 20 mM MgCl2. Bacteria were grown with aeration for 1 hour at 26°C followed by 2 hours at 37°C prior to infection.

In vivo infection studies

8–12-week-old mice were anesthetized by intraperitoneal injection of a ketamine/xylazine/PBS solution at a dose of 100 mg/kg ketamine and 10 mg/kg xylazine. Mice were infected intranasally with 40 µl of a bacterial suspension containing 1 × 10⁶ CFU of L. pneumophila or PBS vehicle control.
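The bacterial-load readout used throughout the in vivo experiments comes from plating lung homogenate on CYE agar and counting colonies. A minimal sketch of the standard serial-dilution back-calculation implied by that plating step is shown below; the dilution factor and volumes are illustrative assumptions, not values stated in the text.

```python
# Minimal sketch of the standard CFU-per-lung calculation implied by the
# plating step above. The dilution scheme and plated volume are illustrative
# assumptions, not values stated in the text.

def cfu_per_lung(colonies: int, dilution_factor: float,
                 plated_volume_ml: float, homogenate_volume_ml: float) -> float:
    """CFU/lung = colonies * dilution factor * (total homogenate / plated volume)."""
    return colonies * dilution_factor * (homogenate_volume_ml / plated_volume_ml)

# Example: 85 colonies on a 10^-3 dilution plate, 0.1 mL plated from 1 mL homogenate
print(f"{cfu_per_lung(85, 1e3, 0.1, 1.0):.2e} CFU per lung")
```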
For antibody neutralization experiments, mice were injected intraperitoneally with 100 µg anti-IL-1α antibody (clone ALF-161), 100 µg anti-IL-1β antibody (clone B122), 100 µg each of the anti-IL-1α and anti-IL-1β antibodies, or 100 µg Armenian hamster IgG1 isotype control antibody (eBioscience) 16 hours prior to intranasal infection. At the indicated timepoints after infection, mice were sacrificed, and the bronchoalveolar lavage fluid (BALF) and lungs were harvested. To determine bacterial load, the lungs were mechanically homogenized in sterile distilled H2O, and a portion of the lysate was spread onto CYE plates. Animal experiments were performed in accordance with approved University of Pennsylvania Institutional Animal Care and Use Committee protocols and procedures.

Macrophage experiments

Bone marrow was collected from the femurs and tibiae of mice. Bone marrow cells were differentiated into macrophages by culturing the cells in RPMI containing 30% L929 cell supernatant and 20% FBS at 37°C in a humidified incubator. The macrophages were replated one day prior to infection in RPMI containing 15% L929 cell supernatant and 10% FBS. For experiments involving LPS-primed macrophages, macrophages in 48-well plates (2.0 × 10⁵ cells/well) were pretreated with 0.5 µg/mL LPS for 2.5 hours and either mock-infected with PBS, infected with L. pneumophila at an MOI of 10 for 4 hours, or treated with 2.5 mM ATP for 1 or 4 hours. For experiments performed in the absence of LPS priming, macrophages in 48-well plates (2.0 × 10⁵ cells/well) were either mock-infected with PBS, infected with L. pneumophila at an MOI of 10 for 16 or 20 hours, or infected with E. coli at an MOI of 25 for 1 hour followed by gentamicin treatment for 15 hours. To assess the involvement of caspase-1 catalytic activity, macrophages were treated with 20 µM or 40 µM of the caspase-1 inhibitor YVAD-cmk (Bachem) or an equivalent volume of dimethyl sulfoxide (vehicle control) 0.5 hours prior to infection. For L. pneumophila and E. coli infections, bacteria were centrifuged down onto the macrophages at 1200 rpm for 10 minutes prior to incubation. For Y. pseudotuberculosis infection, bacteria were washed three times with pre-warmed DMEM, added to the cells at an MOI of 20, and centrifuged down onto the macrophages at 1000 rpm for 5 min. Cells were incubated at 37°C for 1 hour post-infection followed by addition of 100 µg/mL gentamicin. Supernatants were harvested 4 hours post-infection for ELISA and LDH analysis.

Cytotoxicity assays

Cells were infected or treated as described above, and supernatants were harvested at the indicated times post-infection. Lactate dehydrogenase (LDH) release was quantified using the LDH Cytotoxicity Assay Kit (Clontech) according to the manufacturer's instructions.

Immunoblotting

Supernatants from infected cells were mixed 1:1 with 2× SDS-PAGE sample buffer, or infected BMDMs were directly lysed in 1× SDS-PAGE sample buffer. Samples were boiled, separated by SDS-PAGE, and transferred to Immobilon-P membranes (Millipore). Primary antibodies against caspase-1 p10 (Santa Cruz Biotechnology), caspase-11 (Sigma, clone 17D9), IL-1β (R&D Systems), and β-actin (Sigma) were used. Detection was performed with HRP-conjugated anti-rabbit IgG (Cell Signaling Technology) or anti-rat IgG (Santa Cruz Biotechnology or Jackson Immuno).
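The Figure 2 legend quantifies cell death as LDH release relative to Triton X-100-lysed cells. A minimal sketch of that percentage calculation follows; the background subtraction against mock-infected wells and the example absorbance readings are assumptions for illustration, not values from the study.

```python
# Minimal sketch of % cytotoxicity as LDH release relative to fully lysed
# (Triton X-100) control wells. Background subtraction of mock-infected wells
# is a common convention and is assumed here.

def pct_cytotoxicity(sample_od: float, mock_od: float, triton_od: float) -> float:
    """Percent LDH release relative to the Triton X-100 total-lysis control."""
    return 100.0 * (sample_od - mock_od) / (triton_od - mock_od)

# Example absorbance readings from an LDH assay plate (arbitrary units)
print(f"{pct_cytotoxicity(sample_od=0.85, mock_od=0.20, triton_od=1.60):.1f}% cytotoxicity")
```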
ELISA

Harvested supernatants from infected macrophages or the BALF from infected mice were assayed using capture and detection antibodies specific for IL-18 (MBL), IL-1α, IL-1β, and IL-12p40 (BD Biosciences).

Flow cytometry

To determine neutrophil recruitment to the airway, BALF cells were stained with Live/Dead Fixable Dead Cell Stain (Invitrogen) and antibodies specific for CD45, Gr-1 (eBioscience), and Ly6G (BioLegend). Data were collected with an LSRII flow cytometer (BD Biosciences), and post-collection data were analyzed using FlowJo (TreeStar). Cells were gated on singlets and live cells. Neutrophils were identified as being CD45+, Gr-1+, and Ly6G+.

Statistical analysis

Plotting of data and statistical analysis were performed using GraphPad Prism software, and statistical significance was determined by the unpaired two-tailed Student's t test, one-way ANOVA with Tukey post-test, or two-way ANOVA with Bonferroni post-test. Differences were considered statistically significant if the P value was <0.05.

Figure S12. Caspase-11-deficient cells secrete comparable amounts of IL-12 in response to Y. pseudotuberculosis. BMDMs from B6, Casp1−/−Casp11−/−, Casp1−/−, or Casp11−/− mice were primed with 0.05 µg/mL LPS for 2.5 hours and infected with type III secretion system-deficient Y. pseudotuberculosis (ΔyopB Yp) or effectorless Y. pseudotuberculosis ΔyopHOJEMK (Δ6 Yp), mock-infected with PBS, or treated with 2.5 mM ATP for 4 hours. The level of IL-12p40 in the supernatants was measured by ELISA.
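The gating scheme above (singlets → live → CD45+ Gr-1+ Ly6G+ neutrophils) can be expressed programmatically. The sketch below applies the same gates to a toy events table; the column names, intensity values, and positivity threshold are illustrative assumptions, not values exported from FlowJo.

```python
# Minimal sketch of the flow-cytometry gating described above, applied to a
# toy events table. Thresholds and values are illustrative assumptions.
import pandas as pd

events = pd.DataFrame({
    "singlet": [True, True, True, True, False],
    "live":    [True, True, True, False, True],
    "CD45":    [1200,  900,   50, 1100,  800],  # fluorescence intensities
    "Gr1":     [ 800,   30,   20,  700,  900],
    "Ly6G":    [ 950,   10,   15,  800,  850],
})

POS = 100  # assumed positivity threshold

gated = events[events.singlet & events.live]
neutrophils = gated[(gated.CD45 > POS) & (gated.Gr1 > POS) & (gated.Ly6G > POS)]
print(f"Neutrophils: {100 * len(neutrophils) / len(gated):.1f}% of live singlets")
```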
Deficit Irrigation Stabilizes Fruit Yield and Alters Leaf Macro and Micronutrient Concentration in Tomato Cultivation in Greenhouses: A Case Study in Turkey

Water is crucial for agriculture and needs to be used effectively due to climate change and drought in the Mediterranean region. For this reason, to adapt to water deficit scenarios, deficit irrigation applications are increasing in importance. The aim of this research was to determine the effect of varying levels of irrigation on growth parameters and the concentration of nutrients in tomato plants grown under greenhouse conditions. The irrigation schedule used in this study was designed to include 100% (control), 90%, 80% and 70% of evaporation from the class-A pan. Water deficit was found to cause a stress effect in tomato plants, which was reflected in changes in the physiological functions of the plants, such as flowering and early ripening. In addition, the SPAD values were examined, for which the lowest value of the green color intensity of the leaves was 47.3 (I3) and the highest was 48.7 (I4); however, the results of statistical analyses show that the difference was not significant. We also observed that the height values of tomato plants were highest in the seedling and fruit ripening periods under full irrigation. Furthermore, analysis of the nutrient content of tomato leaves showed that the obtained values were below the threshold values recommended for manganese. Based on these and similar studies, we believe that the application of water stress is most effective during the phases in which the plants are least affected. We believe that determining the periods during which tomato or any other crop is affected by the least water stress will be more accurate for both plant development and economic production.

Introduction

Owing to its taste and health benefits, tomato (Solanum lycopersicum L.) is one of the most important vegetables for human nutrition. This species is considered one of the most widely produced agricultural products in the world in terms of weight [1–3]. In Turkey, tomatoes are grown both in greenhouses and in open fields, especially in the Mediterranean region [2,4]. The world leader in tomato production is China, with an annual production of 62.8 million tons. India ranks second, with a production of 19 million tons; Turkey is third, with a production of 13 million tons; and the United States is fourth, with a production of 10.9 million tons. Turkey has several favorable factors for growing tomatoes, especially under greenhouse conditions, including (1) diverse climatic conditions with the possibility of cultivation at any time of the year; (2) geothermal resources, which make cultivation in winter more economical; and (3) proximity to European and Asian markets [5]. Tomatoes, which have a high rate of production in Turkey and worldwide, are important in terms of nutrition [6]. Tomato composition may vary depending on factors such as tomato type (beefsteak, large type, cherry, etc.), cultivar, cultivation method, growing environment (field vs. greenhouse), region and time of harvest [7,8]. This species is also an important source of antioxidants in human nutrition, such as lycopene, phenols and vitamin C [9,10]. When growing tomatoes, it is very important to ensure optimum moisture levels in the soil so that the plants are not stressed by water deficit, which causes yield losses.
Especially on hot summer days, the rate of evapotranspiration is very high, and plants can be damaged before stress becomes apparent. In this case, irreversible losses in the amount and quality of the yield may occur. Under irrigation management, physiological measurements determining the water stress of plants are better indicators than measurements of soil moisture [11,12]. In arid and semi-arid regions where water resources are limited, the need for adequate irrigation for efficient water use has led scientists to develop new irrigation programming technologies. Previous studies confirm that plant-based methods have significant irrigation programming potential and have shown that measurements such as leaf water potential, leaf temperature, sap flow and stem diameter can be used for precise irrigation scheduling [12,13].

The use of deficit irrigation in tomato cultivation seems to be beneficial not only from the point of view of water use efficiency but also in terms of improving the quality of tomato fruit [14]. Furthermore, water deficit reduces the accumulation of ions in the leaves [15]. In a water deficit study reported by Rodrigues et al., the content of some micro- and macronutrients decreased [16]. However, they found that fruits had increased color intensity, reduced water content and increased concentrations of sucrose, glucose and fructose when grown under conditions of water deficit. Nuruddin et al. [17] applied water deficit during different growth stages of tomato plants under greenhouse conditions, both in summer and winter. In the summer period, the yield of fruit harvested from plants not exposed to stress related to water deficiency was only 1.78 kg plant⁻¹, whereas the yield of fruit harvested from plants cultivated under deficit irrigation conditions during flowering and fruit setting was 1.45 kg plant⁻¹. During the winter period, the yield of plants not subjected to stress was 1.34 kg plant⁻¹, and the yield of plants cultivated under the conditions of deficit irrigation during flowering and fruit setting was 1.40 kg plant⁻¹.

The development of deficit irrigation practices as a tomato production management tool can be very effective in conditions of water scarcity and can reduce wastewater pollution. This is important because tomato is a popular vegetable grown all over the world. Water deficits and insufficient water supply are the main factors limiting crop production worldwide. Water-saving practices can reduce production costs, conserve water and reduce leaching of nutrients and pesticides into groundwater [17]. Kırda et al. [18] concluded that fruit yield and quality should be investigated before adopting deficit irrigation practices as a management tool. We believe that research on deficit irrigation of tomatoes will continue intensively, owing to the increasing demand for water. It is necessary not only to consider the relationship between deficit irrigation and crop production efficiency but also to study the nutrient intake of the plant. Micro- and macroelements are necessary for the proper metabolism of plants and cannot be replaced by other elements [19,20]. The tomato was chosen as the subject of this research owing to the very common production of this vegetable, both in Turkey and worldwide.
The main objectives of this study were (a) the observation of changes in tomato development parameters in the face of water deficit during cultivation and (b) the correlation of yield with deficit irrigation and determination of the nutritional status of the plants.

Materials and Methods

The greenhouse was 18 m long and 6 m wide, and the planting area was 108 m². Plants were cultivated in the greenhouse in sandy loam soil. Other properties of the greenhouse soil are: soil pH, 7.8; electrical conductivity, 0.28 dS m⁻¹; organic matter content, 0.8%; lime content, 5%; and cation exchange capacity, 66 me 100 g⁻¹.

Crop Management

Tomato seedlings (Solanum lycopersicum cv. Demiröz) were transplanted on 24 September 2021. Before planting, the soil in the greenhouse was processed and adapted for the planting of seedlings. In each plot, tomato seedlings were planted in rows with a spacing of 0.50 m, and the spacing of plants in the rows was 0.50 m. During the growing season, the plants were fertigated with the drip irrigation method. Based on our previous experience, we applied 200, 100 and 150 kg ha⁻¹ of N, P and K, respectively. Ammonium nitrate, monoammonium phosphate and potassium sulphate were used as fertilizer sources. In addition, taking into account the need for microelements (Fe, Cl, Mn, Cu, Ni, etc.) in tomato fertilization, foliar fertilizers were used. Solid fertilizers were applied to the plots before planting.

Experimental Design

The conditions inside the greenhouse were adapted to the climatic requirements of the tomato. The process of growing tomatoes in the greenhouse covered the dates from planting of the seedlings (24 September 2021) to the end of the harvest (17 January 2022). The experiment consisted of 12 plots and three replications. The plots were arranged in a completely randomized design. In each plot, 25 grafted tomato seedlings were planted in 5 rows. The area of each plot was 6.25 m² (2.5 m length × 2.5 m width). Drip irrigation lines were located on one side of each row of plants. A class-A pan was used to calculate the water consumption of the plants. During the irrigation treatments, 100%, 90%, 80% and 70% of the total amount of evaporation measured from the class-A pan was applied. Accordingly, the irrigation treatments were performed as follows: (1) I100 irrigation, using the total amount of evaporated water from the class-A pan (100%) (full irrigation, I1, control); (2) I90 irrigation, for which 90% of the total evaporated water from the class-A pan was used (I2, with a 10% water deficit); (3) I80 irrigation, for which 80% of the total evaporated water from the class-A pan was used (I3, with a 20% water deficit); and (4) I70 irrigation, for which 70% of the total evaporated water from the class-A pan was used (I4, with a 30% water deficit).

Crop Water Consumption

The water consumption of the crop was calculated using a class-A pan placed inside the greenhouse. Using the amount of irrigation water, open water surface evaporation and plant-pan coefficients, water consumption was determined in accordance with the method described by Gençoglan et al. [24]. The amount of water to be used in the plant water balance equation was calculated using Equations (1) and (2):

I = Epan × Kcp (1)

V = I × A × P/100 (2)

where: I = amount of irrigation water (mm); Epan = amount of class-A pan evaporation (mm); Kcp = plant-pan coefficient (I70 = 0.70, I80 = 0.80, I90 = 0.90 and I100 = 1.00); A = plot area (m²); P = percentage of wetted area (%); and V = volume of irrigation water (L) [24].
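As a worked illustration of Equations (1) and (2), the sketch below converts a cumulative pan-evaporation reading into an irrigation depth and a per-plot water volume for each treatment. The Kcp values and the 6.25 m² plot area come from the text; the wetted-area percentage and the example Epan reading are assumptions for illustration only.

```python
# Minimal sketch of the class-A pan irrigation calculation (Eqs. 1-2).
# Kcp values and plot geometry come from the text; the wetted-area
# percentage is an illustrative assumption, not a value from the study.

PLOT_AREA_M2 = 6.25        # 2.5 m x 2.5 m plot
WETTED_AREA_PCT = 100.0    # assumed; drip systems often wet less than 100%

KCP = {"I1": 1.00, "I2": 0.90, "I3": 0.80, "I4": 0.70}

def irrigation_depth_mm(epan_mm: float, kcp: float) -> float:
    """Eq. (1): irrigation depth I = Epan * Kcp."""
    return epan_mm * kcp

def irrigation_volume_l(i_mm: float, area_m2: float = PLOT_AREA_M2,
                        wetted_pct: float = WETTED_AREA_PCT) -> float:
    """Eq. (2): V (L) = I * A * P/100, since 1 mm over 1 m2 equals 1 L."""
    return i_mm * area_m2 * wetted_pct / 100.0

if __name__ == "__main__":
    epan = 5.0  # example: 5 mm cumulative pan evaporation between irrigations
    for trt, kcp in KCP.items():
        i = irrigation_depth_mm(epan, kcp)
        print(f"{trt}: I = {i:.2f} mm -> V = {irrigation_volume_l(i):.1f} L per plot")
```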
The amount of irrigation water calculated in the above equations is used in Equation (3) to calculate the water consumption of plants (evapotranspiration) [25]. Water meters placed on each plot were used to supply the appropriate amount of water for irrigation. Fertilizers, in amounts determined by the content of nutrients in the soil and the needs of the plants, were applied equally to all treatments.

ET = I + R + Cr + Dp + Rf + Δs (3)

where: ET = water consumption (mm); I = amount of irrigation water (mm); Cr = capillary rise (mm); Dp = deep percolation losses (mm); R = runoff (mm); Rf = runoff losses (mm); and Δs = change in soil moisture content between planting and harvest (mm).

The capillary rise term in Equation (3) was ignored because there was no drainage problem in the experimental area. The values of deep percolation and surface runoff were assumed to be zero because the drip irrigation method was used and irrigation was applied by bringing the missing moisture up to field capacity. After calculating the reference water consumption of the tomatoes, the potential values of water consumption by plants during the growing season were calculated using the Kc plant coefficient values for tomato (Table 2). We found that as the amount of water used for irrigation increased, the values of seasonal water consumption by plants also increased. The I1 irrigation treatment corresponds to the level of full irrigation, at which the standard amount of water needed to irrigate tomatoes was used, with the highest seasonal water consumption by plants amounting to 399 mm. Seasonal water consumption by plants under deficit irrigation treatments I2, I3 and I4 was 359, 319 and 279 mm, respectively. In a study on tomatoes conducted in a plastic greenhouse in Turkey, Kirda et al. [26] determined the value of seasonal water consumption by plants at the level of 375 mm with full irrigation and 245–247 mm with deficit irrigation.

Soil Analysis

Soil organic matter (OM) was determined on the basis of studies by Walkley and Black [27]. In addition, the pH and EC of the soil (soil:water, 1:2.5 w/w) were determined using an electrical conductivity (EC) meter and a pH meter [28]. The calcimetric method was used to determine the content of CaCO3 in the soil [29]. Soil cation exchange capacity (CEC) was determined by the BaCl2 method as described by Hendershot et al. [30]. pH 6.20 (KCl), organic matter 0.8%, total N 2250.00 kg ha⁻¹, total P 5542.5 kg ha⁻¹ and total K 4999 kg ha⁻¹ were determined based on the soil analysis carried out before planting.

Plant Analysis Procedures

Plant height (cm), determined using a measuring tape, and the number of leaves per plant (pcs.) were used as vegetative characteristics. The height of the tomatoes was measured during four stages of plant development: seedling (6 October 2021), flowering (26 October 2021), fruit growth (26 November 2021) and ripening (22 November 2021). Total yield (kg ha⁻¹), number of fruits (pcs. per 18.75 m²) and fruit weight (g) were assessed as generative features of the tomatoes. To determine the nutritional status of the plants, 8–10 leaves (the 5th or 6th leaf from the top) were collected from different plants on all sides, representing each replicate [31]. After washing the samples with distilled water, they were placed in an air-flow oven at 70 ± 5 °C until a stable weight was obtained.
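Because the capillary rise, deep percolation and runoff terms were taken as zero in this study, Equation (3) effectively reduces to ET = I + Δs. A minimal sketch of that simplified balance follows; treating Δs as zero and echoing the reported seasonal totals (399, 359, 319 and 279 mm) are illustrative simplifications, not part of the study's procedure.

```python
# Minimal sketch of the seasonal water balance in Equation (3). With the
# capillary rise, percolation and runoff terms zeroed (as in this study),
# seasonal evapotranspiration reduces to ET = I + delta_s.

def seasonal_et_mm(irrigation_mm: float, delta_s_mm: float = 0.0,
                   runoff_mm: float = 0.0, capillary_mm: float = 0.0,
                   deep_perc_mm: float = 0.0, runoff_loss_mm: float = 0.0) -> float:
    """Eq. (3): ET = I + R + Cr + Dp + Rf + delta_s, with unused terms zeroed."""
    return (irrigation_mm + runoff_mm + capillary_mm +
            deep_perc_mm + runoff_loss_mm + delta_s_mm)

# Seasonal irrigation totals per treatment (mm); with delta_s = 0 these
# reproduce the reported seasonal water consumption values.
for trt, i_mm in {"I1": 399.0, "I2": 359.0, "I3": 319.0, "I4": 279.0}.items():
    print(f"{trt}: ET = {seasonal_et_mm(i_mm):.0f} mm")
```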
Dried samples were ground and wet-digested in a microwave oven (CEM Mars X-press, CEM Corporation, Matthews, NC, USA) at 180 °C, then filtered and diluted to 50 mL with deionized water to measure N, P, K, Ca, Mg, Cu, Mn, Fe and Zn. The concentrations of K, Ca, Mg, Cu, Mn, Fe and Zn were determined using an atomic absorption spectrophotometer. Phosphorus analysis was performed using a spectrophotometer [32]. For nitrogen analysis, the samples were wet-digested with concentrated H2SO4 in 250 mL macro-Kjeldahl tubes at 350–400 °C. After digestion, the samples were distilled with NaOH (40%), and the NH4-N was fixed in H3BO3 (2%) and then titrated with 0.1 N H2SO4 [33].

SPAD (Soil Plant Analysis Development)

A Minolta SPAD 502 was used to determine the intensity of the green color of the leaves. As part of the measurement, 3 readings were taken from the leaves collected for mineral analysis, and the values were averaged. SPAD measurements were taken 2 months after the start of the experiment.

Leaf Area Index (LAI)

Leaf area index (LAI) values were also calculated in the study. A portable leaf area meter (ADC BioScientific, Model AM300, Hoddesdon, UK) was used to measure leaf area (LA). The leaf area index was calculated using Equation (4):

LAI = LA/GA (4)

where: LA = leaf area per plant (m²) and GA = ground area occupied by the plant (m²).

Statistical Evaluation

ANOVA (analysis of variance) was used to determine differences between the mean values obtained in this study. ANOVA is used to determine whether there is a difference between 3 or more groups based on a particular variable. The data were statistically evaluated using the MSTAT package program. In one-way ANOVA, the means are compared assuming that the k populations are normally distributed with means µ1, µ2, …, µk and common variance σ². Because one-way analysis of variance assumes that the group data are normally distributed, the conformity of the data to a normal distribution was first tested with normality tests. The Shapiro-Wilk normality test was preferred in this study. After testing whether there was a significant difference between the groups using one-way ANOVA, the differences between the groups were examined post hoc using Tukey's test.
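A sketch of the Shapiro-Wilk → one-way ANOVA → Tukey pipeline described above is given below, using SciPy and statsmodels rather than the MSTAT package; the plant-height values are made-up placeholders, not data from Table 3.

```python
# Minimal sketch of the statistical pipeline: Shapiro-Wilk normality test,
# one-way ANOVA, then Tukey's post hoc test if ANOVA finds a difference.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

heights = {
    "I1": [52.1, 50.8, 53.0],   # placeholder replicate means (cm)
    "I2": [50.2, 49.5, 51.1],
    "I3": [47.9, 46.8, 48.4],
    "I4": [46.5, 47.2, 45.9],
}

# 1) Shapiro-Wilk normality test per treatment group
for trt, values in heights.items():
    w, p = stats.shapiro(values)
    print(f"{trt}: Shapiro-Wilk p = {p:.3f}")

# 2) One-way ANOVA across the four irrigation treatments
f_stat, p_anova = stats.f_oneway(*heights.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# 3) Tukey's HSD post hoc test, run only if ANOVA finds a difference
if p_anova <= 0.05:
    endog = np.concatenate(list(heights.values()))
    groups = np.repeat(list(heights.keys()), [len(v) for v in heights.values()])
    print(pairwise_tukeyhsd(endog, groups, alpha=0.05))
```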
Results and Discussion

In this study, we investigated tomatoes grown in a greenhouse under water deficiency conditions. During the experiment, the effect of four levels of irrigation on growth parameters and nutrient concentrations was assessed. The first irrigation treatment (I1) was full irrigation, the second irrigation treatment (I2) comprised a deficit of irrigation water of 10% in relation to I1, the third irrigation treatment (I3) comprised a deficit of 20% in relation to I1 and the fourth irrigation treatment (I4) comprised a deficit of 30% in relation to I1.

Table 3 presents changes in the height of tomatoes at different stages of development depending on the level of water deficit. The tallest tomato seedlings were obtained under the full irrigation (I1) treatment. The data show a normal distribution according to the Shapiro-Wilk test of normality (p > 0.05). Whether there was a significant difference between treatments was examined using ANOVA; the results are shown in Table S1. According to the ANOVA results, there was a statistically significant difference between the groups (F = 19.07; p ≤ 0.05). Tukey's test was conducted to group the differences, showing that differences occurred between full irrigation (I1) and deficit irrigation (I3 and I4) (p ≤ 0.05). In the case of deficit irrigation, the height of tomatoes was lower than in the case of full irrigation. Generally, researchers do not recommend a water deficit during the seedling stage; water restrictions applied during this stage have a negative impact on plant growth. The height of tomatoes measured both during the flowering stage and in the fruit growth and ripening stages was similar for all irrigation treatments. Differences in tomato height between the tested irrigation treatments measured both during flowering and fruit growth were not statistically significant (p > 0.05). Taking into account the average height of tomatoes during the fruit ripening stage, we found that plants cultivated under full irrigation conditions (I1) were significantly (p ≤ 0.05) taller than plants cultivated under deficit irrigation conditions (I3). The tomato is sensitive to water deficiency. In this regard, the use of deficit irrigation in tomato cultivation has a negative effect on plant height. According to Nangare et al. [35], tomato plant height was higher under full irrigation than under deficit irrigation. In addition, the researchers reported that not all stages of tomato development are equally susceptible to soil moisture deficiency, and deficit irrigation may be more beneficial during non-critical stages. We found that tomato plants are most sensitive to water deficiency during flowering and fruit setting [17,36,37].

We also assessed the effect of deficit irrigation treatments on the intensity of tomato leaf coloration (SPAD). These measurements were conducted about 2 months after the start of the experiment, and the results are presented in Figure 1. We observed that the lowest value of green leaf color intensity was measured for I3 plants and amounted to 47.3, whereas the highest value was 48.7 for I4 plants. We found that the SPAD values assessed in plants cultivated under the I2 and I4 deficit irrigation conditions were higher than in the case of full irrigation (I1). However, these differences were not significant according to the ANOVA results (p > 0.05) after performing the Shapiro-Wilk normality test (p > 0.05). Thus, a post hoc test was not performed, as there were no differences between the groups according to the ANOVA result (Table S2).
Abdelhay et al. [38] found that water restriction affects SPAD values. Their measured SPAD value in fully irrigated tomatoes was significantly lower than that in plants treated with 80% of full irrigation [38], which is similar to our research results. The main reason for this situation is probably the early start of ripening and fruit setting resulting from the conditions of water deficit.

The effect of different irrigation levels on the concentrations of N, P, K, Ca and Mg in tomato leaves is shown in Figure 2. The Shapiro-Wilk normality test (p > 0.05) and ANOVA were used to evaluate the differences between irrigation treatments (I1, I2, I3 and I4). We observed that the use of varying levels of plant irrigation had a significant (F: 0.865; p ≤ 0.05) effect on leaf nitrogen concentration. The highest concentration of N (3.16%) was observed in the leaves of plants growing under the I2 irrigation treatment, and the lowest (2.58%) was observed in plants growing under the I4 irrigation treatment. We found that the concentration of N in the leaves of plants grown under all levels of irrigation was below the threshold level for tomatoes indicated by Jones et al. [32] and Bergman [39].

All nutrients perform specific functions in plant physiology. In the case of nutrient deficiency, the plants show corresponding symptoms. For example, tomato shows poor vegetative growth under nitrogen-deficient conditions. Regression of flowering and fruiting occurs in association with phosphorus deficiency.
In the case of potassium deficiency, fruits show quality problems, such as changes in color and taste. Magnesium deficiency in plants slows down photosynthesis and reduces yield [39].

The concentration of phosphorus for all irrigation treatments was below the recommended values (0.5–1.2%) [32,39]. We found that the effect of irrigation conditions on P concentration in leaves was statistically significant (p ≤ 0.05). The concentration of P decreased with an increase in the amount of water used for irrigation (Figure 2); the lowest P concentration (0.19%) was measured in plants grown under the I1 condition, whereas the highest (0.38%) was associated with condition I4. This situation can be explained by a concentration effect, in which P accumulates as plant growth is delayed and leaf area is reduced under water deficit [40,41]. With a deficiency of P, the yield of tomatoes decreases, and the fruit is smaller than normal. P deficiency has also been reported to deteriorate the quality of tomatoes, and frosts may damage the plant [42].

Depending on the irrigation treatment, leaf potassium levels ranged from 3.24% to 3.75% (Figure 2), although these differences were not significant (F: 1.105; p > 0.05). The potassium values for all irrigation treatments were below the recommended value [32,39]. Potassium has many functions in plant metabolism. It plays an important role in the growth and yield of plants. Moreover, it increases the resistance of plants to diseases, cold and pests [42,43].

The highest calcium concentration (4.80%) was recorded in plants cultivated under I3 deficit irrigation, and the lowest value (3.85%) was recorded in I4 plants (Figure 2). The recommended concentration of Ca ranges from 1.5% to 2.40% [32,39]. We found that the Ca concentrations for all irrigation treatments were above the recommended value. However, differences between irrigation levels were statistically insignificant (p > 0.05).

The required level of magnesium in plants ranges from 0.32% to 0.80% [32,39]. The Mg level assessed in this experiment was below the recommended level in all irrigation treatments, except I1 (Figure 2). The highest concentration of Mg occurred under the full irrigation condition (I1) and amounted to 0.33%, and the lowest Mg concentration occurred under the I2 condition (0.21%). Differences in Mg concentration between the individual irrigation levels were not statistically significant (p > 0.05). We found that water deficiency had a negative effect on Mg levels. Mg plays an important role in plant physiology. It is also the central element of chlorophyll molecules [42,43].

Regarding the macronutrient values obtained in the present experiment, the concentration of nutrients in tomato leaves appears to be below the threshold values defined by Jones et al. [32] and Bergman [39]. Macronutrients are elements that plants need in larger amounts than microelements. In the absence of these elements or when their availability is delayed, plant growth slows down, their resistance to diseases decreases, the quality of fruits may be impaired and their storage period is shortened [44,45]. The results of the ANOVA test investigating differences between the irrigation treatments for N, P, K, Ca and Mg are presented in Table S3. Tukey's test results show that there were significant differences between irrigation treatments I1 and I2, I2 and I4, and I3 and I4 (p ≤ 0.05). Differences are shown in Figure 2, indicated by letters such as A, B, AB and BC according to the post hoc results. The same letters indicate similarities, whereas different letters indicate differences. The results of the post hoc test for the concentration of microelements are shown in Figure 3.

Another parameter studied in the experiment was the change in the level of micronutrients in tomatoes. The concentration of micronutrients such as Cu, Mn, Fe and Zn was tested (Figure 3). The highest copper value (9.8 ppm) was recorded for I1 plants, and the lowest value (6.5 ppm) was recorded for I4 plants. The recommended level of this element in tomatoes ranges from 5 ppm to 6 ppm [32]. In this case, it was determined that all irrigation treatments achieved the recommended Cu levels. The Shapiro-Wilk test was applied to the data (p > 0.05), and according to the ANOVA results, the Cu concentration obtained under full irrigation (I1) was statistically significantly (F: 18.01; p ≤ 0.05) higher compared to treatments I2 and I4. Therefore, water deficit negatively affects the uptake of Cu by tomatoes.
Another parameter studied in the experiment was the change in the level of micronutrients in tomatoes. The concentrations of micronutrients such as Cu, Mn, Fe and Zn were tested (Figure 3). The highest copper value (9.8 ppm) was recorded for I1 plants, and the lowest value (6.5 ppm) was recorded for I4 plants. The recommended level of this element in tomatoes ranges from 5 ppm to 6 ppm [32]. In this case, it was determined that all irrigation treatments reached at least the recommended Cu level. The Shapiro-Wilk test was applied to the data (p > 0.05), and according to the ANOVA results, the Cu concentration obtained under full irrigation (I1) was significantly (F = 18.01, p ≤ 0.05) higher than under treatments I2 and I4. Therefore, water deficit negatively affects the uptake of Cu by tomatoes. Considering the effect of irrigation treatments on the concentration of manganese, we found that the value of this element determined in plants at all irrigation levels was below the recommended level (50-250 ppm) [32,39]. We found that the Mn value closest to the recommended value was obtained for I1 (44 ppm). As shown in Figure 3, increasing water restriction increased the Mn deficiency. We found that the Mn level obtained for I1 was significantly (p ≤ 0.05) higher than the values obtained for I3 and I4. In this study, we found that water deficit adversely affects the concentration of Mn. Manganese deficiency causes chlorosis in tomatoes. Additionally, Mn is essential in the process of photosynthesis [43,46]. The recommended concentration of another micronutrient, iron, is between 60 ppm and 300 ppm. We determined that the Fe concentrations in tomato leaves measured for each irrigation treatment were within the recommended values [32,39]. The lowest Fe value was obtained under condition I1 (103 ppm), and the highest value (211.43 ppm) was obtained under condition I4 (Figure 3). The Shapiro-Wilk test was applied to the data to check normality (p > 0.05) before ANOVA was used to assess differences between irrigation treatments. According to the ANOVA results, the Fe concentration measured in irrigation treatments I3 and I4 was significantly higher (F = 14.95, p ≤ 0.05) than that measured in irrigation treatments I1 and I2. Therefore, we believe that increasing water restriction led to an increase in Fe concentration in tomato leaves due to limited vegetative growth. The analysis of zinc concentrations showed that under all irrigation treatments, the concentration of this microelement was higher than the recommended values (Figure 3). The highest values were obtained for I3 (144 ppm) and I1 (136 ppm). We found that the concentration of Zn in the leaves of tomato grown under irrigation treatments I1 and I3 was significantly (F = 21.01, p ≤ 0.05) higher than that in plants grown under irrigation treatments I2 and I4.
The lowest concentration of Zn occurred in the case of I4. The differences between irrigation treatments according to the ANOVA test are shown in Table S4. Although no visual symptoms of nutrient deficiency were observed, most of the macro- and microelements measured in the leaves were below the threshold levels indicated by Jones et al. [32] and Bergman [39], possibly as a result of genotypic diversity. As is well known, one of the most important factors with respect to the mineral nutrition of plants is genetic differences. Even plants grown under the same conditions can accumulate varied amounts of nutrients. These differences can be observed among different plant species, as well as among different genotypes of the same species. One plant may show deficiency symptoms under a given condition, whereas another does not [42,47,48]. The average leaf area per plant (LA) and the average leaf area index (LAI) assessed in the experiment increased with the level of irrigation (Table 4). A linear relationship with a high regression coefficient was obtained between LAI and the amount of water used for irrigation. A reduction in irrigation water by one unit reduced the LAI value by 0.0009 units. The highest value of LAI (0.014 m² m⁻²) was obtained for the I1 irrigation treatment (full irrigation), and the lowest value (0.011 m² m⁻²) was obtained for the I4 irrigation level (30% of full irrigation). Relative to the I4 irrigation treatment, the LAI values for I1, I2 and I3 amounted to 127%, 109% and 109%, respectively. In the case of LA, similar linear relationships were observed, parallel to the leaf area index. Korkmaz [49] and Avuk [50] also reported that water deficit negatively influenced the leaf area index. The values we obtained in our study are consistent with the above reports. As in the case of LAI, the highest average leaf area per plant was obtained for irrigation level I1 (17,202 mm²), and the lowest value (14,119 mm²) was obtained for irrigation level I4. However, there were no statistically significant (F = 0.657, p > 0.05) differences between irrigation treatments for the LAI and LA parameters. There was no clear trend in the average number of leaves per plant across the irrigation treatments. However, in the case of cultivation under water deficit conditions (I2, I3 and I4), the number of leaves per plant was higher (11.13, 11.2 and 11.0 leaves, respectively) than in plants subjected to full irrigation (10 leaves). Nevertheless, there was no statistically significant (p > 0.05) difference between LA, LAI and the number of leaves for the different irrigation treatments (Table S5). According to the obtained results, it can be concluded that the number of leaves per plant may increase under water deficit conditions, although their development is limited. The study results show that the leaf area of tomatoes grown under water stress decreased. In plants, leaves are the most important organs, through which light energy is captured and used to produce the metabolites necessary for plant growth. The level of vegetative activity of plants depends on the amount of light energy they capture with their leaves. Therefore, a reduction in the plant assimilation area as a result of treatments with water deficit adversely affects development and yield [51][52][53]. In addition, LAI has been reported to control multiple processes, such as photosynthesis, canopy interception of solar radiation, evapotranspiration and pollutant storage [54][55][56]. Therefore, a decrease in LAI has a negative impact on photosynthesis, plant growth and yield.
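To illustrate the kind of linear LAI-irrigation relationship reported here, the sketch below fits LAI against the irrigation level with a simple least-squares regression. Only the I1 and I4 LAI endpoints (0.014 and 0.011 m² m⁻²) and the 30% figure for I4 come from the text; the irrigation percentages assumed for I2 and I3, and the unit in which the slope is expressed, are placeholders.

```python
# Least-squares fit of LAI against irrigation level, echoing the linear
# LAI-irrigation relationship reported above. I2/I3 percentages are assumed.
from scipy import stats

irrigation_pct = [100, 80, 55, 30]        # I1..I4; I2 and I3 values assumed
lai = [0.014, 0.012, 0.012, 0.011]        # m^2 m^-2; I2/I3 interpolated

fit = stats.linregress(irrigation_pct, lai)
print(f"slope = {fit.slope:.6f} LAI units per % irrigation, "
      f"R^2 = {fit.rvalue**2:.3f}")
# The paper reports a slope of 0.0009 LAI units per "unit" of water; the
# numerical slope naturally depends on how the irrigation amount is expressed.
```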
Harvesting of ripe tomatoes under all irrigation treatments started on 15 December 2021 and was completed on 17 January 2022. During the harvest of tomato fruits, yield characteristics such as fruit weight, number of fruits and total yield were assessed (Table 5). However, no statistically significant differences occurred (F = 0.803, p > 0.05) with respect to the assessed traits of plants grown under the different irrigation conditions according to ANOVA. Tomato fruits harvested from plants cultivated under I4 deficit irrigation had the highest weight, and fruits harvested from the I3 plot had the lowest weight. The largest amount of fruit and the highest yield were also harvested from plants cultivated under the condition with the greatest water deficit (I4). I1 plants grown under full irrigation conditions were characterized by the second highest yield, with the fewest tomatoes. Water restriction was found to stress tomatoes, leading to earlier flowering and fruit ripening (Table 5). Stress conditions appeared to increase the total number of flowers per plant, although no data were recorded for this parameter. As the number of flowers increased, so did the number of fruits, but few reached commercial quality. Although the yield under deficit irrigation was higher than that under control irrigation, the difference was not statistically significant (Table S6). Some researchers have reported that water stress practices do not significantly reduce fruit yield and quality while saving irrigation water. The tomato is not equally sensitive to a lack of water in the soil during each stage of cultivation. In particular, the most critical stage of development affecting yield is flowering and fruit setting [17,35,[57][58][59]. Therefore, it is considered appropriate to focus research on this stage of tomato development. Nuruddin et al. [17] conducted a study during the flowering and fruiting phase of tomatoes in winter and found that the sizes and yield of fruit were higher under deficit irrigation than under control irrigation, although the difference was statistically insignificant. Moreover, it is believed that the number of fruits on plants cultivated under water stress conditions increases with early flowering. In the present study, we determined growth parameters and nutrient concentrations in tomatoes grown at varying levels of irrigation. The values of water deficit used in the present research are important because they differ from those implemented in other experiments. Our research on the concentration of macro- and micronutrients in tomato leaves is also innovative in relation to previously published studies. Given that the global human population will continue to increase while water deficits intensify, similar research is becoming increasingly important. Previous studies on the impact of water deficit on tomatoes focused on the assessment of growth and yield parameters. It has been widely reported that the use of water deficit has a negative effect on the yield and development of tomatoes. In our research, we focused on other important issues, such as growth parameters and nutrients. By taking fruit yield and quality, as well as nutrient deficiencies, into account in our research on the impact of water deficit on tomatoes, the reported results significantly expand knowledge in this field.
Conclusions We found that the use of water deficiency in tomato cultivation adversely affects some growth parameters, such as plant height, yield, leaf area index, SPAD and the uptake of macro- and micronutrients. We found that water deficit had a negative effect on plant height. With respect to the values of the leaf area index, we found that the leaf surfaces were smaller and less developed in plants grown under conditions of water deficit compared to fully irrigated plants. With respect to the SPAD values, the lowest value of green leaf color intensity was 47.3 (I3), and the highest was 48.7 (I4); however, the difference was not statistically significant. The tallest plants were recorded at the seedling and fruit-ripening stages under full irrigation (I1). In the case of I3, the lowest values were obtained during the other development stages, apart from the flowering period. Analysis of macronutrients in tomato leaves showed that the obtained values were below the recommended threshold values. The concentration of micronutrients in tomato leaves was also determined; under all irrigation treatments, the manganese values were lower than the recommended values. We believe this and similar studies will contribute to the success of water stress applications during the phases in which plants are least affected. Determining the periods during which tomato plants, or any other crop, are least affected by water stress can contribute to both plant development and economic production.
2022-11-27T17:30:51.078Z
2022-11-24T00:00:00.000
{ "year": 2022, "sha1": "0fbc0d9b62d6df1577743ecc43d62201e3614935", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4395/12/12/2950/pdf?version=1669363378", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d12cad1590f0e82bb5410a3629579c510956bb84", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }
252779878
pes2o/s2orc
v3-fos-license
Increasing equitable access to telehealth oncology care in the COVID‐19 National Emergency: Creation of a telehealth task force Abstract Introduction Telehealth (TH) utilization in cancer care prior to COVID‐19 was variable. Research highlights disparities in access determined by socioeconomic factors including education, income, race, and age. In response to COVID‐19 and these disparities, we assessed the impact of a personalized digital support structure, the Telehealth Task Force (TTF), to reduce disparities in TH. Methods We performed a retrospective review of cohorts between January 1, 2020 and August 30, 2020: Pre (TH use with basic telephone support), Intervention (TH access with TTF), and Post (TH access after TTF initiation and educational material dissemination). Data collected included successful TH access, health literacy (HL), and the Area Deprivation Index (ADI), a ranking of neighborhoods by socioeconomic disadvantage. The data were analyzed in a univariate ordinary least squares model and an adjacent categories ratio model using the statistical software R to understand the relationship between TTF, HL, ADI, and TH access. Results We included 555 patients from January 1, 2020 to August 30, 2020 (90 preintervention, 194 intervention, and 271 postintervention), excluding patients without ADI/HL. TTF support successfully engaged older, racially, and socioeconomically diverse patients in TH; ADI was significantly higher in the postintervention group vs. preintervention (mean difference = 7.66, 95% CI 1.00–14.32, p = 0.024), and more patients had low HL during intervention compared with preintervention (adjacent categories ratio = 0.62, 95% CI 0.41–0.93, p = 0.021). Discussion COVID‐19 created an immediate need for TH. Implementation of the TTF helped close the digital divide, increasing TH access for vulnerable patients. Attention to digital readiness can mitigate disparities in access to care. Future research should explore the implementation of widespread routine digital support initiatives. | INTRODUCTION There is a general increase in drive among oncology practices to incorporate modern communication technology into the cancer care continuum. A hallmark of the COVID-19 pandemic has been the rapid uptake of telehealth (TH), defined by the Centers for Medicare and Medicaid Services as the ability to talk with a clinician live via phone or video chat, send and receive e-messages with a clinician, or use remote monitoring so a clinician can assess a patient at home. 1 Prior to the COVID-19 pandemic, it was uncommon for United States healthcare systems or individual providers to utilize TH for routine clinical care, though rates were steadily increasing. 2 A 2020 study on the rise of virtual care reported that TH use was generally low: even health systems with relatively high adoption of TH in 2019 and early 2020, before the "Stay-at-Home" orders implemented in response to COVID-19, had around 100 TH visits per day, paling in comparison to pandemic rates of 600 to 1000 visits per day. 3 One institution found that their TH use increased from less than 1% of total visits to 70% of total visits at the beginning of the COVID-19 pandemic. 4 In part, this meteoric increase resulted from local and national limitations on state-to-state travel and in-person encounters, and from the relaxation of HIPAA and billing regulations. 5
Digital technology has shown great promise in improving healthcare, but there are system-wide and individual-level disparities that may deter patients from receiving this type of care. 6 The surge in TH use as a result of the COVID-19 pandemic illuminated vulnerabilities in access to telehealth that mirror many of the socioeconomic factors that affect health outcomes in general: those with older age, low socioeconomic status, racial and ethnic minority status, lack of broadband access, low health literacy (defined by the National Institutes of Health as the degree to which individuals have the ability to find, understand, and use information and services to inform health-related decisions and actions for themselves and others), and less social support struggled to adopt TH. 7,8 Prior to COVID-19, patients with cancer who were more engaged in TH and digital health interventions were largely younger and more affluent. 9 Disparities in both access and digital health literacy, an extension of health literacy in the context of technology, disproportionally affect vulnerable populations and will not improve without identifying and addressing access and uptake challenges. 10,11 Prior research conducted by our team revealed that our patients with a high school diploma or less, or who identified as a racial or ethnic minority, were less likely to access the internet, indicating a need for targeted interventions to address these disparities. 12 Although the disparity in accessing TH is a known concern, there is less information about the digital access and preferences of patients with cancer as a population. This is an especially important population to support, as these patients must both avoid treatment delays and shield themselves from infectious risks. Patients who perceive their care to rely heavily on physical examination and diagnostic testing may have concerns about receiving the standard of care to which they are accustomed via TH. 13 Patients with disparate traits, including lower health literacy driven by older age, lower socioeconomic status, minority race or ethnicity, mistrust of technology, or a lower level of education, may have increased distress related to TH. [14][15][16] The purpose of this retrospective cohort study is to assess the impact of a personalized, real-time support structure, the Telehealth Task Force (TTF), which was implemented during the COVID-19 pandemic to improve the uptake of TH, defined here as a live audiovisual clinician appointment, among a diverse population of vulnerable cancer patients. With this work, we aim to inform the development of future initiatives focused on reducing patient-level barriers to digital access to cancer care. Specifically, we look to address the structure needed to increase successful access to TH in patients with low health literacy and/or socioeconomic disadvantage. | Study design We conducted a retrospective cohort study from January 1, 2020 to August 30, 2020 to assess the impact of a telehealth task force created in a cancer center (TTF) on patient use of TH. The TTF was created during the COVID-19 pandemic and was continued through December 10, 2020. This study compares outcomes across patients with cancer seeking care pre, during, and post-TTF intervention periods. | Setting Jefferson Health is an 18-hospital academic health system spanning two states and is the largest health system in the Philadelphia region.
Jefferson Health began using TH in 2017 to provide both urgent care and ongoing primary and specialty care access to a tertiary medical center. 17 The NCI-designated Sidney Kimmel Cancer Center (SKCC) is the cancer research and clinical care arm of Jefferson Health. The SKCC catchment area is highly diverse, with significant health and cancer disparities across the region. 18 The SKCC sees more than 9000 new cancer diagnoses annually. The Neu Center for Supportive Medicine and Cancer Survivorship (NCSM) is the SKCC's outpatient interdisciplinary palliative care team. The NCSM sees patients regardless of age, cancer type/stage, or treatment plan. Patients enter the NCSM by referral or through a routine SKCC-wide distress screening, which includes the HL questionnaire. | Sample Research team members utilized electronic health records to identify patients being seen at the SKCC for treatment of their cancer who had completed at least one TH visit from January 1, 2020 to August 30, 2020. Any patients with cancer seen in our cancer center during this time frame who completed at least one TH visit were eligible to be included in this analysis. Patients with successful TH and completed health literacy information between January 1, 2020 and March 15, 2020 were considered pre-TTF, as these encounters took place prior to the peak of the COVID-19 pandemic and the creation of the TTF. Patients seen between March 16, 2020 and June 7, 2020 were considered the TTF Operational period, and those seen via TH between June 8, 2020 and August 30, 2020 were considered post-TTF. The transition from the fully operational TTF period to post-TTF occurred because COVID-19 case rates were falling, increased understanding of infection prevention was implemented in the health system, and more in-person visits were possible. Patients with varied diagnoses, including stage 1-4 solid tumors and hematologic malignancies such as lymphoma, acute and chronic leukemia, and multiple myeloma, were referred to and assisted by the TTF. | Intervention Prior to COVID-19, utilization of TH was available to all patients who were able to or were interested in using the service, with the institution providing basic telephone-based support to patients in need of assistance, provided the patient took it upon themselves to call a help line. As a result of the COVID-19 pandemic and an urgent need to keep our patients on treatment and reduce risk, the TTF was created and deployed across the SKCC in March 2020. The TTF consisted of individual outreach with a personalized intervention based on patient-level needs (Figure 1). It comprised a range of staff including graduate students, project managers, and research coordinators who had no prior expertise in technology education. The interprofessional leadership team in charge of developing this resource consisted of a physician, a social worker, and a project manager. They consulted with the team that led TH efforts across the healthcare system prior to COVID-19, which consisted of a nurse, IT staff, and care coordinators with extensive technology experience.
The TTF intervention could include any of the following, depending on patient needs: provision of a smart device and service, access to broadband internet, audio or visual one-on-one education about how to download a smart-device app, access to the Jefferson Health patient health portal, creation of email accounts, and "practice" visits to aid in the successful use of TH for cancer care and communication. The interventions were developed iteratively, with an initial list of 'common problems' drawn from the health system TH team and added to when new problems or barriers were uncovered in the course of patient care. Patients could have had any number of these services, depending on the barrier to TH they identified. Patients were identified for TTF assistance either by self-referral, clinical referral, or missed appointments. When a patient missed a TH call, care coordinators reached out via phone, and if the barrier was related to TH, a referral to the TTF was made. The TTF served not only to engage patients but also to create educational materials to help staff and care teams educate patients with technology barriers. | Measures Health literacy was assessed via the BRIEF questionnaire. 19 The BRIEF health literacy questionnaire is a 4-item survey with a 5-point Likert scale, including items such as "How often do you have someone help you read hospital materials?", scored from always (1), often (2), sometimes (3), occasionally (4), to never (5), and "How confident are you filling out medical forms by yourself?", with answers of not at all (1), a little bit (2), somewhat (3), quite a bit (4), or extremely (5). Scores of 4-12 indicate low health literacy, 13-16 indicate medium health literacy, and 17-20 indicate high health literacy. This tool takes fewer than 5 min to complete. The final score was entered into the patient's electronic health record (EHR). Missing health literacy information was obtained by outreach from research assistants, who attempted to contact patients twice, providing return-call information if unable to reach patients. Research assistants compiled patients' addresses from the EHR to collect Area Deprivation Index (ADI) scores. 20 The ADI is a tool used to describe socioeconomic disadvantage and considers socioeconomic factors such as income, education, employment, and housing quality. Results range from 0 to 100, with higher scores indicating higher levels of socioeconomic disadvantage. Previous studies have used the ADI to understand community needs and to prioritize interventions and general healthcare delivery. 21,22 The primary outcome was the completion of at least one successful audio-visual telemedicine appointment. The BRIEF HL tool and ADI were used in this analysis because we collect the BRIEF HL on all patients as a standard of care, and the ADI can be calculated using each patient's home address. Given that this was a real-time sample of patients during the COVID-19 pandemic, we could not introduce additional measures meant solely for research purposes within the entire population of our cancer center. | Statistical analysis Descriptive and demographic statistics were summarized for each of the three study phases. The relationship between ADI and study phase (preintervention, intervention, and postintervention) was analyzed based on 549 patients by fitting a univariate ordinary least squares model using the statistical software R. 23 The relationship between HL and study phase was analyzed based on 469 patients by fitting a univariate adjacent-categories-ratio model using the R package VGAM. 24
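The analyses above were run in R; as a rough, non-authoritative illustration of the first model, the Python sketch below fits an ordinary least squares model of ADI on study phase with statsmodels on synthetic data. The group sizes come from the paper, but the ADI means and spread are assumed placeholders, and the adjacent-categories-ratio model for HL (fit with VGAM in R) is not reproduced here.

```python
# Illustrative Python analogue of the ADI-by-study-phase OLS analysis.
# The study itself used R; the data generated here are synthetic, and the
# group means are only loosely inspired by the reported direction of effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
phases = ["pre"] * 90 + ["intervention"] * 194 + ["post"] * 271
means = {"pre": 50.0, "intervention": 55.0, "post": 57.7}  # assumed ADI means
adi = [rng.normal(means[p], 20.0) for p in phases]

df = pd.DataFrame({
    "phase": pd.Categorical(phases,
                            categories=["pre", "intervention", "post"]),
    "adi": adi,
})

# OLS with 'pre' as the reference level; the coefficient on 'post' estimates
# the pre-vs-post mean ADI difference (reported as 7.66 in the paper).
model = smf.ols("adi ~ C(phase)", data=df).fit()
print(model.summary())
```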
Examining the health literacy and socioeconomic disadvantage of patients connecting to TH before the TTF and after the TTF became available illuminates what is needed to support vulnerable patients with cancer. This study was approved by the Thomas Jefferson University Institutional Review Board as exempt from consenting patients, as the data collected were all part of routine patient care. | Study population A total of 555 patients were identified from medical records. Several patients were excluded from analysis due to missing ADI, as their residence lies within an area that does not have an ADI (i.e., a commercial/government area in Center City Philadelphia, PA). Patients who were unable to be reached by telephone to complete the health literacy questionnaire, or who were deceased at the time of data retrieval, were excluded from the relevant analyses, with 86 participants excluded in total (preintervention, n = 17; intervention, n = 21; and postintervention, n = 48). (Figure 1. Telehealth assessment: workflow for telehealth barrier assessment and possible solutions.) The total included in the analysis was 469. Patients were included regardless of geographic location. Patients were slightly more often female (53%) than male (47%), with an average age of 63.9 years (range: 28-89), and had diverse racial representation: Black (35%), White (59%), and almost 6% other races. Roughly 16.9% of the SKCC patient population is Black. Only 3% of patients identified as Hispanic or Latino versus 93% not Hispanic or Latino, with 4% declining to answer. | Telehealth visits During the preintervention period, 0.52% of the 15,126 scheduled appointments at the SKCC were TH (n = 79). During the intervention, 25.57% of the 16,764 scheduled appointments were TH (n = 4287). During the postintervention period, 19.85% of the 17,334 scheduled appointments were TH (n = 3446). The dramatic increase from less than 1% of patients being seen via TH to close to one-quarter of patients being seen in this fashion was driven by the "Stay-at-Home" orders and the pandemic, rather than by an attempt to change clinical care absent this force. Nonetheless, this is a profound shift in the day-to-day care of a cancer center that happened within a very short time span. In total, 1127 patients were assisted by the TTF from March 16, 2020 to December 12, 2020, with 596 of these interventions occurring during the TTF Intervention period. Of the 469 patients included in this analysis, 90 were preintervention, 194 were during the intervention period, and 271 were postintervention. One hundred percent of patients individually assisted by the TTF successfully completed at least one synchronous audio-visual TH encounter with an oncology clinician. | Interaction between TTF, HL, and ADI Successful engagement with TH in older adults significantly increased with the support of the TTF. The majority of patients across all time points were 55 or older (n = 440). Slightly more patients self-identified as female (n = 294) than as male (n = 261). Only 27 (30%) patients in the preintervention cohort were 65 or older, whereas 137 (50%) in the postintervention cohort were 65 or older (p < 0.001).
Fifty-nine percent of patients identified as White/Caucasian and 35% identified as Black/African American (n = 325, n = 193), with few patients identifying as Asian (n = 10) or Hispanic (n = 18). Telehealth utilization significantly increased among Black patients with the TTF intervention (25% to 40%, p < 0.001) (Table 1). The TTF engaged patients with a higher ADI, indicating that more patients with greater socioeconomic disadvantage were able to participate in TH with the support of the task force (mean difference = 7.66, 95% CI [1.00-14.32], p = 0.024) (Table 2 and Figure 2). In addition, the TTF increased successful access to TH in patients with low health literacy (adjacent categories ratio = 0.62, 95% CI 0.41-0.93, p = 0.021). Prior to the intervention, 6.7% of patients with low HL (6/90) successfully completed a TH visit, whereas 17.0% of patients with low HL (33/194) completed one during the intervention period (Table 3). | DISCUSSION Creation and deployment of personalized telehealth support, provided via the TTF, among cancer patients significantly increased use of TH for older adults, underrepresented minorities, patients with lower socioeconomic status, and those with low HL. The COVID-19 pandemic created an exponential increase in TH that will assuredly continue; effective strategies to close the 'digital divide' must be researched, deployed, and sustained, or we will further exacerbate adverse health outcomes and access disparities. 20,21 Utilization of digital health technology resources like e-PROs (electronic patient-reported outcomes) and online patient portals requires a combination of internet access and sufficient digital and health literacy. 22 This research calls out the need for future studies to examine digital media literacy and accessibility, including access to smartphone/tablet devices and email, understanding of downloadable applications, and patient willingness and trust in utilizing digital health technologies, including TH and e-PROs, for their cancer care. Our foundational understanding of the impact of targeted TH interventions can help inform future research and the clinical deployment of educational interventions, including TH educational materials and electronic health record orientation. Further efforts will be made to connect patients to digital health options early and often throughout their cancer care, especially as a means to make healthcare more accessible in times of need (i.e., seasonal illnesses, post-procedure, natural disaster). Our research shows that cancer patients from vulnerable populations are able, with the appropriate supports, to develop new technological skills and confidence in using TH and multiple digital platforms and applications to communicate with their providers. It should be noted that the SKCC was committed to ensuring access to TH resources for all patients and to that end allocated appropriate human and financial resources during the time of this study, without which this would not have been possible. Additionally, because of COVID-19, many communities have looked at similar disparities among their general populations and access to the technology resources needed to support education as well as access to healthcare. Philadelphia recently completed a Household Internet Assessment Survey, 23 which gives a broader picture of the challenges faced in our region and provides opportunities for collaboration around systemic interventions to address digital literacy, access, and readiness more comprehensively, which is ultimately needed.
Thomas Jefferson University and the SKCC are collaborators in this report. | LIMITATIONS This was a retrospective study, and thus the findings are limited as there was no control group. In addition, data are only included for patients who successfully used TH. During this time period, 3% (36/1120) of patients were still unwilling or unable to use TH despite access to the TTF. While this is a relatively small number, we do not know enough about this population to understand what additional needs were not addressed. Inclusion of these perspectives in future work is important to ensure digital equity across all populations. Also, though translation services are largely accessible and used across the SKCC, the institutional patient portal is currently only available in English and is therefore inaccessible to patients who are not able to understand written English or who did not have access to in-person assistance due to COVID-19 quarantine. AUTHOR CONTRIBUTIONS Brooke Worster: conception, design, manuscript preparation; Lauren Waldman: data collection, manuscript preparation; Gregory Garber: conception, design, data collection, manuscript preparation; Tingting Zhan: data analysis; AnaMaria Lopez: conception, design; Olivia Trachtenberg: project administration; Nathan Handley: conception, design, manuscript preparation; Kristin Rising: conception, design, manuscript preparation; Valerie Csik: conception, design; Amy Leader: manuscript preparation. All authors reviewed the results and approved the final version of the manuscript. F I G U R E 2 Area deprivation index (ADI) score by study phase. Scores range from 0 to 100, with higher scores representing more socioeconomic deprivation. Each depiction of a study phase includes a box plot where the outer boundaries of the box represent the standard deviation and the bolded center line represents the median. The curved lines around the box plot represent the distribution; a bulbous area therefore indicates a denser concentration of observations.
2022-10-11T06:16:37.811Z
2022-10-10T00:00:00.000
{ "year": 2022, "sha1": "937096ded24b8effa68fa29bb222159ea4f4a104", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.5176", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9aef4072b8671eb4f29c23de69bfb4deb35a0ec1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16209576
pes2o/s2orc
v3-fos-license
Will discrete symmetries help solve the hierarchy problem? We find that massless Higgs doublets at the GUT scale can be the natural result of a discrete symmetry. Such a mechanism does not require elaborate fine tuning or complicated particle content. The same discrete symmetry will also protect against proton decay and flavor changing neutral currents. However, this mechanism always predicts non-minimal standard models. An explicit example of how this mechanism works is also included. Introduction Ever since the first grand unified theory was proposed 18 years ago [1], physicists have been troubled by the hierarchy problem, i.e., the existence of two mass scales differing by approximately thirteen orders of magnitude. In terms of perturbative field theory, this implies an almost exact cancellation between the bare masses of the Higgs bosons and their radiative corrections. Since no symmetry prohibits these mass terms, it seems unlikely that such cancellations should occur. Two possible solutions have been suggested. The first approach is to introduce a new strongly coupled gauge interaction, known as technicolor, and to make the Higgs a composite particle, so that the hierarchy problem is automatically evaded. Unfortunately, this type of theory is not yet well understood due to its strongly coupled nature. We will turn our attention to the second type of theory: supersymmetry (SUSY). Supersymmetry is a symmetry between bosons and fermions. If supersymmetry is exact, the radiative corrections to Higgs mass terms will vanish due to the cancellation between diagrams with boson loops and those with fermion loops. Therefore, if its bare mass is zero at the grand unification theory (GUT) scale, the Higgs particle will remain massless down to the SUSY breaking scale, which can be made sufficiently small, thus solving the hierarchy problem. However, there remains one thorny problem in this rosy picture: the fact that the Higgs doublets have to be massless at the GUT scale. It may not seem unnatural at first, since lepton superfields are also massless. However, leptons come with color triplet partners (quarks), which are also massless at the GUT scale, while color triplet Higgs have to have GUT-scale masses in order to suppress proton decay. We will refer to this problem as the doublet-triplet hierarchy (the authors of Ref. [2] call it the "Second Hierarchy Problem"). Various authors have attempted to construct models that solve this problem without fine-tuning. One approach is the so-called "sliding singlet" mechanism [3], in which a singlet Higgs particle is introduced so that its vacuum expectation value (VEV) will almost completely cancel the doublets' otherwise large masses. Unfortunately, it was pointed out later [4] that this solution is not really the global minimum of the effective potential as hoped, and it is for this reason that no satisfactory models have been successfully built based on this idea. Another ingenious method of producing light Higgs doublets is the "missing partner" mechanism [5]. This method employs special group structure in the Higgs sector to give colored Higgs particles large masses while prohibiting the doublets from getting masses by using additional discrete symmetries. The drawback of this type of model is that it always requires very large representations, making the particle content somewhat complicated and aesthetically unpleasant. More recently, a third method was proposed [6] which involves a global SU(6) containing the gauge group SU(5) as a subgroup.
When the SU(6) spontaneously breaks, the doublet Higgs become pseudo-Goldstone bosons and remain light. However, it is not clear how to justify the introduction of this SU(6) group, since global symmetries are usually considered undesirable. There have also been a number of other ideas, but all of them seem to have fine-tuning hidden somewhere. We will explore instead the possibility of making doublet Higgs bosons light with a mechanism using nothing but a discrete symmetry such as the cyclic group C_N. This scheme is implemented in the framework of the supersymmetric SU(3)^3 GUT model [7], also known as the trinification scheme [8]. Such discrete symmetries are common results of string compactification, and are needed to prohibit proton decay [2]. However, we will consider the general case of a SUSY SU(3)^3 GUT with all phenomenologically feasible discrete symmetries and particle content, and not restrict ourselves to the specific model proposed in Refs. [7] and [8]. We find that this mechanism is possible only if the Higgs doublet which gives down-type quarks masses differs from the one responsible for the leptons. Therefore, this mechanism is not compatible with the minimal supersymmetric standard model (MSSM). However, flavor changing neutral currents (FCNC) won't be a problem because the same discrete symmetry which generates the doublet-triplet hierarchy also prohibits FCNC. We first describe the basic idea behind this mechanism in Section 2, and give some examples. In Section 3, we start to construct realistic models, and show that we require more than one pair of Higgs doublets. We then present an example of working non-minimal models in Section 4. Basic ideas Our scheme is based on the group SU(3)^3, with three types of matter supermultiplets corresponding to the quarks, the anti-quarks, and the leptons respectively. In the explicit assignment of particles [9], B is an additional superheavy down-type quark, B* is its anti-particle, and the E's are additional heavy leptons. When the lepton sector gains the VEVs v and w, both F- and D-flatness are satisfied. The number of light generations N_g will be equal to the difference between the number of supermultiplets and that of their mirror partners [10]. From the fact that U_g commutes with the unbroken symmetry, and that v does not equal w in general, we conclude that U_g must assume a diagonal form specified by three integers α, β and γ, defined mod 4. Using the fact that v and w must transform trivially under H′, it is easy to show that for any given set of α, β and γ, there exists a unique choice for how the multiplets containing v and w should transform under H. However, the reverse is not true, because sets of (α, β, γ) related by the also-unbroken U(1)_Y are actually equivalent. This means that (α, β, γ) is equivalent to (α + 1, β + 4, γ − 2), since we can redefine U_g by a U(1)_Y transformation. Hence we can always choose U_g so that, say, α = 0, and we need only consider theories with different (β, γ) values. Let us denote the numbers of multiplets Ψ_l and mirror multiplets Ψ̄_l which transform like i^x under H as n^Ψ_l(x) and n^Ψ̄_l(x) respectively. After symmetry breaking, these fields will break up into components, each labeled by four numbers: the first specifies the representation under color SU(3), the second the representation under weak SU(2), the third the hypercharge, and the fourth the C′_N charge. It is clear that the number n⁻_l(y) of negatively-hypercharged SU(2) doublets transforming under H′ like i^y can be expressed as n⁻_l(y) = n^Ψ_l(y − α − β − γ) + n^Ψ_l(y − α + γ) + n^Ψ̄_l(y + α − β). (2.2) Similarly, for the positively-hypercharged doublets, we have n⁺_l(y) = n^Ψ_l(y − α + β) + n^Ψ̄_l(y + α − γ) + n^Ψ̄_l(y + α + β + γ). Now, for these doublets to gain GUT-scale masses, the mass term has to pair a negatively-hypercharged doublet with a positively-hypercharged one carrying the opposite C′_N charge. The mass matrix will therefore be divided into N blocks, each of size n⁻_l(y) × n⁺_l(−y), where y goes from 0 to N − 1. If n(y) = n⁻_l(y) − n⁺_l(−y) is negative (positive), we will have |n(y)| positively (negatively)-hypercharged light doublets transforming like i^−y (i^y) under H′. Notice that it is only the difference between n^Ψ_l(x) and n^Ψ̄_l(−x) that counts. Defining Δn_l(x) = n^Ψ_l(x − α − β − γ) − n^Ψ̄_l(−x + α + β + γ), we can then rewrite (2.3) as n(y) = Δn_l(y) + Δn_l(y + β + 2γ) − Δn_l(−y + 2β + γ). (2.4) There is also a constraint on the Δn_l following from (2.1).
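To make the counting mechanism above concrete, the following Python sketch tallies the light doublets block by block, directly implementing the expressions for n⁻_l(y), n⁺_l(y) and n(y) above. The choice N = 4, the charges (α, β, γ), and the multiplet counts are hypothetical placeholders; they do not correspond to any model discussed in the paper.

```python
# Counting light Higgs doublets from a discrete C_N charge assignment,
# following the block-diagonal mass-matrix argument of eqs. (2.2)-(2.4).
# All multiplet counts and charges below are hypothetical placeholders.
N = 4                          # order of the cyclic group C_N
alpha, beta, gamma = 0, 1, 2   # C_N charges (alpha fixed to 0 via U(1)_Y)

# n_psi[x]   : number of multiplets Psi_l transforming like i^x under H
# n_psibar[x]: number of mirror multiplets Psibar_l transforming like i^x
n_psi = {0: 2, 1: 1, 2: 0, 3: 1}
n_psibar = {0: 1, 1: 0, 2: 0, 3: 0}

def n_minus(y):
    """Negatively-hypercharged doublets transforming like i^y (eq. 2.2)."""
    return (n_psi[(y - alpha - beta - gamma) % N]
            + n_psi[(y - alpha + gamma) % N]
            + n_psibar[(y + alpha - beta) % N])

def n_plus(y):
    """Positively-hypercharged doublets transforming like i^y."""
    return (n_psi[(y - alpha + beta) % N]
            + n_psibar[(y + alpha - gamma) % N]
            + n_psibar[(y + alpha + beta + gamma) % N])

# Doublets pair up block by block; the mismatch n(y) counts light survivors.
for y in range(N):
    n_y = n_minus(y) - n_plus((-y) % N)
    print(f"y={y}: n-={n_minus(y)}, n+(-y)={n_plus((-y) % N)}, light={n_y}")
```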
Models with one pair of Higgs doublets Now we can generalize to C_N symmetry. It is actually very straightforward: we simply replace i with i_N, the N'th root of unity, and all the statements in the previous section remain true. It turns out that for any N ≥ 4, we can always find a number of models, i.e. sets of α, β, γ and Δn_l(y)'s, which give the correct lepton and Higgs content of the MSSM. However, when we also take into account the quark sector, no successful model is possible. To see why this is true, let us investigate the constraints coming from the quarks. The analysis is parallel to that of the leptons in the previous section. We first write down the quark analog of (2.2). We know from the proton lifetime that there can be only three more light color triplets other than those we have just named above, which will be identified as the anti-particles of down-type quarks. This means that the three extra B quarks left over from the BB coupling have to couple with either d* or B* to gain GUT-scale masses. This in turn implies that either −y_L − 2α = y_R + γ (3.3) or −y_L − 2α = y_R − β − γ. (3.4) Let us first consider the case of (3.3). There is obviously no solution which satisfies both (3.6) and (3.7). The same conclusion can be drawn for the case of (3.4) in much the same way; hence we have demonstrated that our mechanism is not compatible with the MSSM if the discrete group is C_N. If we generalize the group to C_N1 × C_N2 × ..., everything we have said in this section remains valid. The only difference is that the variables x, y, α, β, γ are now vector-like, with the first component corresponding to C_N1, etc. The generalization to non-abelian discrete groups is more interesting, although it won't work either. Notice that an irreducible CKM matrix actually demands that the light quarks and anti-quarks be in one-dimensional representations of the discrete group. The Higgs also have to be in one-dimensional representations because we want minimal models, and then the same proof goes through. So far we have considered exclusively so-called "phase symmetry", i.e., the fields gain phases when operated on by the group. Another type of representation is so-called "permutation symmetry", where the fields are transformed into each other. Fortunately, all symmetries of this kind which are not equivalent to phase symmetries will be broken by the VEVs v and w, thus becoming irrelevant. Therefore our mechanism is not compatible with the MSSM for any discrete group. Realistic models with more Higgs doublets We are now ready to look at realistic models. We already know that they must contain at least two pairs of Higgs.
That translates into four Higgs doublets, each with its own H′ property, so that we won't run into trouble with FCNC. Together with three lepton doublets and three singlets, we can estimate roughly that about ten representations are needed. A simple computer routine finds two dozen such models with 3 pairs of Higgs using the group C_12. Readers should be warned that it has not been our intention to provide an exhaustive list of such models. We will give just one of these models here as an example, with N = 12 and a particular choice of charges and multiplet numbers. The quark sector is the same as we have discussed in the paragraph following (3.2), with a constraint on the choice of y_L(R), namely y_L + y_R = 4, where y_L can assume any value. It is obvious that only one pair of Higgs couples to the quarks; thus flavor changing neutral currents will not be a problem. Notice that this mechanism works for most discrete groups with more than ten representations of dimension one. For example, it will work for the group C_2 × C_2 × C_3, which appears naturally in Ref. [7], although the particular model in the said reference does not exhibit this mechanism. Remarks We have shown that the doublet-triplet hierarchy problem can be solved in a natural way with a discrete symmetry. We have also established that at least two pairs of Higgs are required. If nature should indeed choose this mechanism, it would be very difficult to further determine the details of the mechanism, such as the group C_N, the Δn's, etc., without a complete knowledge of SUSY breaking. In other words, at this stage all the "good" models seem to be equally successful in most respects. One exception is the number of Higgs, because it will affect how the coupling constants run. It is well known that the three coupling constants intersect each other at one point in the MSSM [11]. Additional Higgs particles will surely spoil this nice result. Fortunately, SU(3)^3 is not a real GUT group in the rigorous sense: it still allows three distinct coupling constants even if one goes above the symmetry breaking scale v. These coupling constants will instead be united at the supposed string scale s. With two pairs of Higgs, it turns out that s can easily be made small enough to coincide with the generally accepted value. With three pairs, s tends to be higher than desired.
2014-10-01T00:00:00.000Z
1992-11-19T00:00:00.000
{ "year": 1992, "sha1": "ab812270546b463d506628116cd355ed57586dea", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/9211277", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "48b13589932c25ab51e16548414cb557a911b832", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54784850
pes2o/s2orc
v3-fos-license
Teaching with Literature: The Needs of Indonesian Islamic Universities Reading literary works helps learners grow linguistically, personally, culturally, and spiritually. However, researchers in the field of ESL and EFL have not conducted adequate analysis of the use of literature as a resource, particularly in multi-layered educational contexts like Indonesian Islamic universities, where the values embedded in literature might be in conflict with each other. This research therefore aims to provide a thick description of the target needs and the learning needs of teaching with literature in such a context. A case study with qualitative and quantitative methods of data collection was conducted. A questionnaire was distributed to 30 students, and a semi-structured interview was conducted with five lecturers from three Islamic universities. Major findings show that short stories with the topics of noble character, self-empowerment, freedom, code of conduct, and greed are preferable to novels, drama, and poems. The stories, in both their simplified and original versions, should be used to teach language skills and to inculcate global, national, and Islamic values within the CTL framework. Similarities in values are to be the basis of teaching universal values, while their differences are to strengthen cross-cultural understanding. INTRODUCTION In the last decades, there has been wide interest in how to use literature in English language teaching (ELT), because it is beneficial to students' linguistic competence, intellectual capacity, social awareness, and cultural understanding. Literature provides promising learning material as it possesses spectacular features (Khatib, et al., 2011: 207); is a natural resource (Chalikendy, 2015: 233); is primary material for communicative language teaching (Mohammad, et al., 2012: 36); and promotes creative skills (Choudhary, 2016: 1). Thus, literature deserves a place in the context of English as a Second Language (ESL) and English as a Foreign Language (EFL) teaching seeking to develop communicative competence and character education. The rich body of related research in the field shows that literature is pedagogically, linguistically and culturally advantageous. However, less attention has been paid to its use to teach English in Indonesian Islamic universities, a particular institutional context whose tradition and education philosophy is, to a certain extent, different from those of the West. Diallo (2012: 175) argues that differences between the Islamic tradition and the Western tradition bring about divergent pedagogical and epistemological implications. Rohmah (2012: 157) argues that the failure to bridge the two traditions might create tension among learners. Thus, within the Indonesian context, English materials should include Islamic messages. Meanwhile, Adebayo (2010: 198) reports that European publishers sometimes present EFL texts and illustrations which are insensitive to Islamic idioms or symbols. The same is true of the Islamic symbols used in A Thief's Story, which seem insensitive for English learners at Indonesian Islamic universities (see Kasser and Silverman, 1986: 102-103). Thus, an investigation of what learners and teachers expect from reading literature should be conducted. Only then can literature be used to foster the growth of students' heart, head, and hand.
The present study will fill in the gap by investigating the target needs and the learning needs of teaching with literature in the English Departments of Islamic universities. The aims of the study are formulated into the following research questions: 1. What are the students' target needs and learning needs of teaching English with literature? 2. What are the teachers' target needs and learning needs of teaching English with literature? The findings of this research will shed some light on designing an appropriate literature-based reading model for students of the English Department at Islamic universities within the Indonesian context. LITERATURE REVIEW Literature and EFL Stern (1991: 330-337) holds that literature can be integrated with the mastery of language (vocabulary and grammar) and with the language skills (listening, speaking, reading, and writing). It serves as a good context for teaching idioms, culture-tied words and phrases, and grammatical structures; helps learners build literal understanding, inferential skill, and evaluation skill; provides a model for writing activities; and serves as a basis for oral activities through role playing and oral reading. Nevertheless, it is not always easy to identify which texts are literary and which non-literary. Although specific features like intertextuality and the foregrounding of language often characterize a literary text, those features are relative rather than absolute, since figurative language like metaphors and similes often occurs in everyday colloquial conversation (Lazar, 2009: 7). For the purpose of this study, the term literature refers to poetry, song, fiction, drama, essay, biography, and philosophical and religious texts (Maley, 2012: 302). When it is hard to categorize whether a text is literary or non-literary, teachers should ask themselves whether the text is linguistically, culturally, and spiritually 'sweet and useful' (dulce et utile) for the students or not. The use of literature in the language classroom has been approached in two main ways. First, 'literature as study' aims at teaching about literature, or the knowledge of literature, in order to gain qualification in literary study (Maley, 2012: 303). Synonymous with this approach are the traditional approach (Hall, 2005: 49), literature as content (Lazar, 2009: 24), and teaching literature as literature (Carter and McRae, 1996: xxiii). These approaches emphasize the study of canonical works, the writer's biography and influences, stylistic study, literary criticism, the moral agenda, and historical and socio-cultural information about the text. The proponents of these approaches view a literary text as sacrosanct, in that it should not be altered grammatically, extended, or cut up. In other words, a literary text should not be adapted or simplified. Second, 'literature as resource' centers upon the notion of teaching with literature, in that literary texts are used as springboards to engage with other language learning activities (Maley, 2012: 303). Other related terms include the communicative language teaching approach (Hall, 2005: 49), the language-based approach (Lazar, 2009: 23), and teaching literature as language (Carter and McRae, 1996: xxii). This approach implies that a literary text should not be treated differently from other texts, so that it may be cut up or adapted to suit the instructional goals.
Cox (2012: 2-3) states that reading literature includes both efferent reading and aesthetic reading, in that readers explore the information in the text and associate it with their feelings, attitudes, and experiences. Treating a literary text as just another text, without linking it with students' feelings and experiences, might not be appropriate. Yet exposing less proficient students to the stylistic and socio-cultural aspects of a work is not wise either. Hence, this paper uses the term 'teaching with literature' to denote the teaching of linguistic elements prior to the values embedded in the work. Teachers might want to expose central vocabulary and certain grammatical points before asking the students to respond to the ideas, messages, morals, and values of the text. Models of Teaching Literature The fundamental question of why literature should be taught has led to theories of three models of teaching literature: the cultural model, the language model, and the personal growth model (Carter and Long, 1991: 2-10). First, the cultural model treats literature as a source of information about the target culture and aims at appreciating different cultures and ideologies. Rashid, et al. (2010: 89-90) point out that the cultural model is a traditional approach to teaching literature, since the teacher's role is to pass on knowledge about the text while the students' role is to discover the social, political, and historical context of the given text. Second, the language model uses literature as an instrument to teach mainly the linguistic forms or features. Bibby and Mcllroy (2013: 19-20) maintain that the model tends to be more psycholinguistic, since it focuses on how language is used within the given text. Third, the personal growth model is rooted in the idea of helping the students to read and engage with the literary text by relating the themes of the text to the students' personal experiences. Violetta-Irene (2015: 75-76) notes that the model is meant to help students grow and mature individually and socially. This study mainly uses the language model as the steppingstone to teaching literature. Cultural issues and personal experience are emphasized once students have been adequately exposed to the linguistic features. Thus, the cultural model and the personal growth model are used in addition to the language model, so as not to make a literary text just another text. As such, a literary text helps students grow linguistically, personally, culturally, and spiritually. Indonesian Islamic University: A Particular ELT Context While Moslems constitute a large number of ESL and EFL learners all over the world, such that English expressions related to Islam have been included in a comprehensive English dictionary (Ali, 2007: 32), the Indonesian Islamic university is one of the settings where a vast number of Moslem students learn English. The university is particular in that it is built upon three pillars: Islamic faith (aqidah), worldly matters (muamalah), and Islamic morals (akhlaqul karimah) (Muhadjir, 2011: 309). It is this vast number of learners and the particularity of the educational setting that make the Indonesian Islamic university deserve the attention of ELT practitioners.
An Islamic university is pedagogically and epistemologically attached to Qur'anic revelation, prophetic tradition (Sunna and Hadith), and the opinion of the righteous predecessors (qawl al-salaf al-salih) (Diallo, 2012: 175). It places the spirit of Qur'anic revelation and Prophetic precedent at the heart of education and as the glue of the entire curriculum (Halstead, 2005: 525). Such pillars and principles are supposed to shed light on the EFL goals, syllabus, and classroom activities of any Islamic university.

It is in the above context that an investigation of the needs of Indonesian Islamic universities is of real significance. By the needs we mean the target needs and the learning needs. The former refers to what the students need to do in the target situation. It is pertinent to what is necessary for students to learn (necessities); what aspects of the subject the students lack (lacks); and what the students want to learn (wants). The latter deals with what the students need to do in order to make learning happen (Nation & Macalister, 2010: 24-25). It covers the analysis of goals, input, setting, procedure, teacher's roles, learners' roles, and so forth.

Participants

A total of 5 lecturers and 95 students of Indonesian Islamic universities participated in this study. The lecturers who participated in the interview phase were lecturers of the English Language Education Department at Islamic universities: two from Muhammadiyah University of Metro (MUM), one from Ma'arif Islamic Institute NU of Metro (MIINM), and two from State Islamic Institute of Metro (SIIM), Indonesia. The students involved in the questionnaire phase were second-year students of the English Language Education Department of SIIM. All participants involved in the research were from the Province of Lampung, Indonesia.

Instrumentation

Two instruments were used to collect the data: an interview and a questionnaire. The semi-structured interview consisted of 12 items. The questions were pertinent to the target needs (necessities, lacks, wants) and learning needs (goals, input, procedure, learners' role) of teaching with literature at Indonesian Islamic universities.
The questionnaire consisted of 41 items in the form of a four-point Likert scale. The items covered the topics of the literary texts, text comprehension level, forms of literary works, goals of instruction, text source, text version, text length, materials distribution, and students' expectations of the teacher's role. The items were conceptually validated by experts in the field of reading instruction and literature teaching.

Procedure

A case study was used as an approach to elicit the perspectives of the involved participants (Gall et al., 2007: 447). Although it was basically qualitative research, a quantitative method was applied in validating the instruments and displaying the data. This case study design included five steps. First, the items of the interview and questionnaire were validated by experts in the field. Second, the interview was administered to five English lecturers to gain qualitative data. The lecturers are representatives of a state Islamic university, a private Islamic university affiliated with the Nahdhatul Ulama organization, and a private Islamic university affiliated with the Muhammadiyah organization. Third, the qualitative data and interpretations were validated through the member checking technique, in that the researcher asked one of the participants to check the accuracy of the interview description. Fourth, the questionnaire was piloted with 65 students to ensure its reliability and validity; Cronbach's alpha was calculated to be 0.805 (an illustrative computation of this reliability check is sketched below). Fifth, the reliable questionnaire was distributed to another 30 students to gain quantitative data. Quantitative analysis was used to support the interpretation of the qualitative findings.

FINDINGS AND DISCUSSION

In general, it was found that the use of short stories with rich topics is preferable to that of poems, novels, and drama. The short stories are expected to be a vehicle to teach micro-skills and macro-skills of reading as well as a springboard to explore global, national, and Islamic culture. Meanwhile, the topics, which range from noble character to greed, are to be related to Islamic values. The remaining notable findings are discussed further in the following sections.

The Students' Target Needs and the Learning Needs of Teaching English with Literature

The data on students' perspectives related to the sub-items of the target needs (necessities, lacks, wants) and the learning needs (goals, input, setting, and teacher's role) were ranked in order to gain a general description. The results from the ranking are presented in Table 1. The elaboration of each sub-item is further presented in the form of rating tables so as to provide a richer description.

Target needs

For this research, the target needs analysis is directed at gathering information about: (1) necessities, which is mainly about the topics to be presented through the literary texts; (2) lacks, which is particularly pertinent to the students' understanding level of certain topics; and (3) wants, which is related to the forms of literary works that the students need to read. The results are presented in the tables below.

Table 2 shows the topics of the literary texts that students wish to read. More than half of the students rated noble character as an absolutely necessary topic (73.3%) and self-empowerment as a necessary topic (56.7%). The findings also reveal that few students think that the topics on human and society as well as the environment are unnecessary. This is likely due to the rare exposure to the last two topics mentioned.
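Returning to the reliability step described in the Procedure subsection: the sketch below shows how Cronbach's alpha can be computed for a respondents-by-items matrix of four-point Likert ratings. The pilot data here are randomly generated placeholders rather than the study's responses, so the sketch illustrates the computation only and will not reproduce the reported value of 0.805.

```python
# A minimal sketch of the reliability check from the Procedure subsection:
# Cronbach's alpha for a four-point Likert questionnaire. The pilot data
# below are hypothetical placeholders, not the study's responses.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of Likert ratings (1-4)."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
pilot = rng.integers(1, 5, size=(65, 41))        # 65 respondents, 41 items
print(f"alpha = {cronbach_alpha(pilot):.3f}")
```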
Table 3 is related to the students' self-assessment of their understanding of particular topics. A majority of respondents comprehend the texts on friendship, more than half of the students are good at understanding the topics on peace (56.7%) and love (56.7%), and half of them are good at texts on heroism (50%) and bravery (50%). The students' understanding of freedom, code of conduct, and greed appears to be low, which implies the need to include more texts on these topics. Khairuddin et al. (2014: 128-129) report that Muslim Malaysian undergraduate students tend to read texts about personal relationships, Islamic studies like the Qur'an and the Prophet's tradition, and personal development like motivation and seeking knowledge.

Table 4 shows the forms of literary works that the students wanted to read in the Reading class. It appears that the short story was the most wanted form, as none of the respondents perceived it as unnecessary. However, drama and novels were also welcome.

Learning needs

The learning needs analysis is aimed at collecting relevant information on: (1) goals, or the things that the students ought to learn in a Reading class; (2) inputs, which cover the cultural background of the texts, the version of the texts, and the length of the texts; (3) setting, which mainly deals with the materials distribution preferred by the students; and (4) teacher's role, or the activities that the students want from a lecturer of reading. The results are presented in Table 5.

Table 5 shows what the respondents prefer to learn in a Reading class. More than half of the students strongly agree to learn reading strategies (66.7%), reading comprehension (60%), vocabulary and grammar (56.7%), and reading speed (56.7%). More than half agree to learn values (60%) and cohesive devices (73.3%). Interestingly, all respondents agreed to include values in addition to reading micro-skills and macro-skills.

Table 6 presents the cultural background of the texts that the respondents wanted to read. All of the respondents agree (46.7%) or strongly agree (53.3%) to read texts from British and American culture. Most of the respondents disagree (60%) with reading texts from outer circle countries like India and Singapore. Some respondents suggested that the texts be taken from Korean, Japanese, Middle Eastern, and Indonesian cultural backgrounds.

Table 7 shows that the respondents tend to prefer reading the simplified version of a literary work (43.3% agree, 36.7% disagree). However, although 36.7% of the respondents disagree with reading the original version, the remaining 63.3% stated that they wish to read the original ones. This indicates that both simplified and original versions of the texts could be included. The use of simplified and original literary works has been observed by Khanum (2016: 43) in the context of literature-based materials in Bangladesh, who noticed that 'in case of using original form of literature simplified versions should be used.' Zhen (2012: 38) also reports that in the EFL context in China, exposing students to original texts without their simplified versions is not suitable for students with a limited command of English. Thus, the simplified version of a literary work can be used as a springboard for appreciating the original version.

Table 8 shows that the majority of the students have a positive attitude toward reading texts of 1 to 6 pages; around 66.7% of the respondents rejected text lengths of more than 6 pages.

Table 9 is related to the distribution of the reading materials: 73.3% of the respondents would like to receive the materials weekly (26.7% disagree), and 63.3% would like to receive them all in the first meeting (36.6% disagree or strongly disagree).
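The rating tables summarized above report, for each questionnaire item, the percentage of respondents choosing each point of the four-point scale. The sketch below shows one way such a row can be tabulated; the response vector and category labels are hypothetical, chosen only so that the output mirrors the 73.3% reported for noble character in Table 2.

```python
# A sketch of tabulating one questionnaire item into the percentage
# ratings reported in Tables 2-10. The responses are hypothetical.
import pandas as pd

LABELS = {1: "unnecessary", 2: "less necessary",
          3: "necessary", 4: "absolutely necessary"}

# Hypothetical answers of 30 students to the item "noble character".
responses = pd.Series([4] * 22 + [3] * 6 + [2] * 2, name="noble character")

pct = (responses.map(LABELS)
                .value_counts(normalize=True)
                .mul(100).round(1)
                .reindex(list(LABELS.values()), fill_value=0.0))
print(pct)  # "absolutely necessary" comes out at 73.3
```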
Table 10 shows a list of the most wanted roles that the respondents want their lecturers to play, in order of preference: role model, resource developer, information provider, facilitator, assessor, planner, participant, and manager. However, a few students do not expect the teacher to act as facilitator (3.3%), assessor (6.7%), or manager (3.3%).

In sum, the findings on the target needs gathered from the questionnaire imply that short stories about noble character, self-empowerment, freedom, code of conduct, and greed are most needed. Meanwhile, the findings on the learning needs imply that the reading class should include micro-skills, macro-skills, and values. Both simplified and original texts with a length range of 1-6 pages are needed. Reading lecturers are expected, above all, to be role models and resource developers.

The Teachers' Target Needs and the Learning Needs of Teaching English with Literature

All the interviewees are reading lecturers except one from SIIM, who is a senior lecturer and a curriculum developer. As the subjects of the research, the lecturers were coded as S1, S2, S3, S4, and S5. The interview was conducted during October 2016. The questions of the semi-structured interview were related to the target needs and learning needs. The findings of the interview are displayed in the tables below.

Table 11 shows that all respondents agree to integrate the Islamic values of faith, worldly matters, and morals into the reading materials. A reading text should be utilized to improve students' intellectual, social, and spiritual domains as well as their environmental awareness. The text is to be connected with Muhadjir's (2011: 309) 'three pillars of Islamic university'; Diallo's (2012: 175) notion of Islamic pedagogy and epistemology; and Halstead's (2005: 525) opinion on the Islamic concept of education, which is based on Qur'anic revelation and Prophetic tradition. Thus, the values underpinning the developed model should be derived from Qur'anic verses, prophetic tradition, and the opinion of the righteous Muslim scholars. The values are believed to be universal, as Islam is a mercy for all creatures (rahmatan lil 'aalamin).

Table 12 is related to the topics which are either easy or difficult for the students. Most lecturers found that topics related to daily life (love, friendship, religion) tend to be easier than those related to science and medicine. The stories from the prophetic tradition were easy to understand, as the students already possessed the relevant schemata for the stories. However, a religious text with Islamic technical terms is not always easy to understand, for the English equivalents of such terms are not always available in an English dictionary. To cope with this, glosses, that is, explanations of difficult words or phrases, should be added to the reading text in an engaging way to help students with difficult topics.

Table 13 is related to the feasibility of using poems, novels, drama, and short stories in a Reading class. Although all forms of literary work are usable, the short story is the most feasible, for it suits the time allotment of reading activities in the class. It is a shorter text that fits in-class reading and supports in-class discussion. However, a short story can be complicated in terms of language and message, in which case it needs to be adapted to fit the students' language proficiency and intellectual capacity.
Table 14 shows the learning needs, particularly the goals of teaching reading. The respondents agree that a reading class should go beyond the micro-skills of reading (word, sentence, cohesive device) and the macro-skills of reading (inference, guessing meaning, activating schemata, applying reading strategies). Rohmah (2012: 164) reports that teachers and students in Indonesian Islamic schools 'are in need for English materials with some Islamic messages.' Thus, English learning will be more meaningful when it accommodates students' spiritual domain.

Table 15 is pertinent to the questions on text source, version, and length. All respondents include the Islamic world as a source of the reading texts, in addition to inner circle, outer circle, and expanding circle countries. A study by Makhdoom (2014: 420) reports that the use of various sources will encourage teachers to include indigenous literature so as to reduce the hegemony of the Western literary text. Qiping & Shubo (2002: 323) also report that texts from the students' cultural background and other cultures are to be designed within the framework of Contextual Teaching and Learning (CTL) so that the students will connect the text with their personal, social, and cultural contexts. Literary texts that are linked with students' memories, feelings, and imagination will amplify educational, cultural, and even economic values.

With regard to the text version and length, it seems that most respondents agree to utilize both simplified and original versions. While the suggested text lengths range from 1 to 6 pages, most respondents suggest 1-2 pages. It is safe to state that 2-4 pages will be a moderately suitable text length.

Table 16 is related to the efforts that a lecturer should make for an effective literature reading class. The efforts can be broadly categorized into planning, implementing, and evaluating. The planning stage includes: stating instructional goals; developing learning materials; planning a student-centered classroom scenario; and designing classroom activities. The implementing stage covers: activating prior knowledge, providing relevant information, and modelling activities. The evaluation stage embraces assessing reading skills.

Table 17 shows the lecturers' expectations of the students' roles. The students are expected to be independent learners, active participants, problem solvers, peer tutors, and team workers. These roles would improve the students' reading skills and help form good reading habits.
In sum, the findings on the target needs gathered from the interview imply that the literary texts should incorporate Islamic values emphasizing noble character. The materials, mostly needed in the form of short stories, need to be accompanied by glosses to help students with difficult words, phrases, and terms. Meanwhile, the findings on the learning needs imply that the course should go beyond teaching micro-skills and macro-skills within the CTL framework. The texts ought to represent global, national, and Islamic contexts, with an average length of 2-4 pages. A scheme combining original and simplified short stories is needed. Further, proper classroom activities and tasks are needed to help the learners become independent learners, active participants, problem solvers, peer tutors, and team workers.

CONCLUSION

With regard to the target needs, this research showed that teaching English with literature will be effective when it utilizes short stories with various topics such as noble character, self-empowerment, freedom, code of conduct, and greed. The stories should be related to Islamic values and equipped with glosses of difficult words, phrases, and expressions. Meanwhile, the learning needs showed that teachers should utilize literature, in both simplified and original versions, to teach micro-skills, macro-skills, and values within the framework of CTL. Besides, the texts to be used should represent global, national, and Islamic cultural backgrounds. The concluding remarks of this research will help future researchers to design contextually relevant learning materials and to develop a model of teaching with literature within the particular educational context of Indonesian Islamic universities.

Table 1. The rank of sub-items of students' needs analysis
Table 2. Necessities: ratings according to the topics of the literary text
Table 3. Lacks: ratings according to text comprehension level
Table 4. Wants: ratings according to the forms of the literary works
Table 5. Goals: ratings according to the goals of instruction
Table 6. Input: ratings according to the text source
Table 7. Input: ratings according to the text version
Table 8. Input: ratings according to the text length
Table 9. Setting: ratings according to materials distribution
Table 10. Teacher's roles: ratings according to students' expectations of the teacher's roles

Table 11. The necessities
- The pillars of an Islamic university (tauhid, muamalah, and akhlakul karimah) should be integrated into the reading materials (S1-S5)
- The topics in reading class should help students grow personally (S1-S5), socially (S3), intellectually (S4), and spiritually (S3, S4)
- The topics should also make the students aware of environmental issues (S1-S5)
- Reading texts ought to be connected with the verses of the Holy Qur'an (S4)

Table 12. The lacks
- Most students deal positively with the topics on: current phenomena (S1); love (S1, S2, S3); friendship (S2, S3); daily lives (S3); the story of the prophets (S4); and the story of the companions (S4)
- Most students find it difficult to understand scientific articles (S1, S2, S4), journals (S1), and medical texts (S3)
- English texts with specific Islamic terms tend to be harder than scientific texts; mostly, the terms are not available in an English dictionary (S5)

Table 13. The wants

Table 14. The goals
- The reading of literary texts followed by role-play or 'acting it out' activity is preferable (S2)

Table 15. The inputs
- The source of the texts could be inner circle countries (S1, S2, S3, S5), outer circle countries (S2), expanding circle countries, mainly Indonesia (S1, S2, S3), and Islamic literature (S1, S2, S3, S4, S5)
- The reading texts should be selected within the framework of Contextual Teaching and Learning (CTL) (S2)
- Both simplified and original texts could be used (S1, S2, S3, S5); the simplified text is preferable (S4)
- The text length could be 1-2 pages (S2, S3, S4), 3-5 pages (S5), or 5-6 pages (S1)

Table 16. The procedure
- Lecturer should help the students gain their fullest potential of reading (S3)
- Lecturer should develop relevant learning materials (S3)
- Lecturer should plan an interesting, engaging, and student-centered classroom scenario (S3, S4)
- Lecturer should manage group assignments and a peer teaching scheme (S4)
- Lecturer should activate the students' prior knowledge (S1)
- Lecturer should provide information that helps students bridge the content of the text with their personal, social, and spiritual life (S1, S2)
- Lecturer should model activities for the students through reading aloud and paraphrasing (S5)
- Lecturer should assess the students' active participation during the class (S3)

Table 17. The learners' role
- Students should be able to work in teams, solve problems arising from the text, and tutor their peers
Adaptation to High Temperature and Water Deficit in the Common Bean (Phaseolus vulgaris L.) during the Reproductive Period

This paper reviews adaptation to heat and drought stresses in Phaseolus vulgaris, a grain and vegetable crop widely grown in both the Old and New World. Substantial genotypic differences are found in morphophysiological characteristics such as phenology, partitioning, plant-water relations, photosynthetic parameters, and shoot growth, which are related to reproductive responses. The associations between (a) days to podding and leaf water content and (b) the number of pods per plant and seed yield are consistent across different environments and experiments. Leaf water content is maintained by reductions in leaf water potential and shoot extension in response to heat and drought stress. Heat-tolerant cultivars have higher biomass allocation to pods and higher pod set in branches. These traits can be used as markers to screen germplasm for heat and drought tolerance. In this paper, we briefly review the results of our studies carried out on heat and drought tolerance in the common bean at the Tropical Agriculture Research Front, Ishigaki, Japan.

Introduction

Transitory or constantly high temperatures cause an array of morphoanatomical, physiological, and biochemical changes in plants, which affect plant growth and development and may lead to a drastic reduction in economic yield. The adverse effects of heat stress can be mitigated by developing crop plants with improved thermotolerance using various genetic approaches [1]. However, achieving this requires a thorough understanding of the physiological responses of plants to high temperature, the mechanisms of heat tolerance, and potential strategies for improving crop thermotolerance.

The common bean (Phaseolus vulgaris L.) is originally a crop of the New World [2], but it is now grown extensively in all major continental areas [3]. Its production spans from 52°N to 32°S latitude [4] and from near sea level in the continental US and Europe to elevations of more than 3000 m in Andean South America. The common bean has two major gene pools [5], the Andean and the Mesoamerican, based on their centers of origin in South and Central America, respectively [6]. Within these gene pools are a total of six races, including three Mesoamerican (Mesoamerica, Durango, and Jalisco) and three Andean (Peru, Nueva Granada, and Chile) [7,8]. An additional Mesoamerican race, designated Guatemala, includes certain climbing beans from Central America [9].
After domestication, the common bean spread across Mesoamerica and South America and, after the European discovery of the Americas, to Europe and Africa, where it was cultivated in diverse environments and agricultural conditions [10]. As much as 60% of bean production in the developing world occurs under conditions of significant drought stress [11]. This includes large areas in Mexico and Africa, where the growing season is short and the rainfall unreliable; regions of Central America, where beans are planted after maize and may be subjected to the abrupt cessation of the rains; and areas of Brazil, where overall rainfall may be adequate but the growing period is interrupted by significant periods without precipitation. In the highlands of Mexico, beans are subjected to extended periods of intermittent drought. The only traits that have proven to be valuable in tolerating both terminal (end-of-season) and intermittent drought are earliness and partitioning toward reproductive structures, resulting in a greater harvest index [12,13]. Bean breeders in Mexico have developed bean cultivars with indeterminate prostrate growth habits similar to pinto bean landraces in the semiarid highlands [14]. Cultivars such as Pinto Villa use phenotypic plasticity to respond to intermittent drought [15]. Interracial and intergene-pool crosses have been made in Mexico to combine different drought tolerance traits [16].

In lowland environments, terminal drought stress can be aggravated by high temperatures [11]. In Central America and the Caribbean, breeders have focused on heat as a constraint to expanding bean production in the lowland tropics [11,17]. They have made significant progress in developing bean cultivars with improved levels of heat tolerance [18,19]. On the subtropical island of Okinawa, Japan, vegetable production in the summer season is very difficult due to high temperatures and intense solar radiation, along with associated effects such as drought and infestation by insects and other pests [20]. High temperature in the summer causes drastic reductions in common bean yield [21][22][23][24]. The heat-tolerant cultivar Haibushi was developed by the Okinawa Subtropical Station (now the Tropical Agriculture Research Front), JIRCAS, Okinawa, by screening germplasm collected from Southeast Asian countries [25].

Development during the reproductive growth stage in the common bean is sensitive to temperature. High temperatures during this stage result in a reduction in pod and seed set due to enhanced abscission of flower buds, flowers, and pods [26][27][28]. Pollen-stigma interaction, pollen germination, pollen tube growth, and fertilization are all negatively affected by high temperature [29][30][31][32], with the lowest pod set observed in plants exposed to high temperature 1-6 days prior to anthesis [11]. Exposure to 35/20°C or 35°C reduced pollen viability (evaluated by pollen staining) [31]. Lower pod and seed set caused by high temperature at anthesis (32/21°C [29] and 35/20°C [28], respectively)
were related to pollen injury, as assessed by pollen stainability and reciprocal pollinations. Continued exposure to 35/20°C did not affect embryo sac structure, but fertilization failed and the embryo sac degenerated after anthesis [33]. Lower pod and seed set after the exposure of common bean plants to high temperature (32/27°C) are the combined result of both lower pollen viability (evaluated by pod and seed set resulting from reciprocal hand pollination) and impaired female performance in a large proportion of the flowers [32]. High temperature (33/25°C) affects the endoplasmic reticulum structure and blocks its function in the tapetum, inducing earlier-than-usual degeneration of the tapetum. Pollen sterility is associated with tapetal degeneration [34]. Weaver et al. [35] reported a close relationship between pollen stainability and tolerance to high-temperature stress among bean selections. Pollen staining by acetocarmine has been used widely for the rapid determination of pollen sterility occurring under environmental stresses [36,37]. A highly positive correlation was observed between pod set and pollen stainability in flowers that were affected by high temperature (32/28°C for 24 h) 8 to 11 days before anthesis [23], which corresponds to the early microspore stage in the common bean [38].

It is recognized that high temperature affects many physiological processes, including photosynthesis and the translocation of photosynthetic products, across a wide range of crops [39][40][41]. For example, in studies on birch trees, river birch was found to maintain high net photosynthetic rates (Pn) at high temperatures ranging from 25 to 40°C, while the Pn of paper birch was reduced the most. Inhibition of Pn at higher temperatures was due largely to nonstomatal limitations in both taxa [41]. At high temperature (40°C), Norchip, the most heat-tolerant cultivar of potato, synthesized small heat shock proteins for a longer time period than the other cultivars. The levels of an 18 kDa small heat shock protein increased up to 24 h in Norchip and Desiree, which are heat-tolerant cultivars, whereas the levels started to decrease after 4 h in Russet Burbank and after 12 h in Atlantic, which are heat-sensitive cultivars [39]. Suzuki et al. [42] examined the effect of succinic acid 2,2-dimethylhydrazide (SADH) on the drought tolerance of bean plants. In SADH-treated plants, the leaf water potential below which the photosynthetic rate decreased was lower than that in control plants. The effects of phenological adjustment and shoot biomass distribution on the seed yield of drought-stressed common bean were assessed in two locations in Mexico [43]. Days to flowering and days to physiological maturity showed a negative and significant relationship with seed yield. Under drought stress, a significant reduction in the harvest index was observed in susceptible cultivars. Genotypic variation under drought stress was detected in all partitioning indices, chiefly the harvest index and relative sink strength [44]. The crop faces water deficit due to excessive transpiration caused by high temperature (31/27°C) [45]. Even short diurnal fluctuations in the plant's water status [46] at the time of anthesis could adversely affect the development and function of its reproductive organs [24].
Phenological adjustment, plant-water relations, photosynthetic parameters, and shoot growth are all related to reproductive responses and thus may play an important role in heat and drought tolerance in the common bean. In this paper, we reviewed the results of our own studies on the above factors, focusing on photosynthesis in relation to leaf water status, genotypic differences in water status in relation to reproductive responses, genotypic differences in drought tolerance in relation to vegetative growth, and the seasonal performance of cultivars, to elucidate the way in which heat tolerance and water deficit are related to reproductive responses in the common bean.

Photosynthesis in Relation to Leaf Water Status

Transpiration rates vary within a narrow range across cultivars [47]. This indicates that the effect of high temperature on the biochemical factors controlling intercellular CO2 assimilation is similar in all the cultivars. The midday leaf water potential decreases with increasing air temperature, but the decline is greater in the heat-tolerant cultivar Haibushi and strain Ishigaki-2 than in the remaining cultivars/strains. A steeper water potential gradient from soil to plant may enhance the ability of plants to absorb water at a faster rate [48]. This would reduce the development of severe internal water deficit in the reproductive organs and increase their survival and growth. Sinclair and Ludlow [49] support our assumption that photosynthesis, protein synthesis, NO3 reduction, and leaf senescence are better correlated with changes in tissue water content than with leaf water potential. It is worth noting that the heat-tolerant cultivar Haibushi and strain Ishigaki-2 display an association between (a) photosynthesis and leaf conductance and (b) leaf water potential, while this is absent in the heat-sensitive cultivars [47]. This indicates that the heat-tolerant cultivars possess better stomatal control over CO2 and H2O exchange in leaves in response to high temperature. This is evidenced by the fact that the sensitive cultivar Kentucky Wonder and strain 92783 show greater water loss [50].

Genotypic Differences in Water Status in Relation to Reproductive Responses

Haibushi, a heat-tolerant cultivar, displays better leaf water status than Kentucky Wonder, a heat-sensitive cultivar, which exhausted soil water quickly, resulting in a greater deterioration in water status [51]. The reduction in leaf water content with water potential occurred faster as temperature increased and was larger in the heat-sensitive than in the heat-tolerant cultivar [52]. Under field conditions, strains 86884 and 92783, collected from Southeast Asian countries [25], and cultivar Kentucky Wonder failed to show any relationship between leaf water potential and water content and produced very few pods despite their higher pollen fertility. In contrast, strains 45817, Ishigaki-2, and 3028520 and cultivars Kurodane Kinugasa and Haibushi maintained relatively higher leaf water content with declining water potential and set a larger number of pods [50]. Osmotic adjustment and cell wall elasticity enable the plants to maintain higher water content, turgor, and other turgor-related processes during water deficit [53,54]. This allows plant organs to survive longer in tolerant than in sensitive types. The cultivars with a smaller midday drop in leaf water content showed a higher pod-setting ratio and consequently had higher yield than the plants with a larger midday drop in leaf water content [55].
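The association just noted, between the midday drop in leaf water content and the pod-setting ratio across cultivars, is a simple bivariate relationship, and the sketch below shows how it can be quantified with a Pearson correlation. The eight value pairs are hypothetical illustrations, not measurements from the experiments reviewed here.

```python
# A sketch of quantifying the association noted above: cultivars with a
# smaller midday drop in leaf water content tend to show a higher
# pod-setting ratio. The value pairs below are hypothetical.
from scipy.stats import pearsonr

midday_lwc_drop = [4.0, 5.5, 6.0, 7.5, 9.0, 11.0, 12.5, 14.0]  # % drop
pod_set_ratio = [62.0, 58.0, 55.0, 50.0, 44.0, 38.0, 33.0, 30.0]  # %

r, p = pearsonr(midday_lwc_drop, pod_set_ratio)
print(f"r = {r:.2f}, p = {p:.4f}")  # a strong negative association
```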
Genotypic Differences in Drought Tolerance in Relation to Vegetative Growth

The common bean cultivars display distinct responses to prolonged drought stress under field conditions. The responses of photosynthetic parameters and shoot extension to leaf water status are related to soil water content. A decrease in soil water causes a decline in leaf water status. The high-yielding cultivars display a smaller reduction in leaf water content but a larger reduction in leaf water potential than the poor yielders. Such differences in leaf water content and leaf water potential may arise from differences in osmotic adjustment [48,56,57] and cell wall elasticity [53]. Coyne et al. [58] argue that a steeper leaf water potential gradient from soil to plant may enhance the ability of the plants to extract soil water at low soil water content. The reduction in leaf water potential due to water stress is linearly correlated with reductions in shoot extension rate and leaf water content.

A discriminant analysis revealed that the five cultivars display two distinct types of responses [59]. One group includes the cultivars Haibushi and Kurodane-Kinugasa and strain Ishigaki-2, which showed a large reduction of about 16-20% in both shoot extension and water potential; they also produced a higher number of pods per plant and seed yield than cultivar Kentucky Wonder and strain 92783. Kentucky Wonder and 92783, which form a separate group, displayed a comparatively smaller reduction (4-8%) in both water potential and shoot growth. In contrast, the former group displayed a smaller reduction in leaf water content, while the latter group showed a larger reduction in leaf water content. This suggests that tissue water content is kept high by restricting excessive vegetative growth and by a large reduction in water potential. The reduction in shoot growth due to stress contributes to a build-up of water-economizing traits, such as specific leaf weight and succulence index.

Seasonal Performance of Cultivars

The performance of the common bean cultivars Haibushi, Kentucky Wonder, and Kurodane Kinugasa and the strains Ishigaki-2, 45817, 92783, 86884, and 3028520 was evaluated between 2003 and 2005 in many field and controlled-environment experiments during the winter and summer seasons. Across the seasons, days to pod formation was positively associated with the number of pods per plant, seeds per pod, seed weight, and yield (r > 0.97). Conversely, among the cultivars/strains, a shorter duration to podding or flowering resulted in a higher number of pods per plant (r = 0.93) and number of seeds per pod (r = 0.82). Haibushi and Ishigaki-2 consistently produced a higher number of pods per plant and seed yield across the seasons and environments than the remaining cultivars. The number of pods per plant is the most important yield attribute and is precisely determined by thermal units and the duration between emergence and flowering. Porfirio and James [44] report that a high partitioning index (chiefly the harvest index) shows high heritability under drought stress in the common bean. Thus, this character can be evaluated as a source of genetic variation for adaptation to high temperature and drought.
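The discriminant analysis mentioned earlier in this section separated the five cultivars into two response groups on the basis of their stress-induced trait reductions. The sketch below illustrates that kind of grouping with a linear discriminant analysis; the trait values and group labels are hypothetical stand-ins chosen to mimic the reported ranges (16-20% versus 4-8% reductions), not the published data of [59].

```python
# A sketch of a discriminant analysis like the one cited above [59]:
# cultivars are separated into two response groups from their
# stress-induced reductions (%) in shoot extension, leaf water potential,
# and leaf water content. All values and labels here are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

cultivars = ["Haibushi", "Kurodane-Kinugasa", "Ishigaki-2",
             "Kentucky Wonder", "92783"]
# columns: shoot extension, water potential, leaf water content
X = np.array([[18.0, 19.0, 3.0],
              [16.5, 17.0, 4.0],
              [20.0, 18.5, 3.5],
              [6.0, 5.0, 12.0],
              [8.0, 7.5, 14.0]])
y = [1, 1, 1, 0, 0]  # 1 = tolerant group, 0 = sensitive group

lda = LinearDiscriminantAnalysis().fit(X, y)
for name, group in zip(cultivars, lda.predict(X)):
    print(f"{name}: group {group}")
```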
Morphological Characters and Partitioning for Adaptation to High Temperature

The partitioning of dry matter (the ratio of the dry weight of individual parts to that of total dry matter) was analyzed in the common bean at four temperature regimes (24/20, 27/23, 30/26, and 33/29°C) [60,61]. Haibushi, a heat-tolerant cultivar, has a higher pod weight per plant, number of pods per plant, average pod weight, pod set ratio, number of branches, and rate of biomass allocation to pods, but lower rates of biomass allocation to leaves, stems, and roots, than Kentucky Wonder, a heat-sensitive cultivar, across all temperature regimes [61]. A sharp decline in dry matter partitioning to pods is observed at 33/29°C [60].

In the temperature range of 24/20 to 30/26°C, Haibushi showed higher partitioning to pods than Kentucky Wonder, independent of temperature. In contrast, Kentucky Wonder showed higher partitioning to pods at 27/23°C than at 24/20°C. These results show that higher biomass allocation to pods and higher pod set in branches, which vary with cultivar and temperature, play an important role in achieving a higher harvest index in the heat-tolerant compared to the heat-sensitive cultivars. Konsens et al. [27] recognize that high night temperature promotes branching in the common bean. Drought stress induces genotypic variation in shoot biomass accumulation, pod and seed number, and the biomass partitioning index [43,44].

Concluding Remarks

Our results reveal that leaf water content is involved in heat and drought tolerance in the common bean, but the supporting system for maintaining high water content is unclear. Leaf water content is better correlated with leaf vapor pressure deficit, internal CO2 concentration, and leaf conductance than with water potential. Therefore, plant water status can be explained better in terms of leaf water content in the common bean. Evaluation of the association between (a) the number of pods per plant and seed yield and (b) the midday drop of leaf water content provides clear evidence that leaf water content is responsible for the genotypic variations in heat and drought tolerance. A small reduction in leaf water content is displayed by the tolerant cultivars, which show larger reductions in shoot extension and leaf water potential than the sensitive cultivars. Therefore, we can conclude that leaf water content is an important physiological trait for improved productivity and that it can be used as a screening tool for heat and drought tolerance in the common bean.
Use of infographics as a health-related knowledge translation tool: protocol for a scoping review

Introduction Efforts to bridge the know-do gap have paved the way for development of the field of knowledge translation (KT). KT aims to understand how evidence use can best be promoted and supported through different activities. For dissemination activities, infographics are gaining in popularity as a promising KT tool to reach multiple health research users (eg, health practitioners, patients and families, decision-makers). However, to our knowledge, no study has yet mapped the available evidence on this tool using a systematic method. This scoping review will explore the depth and breadth of evidence on infographics use and its effectiveness in improving research uptake (eg, raising awareness, influencing attitudes, increasing knowledge, informing practice and changing behaviour).

Methods and analysis We will use the scoping review methodological framework first proposed by Arksey and O'Malley (2005), improved by Levac et al, and further refined by the Joanna Briggs Institute (2020). The search will be conducted in MEDLINE, Cumulative Index to Nursing and Allied Health Literature, PsycINFO, Social Science Abstracts, Library and Information Science Abstracts, Education Resources Information Center, Cairn and Google Scholar. We will also search for relevant literature from the reference lists of the included publications.
Two independent reviewers will select the studies. All study designs will be eligible for inclusion, with no date or publication status restrictions. The included studies will have evaluated infographics that disseminate health research evidence and target a non-scientific audience. A data extraction form will be developed and used to extract and chart the data, which will then be synthesised to present a descriptive summary of the results.

Ethics and dissemination Ethics approval is not required. To inform the research and KT communities, various dissemination activities will be developed, including user-friendly KT tools (eg, webinars, fact sheets and infographics), open-access publication and presentations at KT events and conferences.

Strengths and limitations of this study
► This review will adhere to the scoping review methodological framework first proposed by Arksey and O'Malley (2005), as well as to guidelines from the Joanna Briggs Institute (2020).
► To reduce bias and errors, this review will include multiple reviewers in all phases of study selection and data extraction.
► The scope of this review will be limited, in that only literature published in English and French will be included.
► Following accepted scoping review guidelines, this review will not formally assess the quality of the included studies, limiting our ability to assess the strength of existing evidence.

BACKGROUND

Knowledge translation

Efforts to mobilise vast amounts of research results and evidence-based information have paved the way for development of the knowledge translation (KT) field. [1][2][3] The Canadian Institutes of Health Research defines KT as 'a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically sound application of knowledge' to improve health, health services delivery and the healthcare system. 4 KT science aims to understand how evidence use can best be promoted and supported through different KT activities. 5 The choice of activities will vary depending on KT objectives (eg, raising awareness, improving action through practice change among professionals, influencing political decision making, mobilising public action), knowledge users' needs, implementation context and the nature and type of knowledge to be shared. 6 In this study, we will focus on dissemination activities that require expertise in plain-language communication and popularisation. 1 7 8 The primary goal of dissemination activities is to 'make new knowledge understandable and accessible so as to effectively reach the groups of actors concerned' (p. 30). 8
18 While no single definition has gained wide acceptance, an infographic is often understood as an eye-catching one-page document that uses striking and engaging visuals to communicate complex evidence-based information in an attractive and easily understandable way. 17 19 20 An infographic 'uses visual cues, illustrations and large typography to display facts in a long, vertical orientation, and are distributed through print media, embedded into websites, and shared on social media' (p. 2). 21 It usually presents information in a logical manner to tell a story. 13-15 22 Infographics are ubiquitous and used by many different industries and sectors: business, environment, food, finance, politics, and the healthcare sector, among others. 14 Their purpose is to capture users' attention, help them better understand the information presented, increase their ability to retain and recall the message, and encourage them to act in accordance with the information. 23 Infographics are thus gaining ground as a promising research or health information dissemination tool to reach multiple potential knowledge users, such as health practitioners, patients and families, decision-makers and community members. Several research community initiatives have been aimed at producing and distributing infographics in scientific journals or on social media (eg, Twitter, Facebook, LinkedIn, Pinterest, Instagram). Moreover, with the recent emergence of user-friendly software for producing infographics, they have become the go-to tool in many contexts, targeting different audiences and using a variety of formats and designs. Thus, research on infographics is essential to better understand their real effectiveness in improving knowledge uptake and to highlight best practices for designing, producing and sharing them. In fact, many empirical studies have explored the use of infographics as an intervention tool for disseminating research results or evidence-based information. 20 [24][25][26] Purpose To our knowledge, no knowledge synthesis has been conducted using a methodology that is both systematic and inclusive of all study designs and evidence sources to map the available evidence on the effectiveness of infographics in supporting dissemination. Although a review of literature was produced related to this topic, 27 our review differs in that we use a systematic methodology specific to scoping reviews, include all study designs, and add references published since 2015, to capture the important number of new studies using infographics in recent years. Our overarching goal is to explore the depth and breadth of evidence about the use and effectiveness of infographics as a KT intervention tool to improve knowledge uptake in health (eg, raising awareness, influencing attitudes, increasing knowledge, informing practice, changing behaviour). To produce an evidence synthesis, we will conduct a scoping review. This approach is recommended when the purpose is, for example, to clarify key concepts and definitions in the literature, to identify key characteristics or factors related to a concept, or to examine how research is conducted on a certain topic. 28 According to the Canadian Institutes of Health Research, a scoping review is 'undertaken when feasibility is a concern-either because the potentially relevant literature is thought to be especially vast and diverse (varying by method, theoretical orientation or discipline) or there is a suspicion that not enough literature exists' (p. 34). 
29 As such, a scoping review is useful to identify knowledge gaps that might be addressed in future research. METHODS AND ANALYSIS To guide our methodology, we will primarily use the scoping review methodological framework first proposed by Arksey and O'Malley, 30 improved by Levac et al 31 and further refined by the Joanna Briggs Institute. 32 A scoping review includes six key phases: (1) identifying the research questions; (2) identifying relevant studies; (3) selecting studies; (4) charting the data; (5) collating, summarising, and reporting the results and (6) consulting with relevant experts. This protocol is congruent with the Preferred Reporting Items for Systematic reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR), as will be the reporting of the scoping review. 33 This scoping review protocol is inspired by and based on previous scoping reviews on similar KT activities and tools. 34 35 Stage 1: identifying the research questions The first stage is to identify research questions related to the purpose of this study. As stated earlier, this scoping review is aimed at determining the scope of evidence on infographics use as a KT intervention tool to disseminate research results or evidence-based information (in health-related sectors) to those who can benefit. Table 1 describes the core elements of the scoping review based on the Population-Concept-Context (PCC) framework. 32 Open access We formulated five specific research questions to guide this review. Because the scoping review process can be iterative, we will adopt a reflexive approach and will revise research questions, if needed, as we become more familiar with the body of evidence. Question 1: what is an infographic? Given the recent popularity of infographics for KT, and to clarify the nature of this tool, we want to know more about the terms and definitions put forward in the literature to characterise infographics. We will also document the theories or conceptual frameworks most used to study infographics (eg, dual-coding theory, cognitive load theory, theory of planned behaviour, etc.). Question 2: why are infographics used, for whom and what do they contain? Next, we will identify the main characteristics of the studied infographics, such as their goals (eg, to raise awareness, influence attitudes, increase knowledge, inform practice, change behaviour), the nature of their content in relation to the information presented, their target audiences, the process used to develop the tool, as well as the visual appearance and format of the infographics. We will use the basic principles of public health infographic design (eg, coherence, colours, alignment, visual hierarchy, use of charts, imagery, headings) as a general framework to extract data related to the visual quality of the infographics in the selected studies. 12 Question 3: how is research conducted in the field of health infographics? We aim to produce a portrait of how empirical studies on infographics are designed. From each of the selected studies, we will extract and analyse data related to its research design (eg, objectives, methods, comparator(s), study procedure), study population, sample size, indicators (outcomes of interest), measurement tools and types of analyses. We will also document how the infographics were delivered in the studies (eg, online vs printed infographic, targeted mail, social media). Question 4: how effective have infographics been in achieving their goals? 
We will document the available evidence on the effectiveness of infographics as a KT intervention in relation to the objectives of the infographics used. The potential of this tool will be discernable to the extent that the studies will have demonstrated their infographics' effectiveness in relation to outcomes of interest. Finally, we will document the authors' conclusions regarding perceived barriers and enablers of infographics effectiveness.

Question 5: what are the knowledge gaps and future research needs?

With this last question, we aim to uncover persisting knowledge gaps. To do this, we will describe the main limitations of the selected studies, with a view to discerning any questions that remain unanswered. We hope to make recommendations on research needs to further advance knowledge.

Stage 2: identifying relevant studies

Search strategy

The search strategy was developed by the first author (EMC) with a senior information specialist. It was then circulated to the research team and further refined. Search terms will include keywords and terms related to: (1) KT (eg, research dissemination, health communication, knowledge transfer) and (2) infographic (eg, informational graphic, data visualisation, visual graphic) (see online supplemental appendix 1). To capture as many relevant publications as possible, the list of terms will be iteratively revised after searching the databases. The search strategy will not be limited by study design, year of publication or publication status. Searches will be limited to English and French language publications, due to resource constraints. The search strategy for the MEDLINE database is presented in online supplemental appendix 2. It will be adapted for the other databases and will also be available from the corresponding author on request. The search strategy will be validated using the Peer Review of Electronic Search Strategies checklist. 36

Information sources

A systematic search of the published and grey literature will be conducted to identify relevant publications. We will search the following electronic databases from inception onwards: MEDLINE, Cumulative Index to Nursing and Allied Health Literature, PsycINFO, Social Science Abstracts, Library and Information Science Abstracts, Education Resources Information Center and Cairn. These databases were chosen to capture the most comprehensive body of literature possible. The grey literature (eg, reports, conference proceedings, theses, working papers, evaluations) will be searched using Google Scholar and Google Web search engines. Reference lists of key publications will also be handsearched by the review team to capture any paper missed in the electronic searches. The search in the databases will be conducted by our information specialist. Results will be imported into Covidence, a systematic review software programme, and duplicate citations will be removed before the study selection process.
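Covidence handles the deduplication step automatically; purely for illustration, the sketch below shows the same idea in miniature, matching records exported from different databases by DOI or, failing that, by a normalised title. The record structure and field names are hypothetical and do not correspond to any particular database export format.

```python
# A sketch of the deduplication step performed before screening: citations
# exported from several databases are merged and duplicates are dropped by
# DOI or by a normalised title. Field names here are hypothetical.
import re

def norm_title(title: str) -> str:
    """Lowercase a title and strip punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or norm_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

medline = [{"doi": "10.1/abc", "title": "Infographics in health care"}]
cinahl = [{"doi": "10.1/abc", "title": "Infographics in Health Care."}]
print(len(deduplicate(medline + cinahl)))  # 1
```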
Stage 3: selecting studies

The study selection process will consist of two stages: (1) title and abstract screening and (2) full-text screening by two reviewers, independently. We will use Covidence to manage these two stages of selection. Before beginning the screening, the eligibility criteria (inclusion and exclusion) will be pilot tested on a random sample of publications and modified if low inter-reviewer agreement is observed (eg, a kappa statistic below 60%). If the level of agreement is acceptable, the two reviewers will independently screen the titles and abstracts of all publications retrieved to determine whether they are eligible for full review. The reviewers will meet regularly to discuss uncertainties related to eligibility criteria and to resolve differences in study selection, with a view to ensuring inter-reviewer reliability and reaching consensus. Publications identified as potentially relevant to this scoping review will be retrieved in full text. After completion of the first stage and prior to the full-text review, the two reviewers will meet to revisit the scope of the review and to refine or extend inclusion and exclusion criteria, if necessary. They will also meet regularly during the second stage to discuss and resolve differences. In cases of unresolved decisions related to the inclusion of a study at any stage, a third researcher will adjudicate. A flowchart will be produced using the PRISMA template to report on the selection process (figure 1).

Inclusion criteria

The inclusion criteria are based on the PCC framework (see table 1). As such, we will include studies that:
► Empirically evaluate an infographic tool (ie, one that includes textual and visual content).
► Disseminate research results or other health-related information.
► Target a non-scientific audience to improve knowledge use.
All study designs will be eligible for inclusion, with no publication date or status restrictions. Relevant publications that do not meet these inclusion criteria (eg, theoretical papers on information design principles or visual literacy) will be held in a separate folder; if appropriate, they will be used to support data analysis and interpretation.

Exclusion criteria

We will exclude studies that:
► Do not focus on health-related issues.
► Target children, such as primary school students.
► Concern one type of graph or chart (eg, bar charts, forest plots, three-dimensional graphs).
► Only address interactive data visualisation tools (eg, video, apps, websites).
► Use health data (eg, personal data contained in electronic health records).
► Use infographics as a form of therapy or clinical intervention.
► Focus on developing data visualisation skills.
► Do not make the evaluated infographic tool available.
► Are published in languages other than French and English.

Stage 4: charting the data

After completing the study selection process using Covidence, we will develop a data extraction form using Microsoft Excel to capture the data of interest from the selected studies. Two reviewers will pilot test the form on a random sample of the included studies (10%). They will then meet with the research team to discuss uncertainties and additional potentially relevant information to be included in the form. Data from the remaining studies will be abstracted by one reviewer and verified by a second reviewer to ensure correctness and completeness.
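Referring back to the pilot testing of the eligibility criteria in Stage 3, the sketch below illustrates how inter-reviewer agreement can be checked with Cohen's kappa against the protocol's 60% threshold. The two decision vectors are hypothetical.

```python
# A sketch of the inter-reviewer agreement check used when pilot testing
# the eligibility criteria: Cohen's kappa over two reviewers' independent
# include/exclude decisions. The decision vectors are hypothetical.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # 1 = include, 0 = exclude
reviewer_2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"kappa = {kappa:.2f}")
if kappa < 0.60:  # the protocol's threshold for revising the criteria
    print("Low agreement: revise eligibility criteria before screening.")
```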
The data extraction form will be iteratively revised as necessary, to ensure its rigour and ability to capture all relevant data to answer the review questions. Table 2 presents the data to be extracted. Given that the aim of a scoping review is primarily to identify gaps in the evidence base, and consistent with guidance on conducting scoping reviews, we will not conduct a critical appraisal of the selected studies.

Stage 5: collating, summarising and reporting the results
The synthesis stage of this review will involve producing a descriptive summary and thematic analysis of the extracted data. 31 To ensure rigour, two reviewers will conduct the analysis with input from collaborators during the process. A descriptive summary of the publications' characteristics (year of publication, country of origin, health topic and type of article) will be presented using frequencies and percentages. We will also prepare descriptive summary tables of all data extracted from included studies that are aligned with our research questions (based on the research question variables presented in table 2). These tables will map key findings regarding infographic definitions and theories used, characteristics of the studied infographics (goals, content, target audience, visuals and format, development process), characteristics of the research designs, outcomes of interest used to measure the infographics' effectiveness, main results, author conclusions and future research needs. We will prepare a qualitative descriptive summary to accompany the tabulated results and describe how they relate to our research questions. Finally, if the extracted data allow it, a more in-depth qualitative analysis will be conducted to discuss or nuance the evidence of effectiveness in light of potential barriers and enablers identified by the authors. We will use the PRISMA-ScR to guide the final reporting of our results.

Stage 6: consultation
While consultation is optional, it can be a relevant and useful stage of a scoping review process, adding methodological rigour and enhancing the validity and usefulness of the review results. 31 37 Given that all authors of this protocol are members of a multidisciplinary research team on KT in Canada (RENARD team), we will mobilise our network. We will develop a consultation panel made up of KT researchers, including graduate students and practitioners. All RENARD members have expertise in the KT research field and/or in developing and implementing KT activities to improve knowledge uptake. Input from these informants will be essential to: (1) provide additional references to include in the review; (2) contribute valuable insights into our preliminary results; and (3) develop, contextualise and validate recommendations based on the results of our scoping review (eg, research priorities, criteria for developing effective infographics). The consultation exercise will consist of two focus groups (one on preliminary results and one at the final stage) with approximately 10 experts per group.

Patient and public involvement
Patients and members of the public were not involved in the conception and design of this protocol.

ETHICS AND DISSEMINATION
To our knowledge, this will be the first comprehensive and systematic scoping review on the use and effectiveness of infographics as a KT intervention tool to improve knowledge uptake in the health sector. This review will contribute to both dissemination science and practice.
In summary, we will identify gaps in the literature as well as research areas that require systematic review or primary research. This scoping review will be helpful not only to improve research carried out in this field (eg, recommendations for study designs, indicators, measurement tools), but also to offer preliminary guidelines to those planning to use infographics for KT. This review will enable us to describe what an infographic is and what form(s) this tool can take (offering a common terminology and definition in the KT field), to identify in which contexts infographics can be effective and for what purposes, and to identify key principles to consider when developing an infographic for KT. The present study is exempt from ethics approval because it involves no patient or personal data collection. After completion of the search strategy and data extraction process in the spring, the scoping review results are expected to be ready by August 2021. We will then develop a KT plan to disseminate the results. The main objectives will be to inform the research and KT communities on the state of knowledge on this increasingly popular tool and to raise awareness of its potential usefulness (or non-usefulness) in certain contexts, depending on the conclusions of our review. To achieve these objectives, we will use a combination of user-friendly KT activities such as webinars, fact sheets, summaries, and infographics. They will be widely disseminated via our research team's website (www.equiperenard.org), newsletters and social media. Results will also be published in an open-access peer-reviewed international journal and presented in relevant KT conferences or events (eg, Canadian Knowledge Mobilization Forum).
Mycoplasma hominis and Ureaplasma urealyticum infections after knee arthroplasty: A case report
Rationale: Artificial joint infection caused by Mycoplasma hominis and Ureaplasma urealyticum is rare and has not previously been reported. Patient concerns: A 59-year-old man underwent left total knee arthroplasty for 1 year of pain in the left knee joint. The indwelling urinary catheter was removed 48 hours after the surgery. On day 8 after the surgery, the patient had fever, increased skin temperature, swelling and redness around the surgical site, and a positive floating patella test. Vancomycin, Ciprofloxacin and Linezolid were administered empirically. An evident decrease in C-reactive protein was observed after Linezolid administration, but there was no significant improvement in clinical symptoms. Microbiome sequencing was performed, resulting in a diagnosis of M hominis and U urealyticum infection. The patient was then treated with Doxycycline for the following 3 months. During the 11-month outpatient follow-up, there was no evidence of recurrence of infection. Diagnosis: Microbiome sequencing was performed, resulting in a diagnosis of M hominis and Ureaplasma urealyticum infection. Interventions: The patient recovered following treatment with Doxycycline over the following 3 months. Outcomes: During the 11-month outpatient follow-up, there was no evidence of recurrence of infection. Lessons: M hominis and U urealyticum are common pathogens of urinary system infections, but they are rare in osteoarticular infections. In cases of fever; swelling, heat and pain around the surgical site; joint effusion; negative blood cultures; and lack of response to antibacterial agents that target the cell wall, infection with atypical bacteria such as Mycoplasma and Ureaplasma should be strongly suspected.

Introduction
Arthroplasty has become increasingly popular in patients with osteoarthritis or arthritis given its excellent efficacy in alleviating pain and improving joint mobility. Prosthetic joint infection is rare. Negative cultures may result from antibiotic administration before sampling, or from failure of routine methods to identify the responsible pathogens. Mycoplasma hominis and Ureaplasma urealyticum tend to adhere to the mucosal epithelial cells of the urogenital tract, leading to urinary system and gynaecological infections. In addition, they can also cause infections outside the urogenital system, such as sepsis, central nervous system infection, respiratory infection, joint infection and wound infection, mainly in the setting of mucosal injury (eg, mechanical manipulation, surgery or trauma) and immune dysfunction. [1] Here, we report a case of infection caused by both M hominis and U urealyticum after knee arthroplasty.

Case presentation
A 59-year-old man was admitted to Hefei BOE Hospital on May 10, 2021 for 1 year of pain/discomfort in the left knee joint. Admission physical examination revealed body temperature = 36.6 °C, heart rate = 84 beats/minute, respiratory rate = 20 breaths/minute, and blood pressure = 110/75 mm Hg (1 mm Hg = 0.133 kPa). The patient was conscious with a good spirit, continent, and had a normal diet and body weight. Both lower limbs were essentially equal in length. Muscle atrophy of the right lower limb was evident, with muscle strength decreased to grade III, while the sensory function of the limb was normal and the distal vascular supply was good. Mild edema was observed in the left knee joint, but the skin temperature was normal and there was no tenderness.
In addition, the left lower limb was slightly limited in movement, with a range of motion of around 5 to 100° and a sensation of bone friction. The sensory function of the left lower limb was normal and the distal vascular supply was good. The patient was initially diagnosed with unilateral (left) knee osteoarthritis, which was confirmed after admission. The patient was in good general condition and had stable vital signs after admission. There was no evidence of abnormality on preoperative examinations. On May 13, the patient underwent left total knee arthroplasty, with prophylactic Cefuroxime (1.5 g, ivgtt, q8h) administered for 48 hours. A urinary catheter was placed during the procedure and removed 48 hours after the operation. Other treatments included anticoagulation with Nadroparin Calcium (0.4 mL, iv, qd), analgesia with Parecoxib (40 mg, ivgtt, bid) plus Oxycodone Sustained-Release Tablets (10 mg, po, q12h), and cold therapy as an adjuvant to relieve swelling and pain. On May 21, the patient developed a fever, with the body temperature peaking at 39.1 °C (Fig. 1). Laboratory examination revealed significantly increased white blood cells (WBC), high-sensitivity C-reactive protein and erythrocyte sedimentation rate (ESR) (Fig. 1). There was obvious redness around the surgical site, and tenderness and the floating patella test were positive. Movement of the knee joint was good, and the sensory function and distal vascular supply of the lower limbs were good as well. Magnetic resonance imaging (MRI) showed fluid in the joint cavity with swelling of the surrounding soft tissues. On May 21 and 24, knee joint puncture was performed at the surgical site. The puncture fluid was dark red and negative for bacteria and acid-fast bacilli on culture and smear, respectively. In the meantime, negative results were also obtained on blood culture and drug sensitivity testing under aerobic and anaerobic conditions. On May 21, Vancomycin (1.0 g, ivgtt, q8h) and Ciprofloxacin (0.4 g, ivgtt, q12h) were given empirically (Fig. 1). On May 24, the blood concentration of Vancomycin was 10.44 µg/mL. The body temperature failed to fall significantly, still peaking at 39 °C. There was no improvement at the surgical site but progressive aggravation of the redness and swelling. On May 25, Vancomycin was replaced by Linezolid (0.6 g, ivgtt, q12h) [2] and Ciprofloxacin was continued. PIseq™ DNA + RNA sequencing was then performed. The patient had a slight decline in body temperature over the following 5 days, but it still peaked at 37.6 °C. Reductions in inflammatory indicators, especially C-reactive protein (CRP), were also observed (Fig. 1), while the redness and heat pain at the surgical site persisted. On May 30, the PIseq™ DNA + RNA result revealed the presence of M hominis and U urealyticum. Ciprofloxacin was then replaced by Doxycycline with a loading dose of 0.2 g followed by 0.1 g, po, q12h. The body temperature returned to normal the next day, and the regional redness and heat pain were relieved significantly. On June 15, Linezolid was discontinued. By June 17, the clinical symptoms and inflammatory indicators of the patient had gradually recovered to normal. The patient was then discharged and asked to take Doxycycline for 2 more months. During the 11-month outpatient follow-up, there were neither significant adverse drug reactions nor symptoms or signs of reinfection.

Discussion
Mycoplasma is a general term for microorganisms of the class Mollicutes.
It has been established that Mycoplasma species are the smallest self-replicating organisms. Currently, the species most often implicated in human disease include M hominis, M genitalium and U urealyticum. [3] Lacking a cell wall, M hominis and U urealyticum can survive independently and are naturally resistant to antibiotics that target the cell wall, including β-lactams (such as Penicillins and Cephalosporins) and glycopeptides such as Vancomycin. In addition, they cannot be detected by Gram staining. To better characterize infections with both M hominis and U urealyticum after joint arthroplasty, the authors searched CNKI, PubMed and other databases and found no relevant report, either in China or abroad, as of July 2021. Most reported cases were infected with a single pathogen. For example, Xiang et al [4] and Luttrell et al [5] analyzed a total of 7 cases infected with M hominis after arthroplasty, including 4 cases of knee arthroplasty, 2 cases of hip arthroplasty and 1 case of shoulder arthroplasty. In addition, Farrell et al, [6] MacKenzie et al [7] and Sköldenberg et al [8] reported 3 cases of U urealyticum infection after arthroplasty. In most cases of infection, patients present with fever; increases in WBC, high-sensitivity C-reactive protein and ESR; and redness, heat, pain and exudation at the surgical site, and they are generally unresponsive to β-lactams. The specific source of infection remains to be clarified. One possible source is invasive operations on the urinary tract performed before arthroplasty in most patients. Additionally, reproductive system infection with hematogenous dissemination to the surgical site is also a possible source. [9] The case reported here had redness and heat pain at the surgical site after knee joint arthroplasty, accompanied by significant elevation of WBC, ESR and, most markedly, high-sensitivity C-reactive protein. In the meantime, the patient was not receiving any β-lactams and was unresponsive to Vancomycin and Ciprofloxacin. Genitourinary infection, a possible primary source of septic arthritis reported in the medical literature, was ruled out in our case by urine culture. However, invasive manipulation of the urinary system was considered a possible source of the knee joint infection, as the patient carried a urinary catheter during the procedure. Microbiological tests for M hominis and Ureaplasma, such as Gram staining and early routine culture, are generally negative or insensitive, given these organisms' requirements for specialized handling and culture media. [10] Besides, in most hospitals such pathogens are not covered by microbiological testing for infections outside the urogenital system. It is believed that microbiome gene sequencing can be a viable option when cultures are negative. [10] Research has found that nucleic acid tests, such as PCR, are time-saving (within one day) and generally more sensitive to microbiomes than cultures. [11] However, they are not available in a large number of hospitals. In the present case, negative bacterial and fungal results were obtained on two blood cultures and joint puncture fluid smears. Microbiome gene sequencing was then performed, and infection caused by M hominis and U urealyticum was confirmed. It has been established that M hominis and Ureaplasma are generally sensitive to Tetracyclines, Macrolides and Quinolones. Recent studies have revealed that both pathogens are highly resistant to Quinolones, but they still have a high sensitivity (>70%) to Doxycycline and Minocycline. [12]
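The diagnostic reasoning above can be summarised as a simple screening rule for culture-negative prosthetic joint infection: when cultures stay negative and there is no response to cell wall-targeting agents, cell wall-deficient organisms should be suspected and sequencing considered. The sketch below is our own hedged formalisation of that reasoning, not a validated clinical algorithm; all parameter names are hypothetical.

```python
def suspect_cell_wall_deficient_pji(fever: bool,
                                    local_inflammation: bool,
                                    cultures_negative: bool,
                                    cell_wall_agents_failed: bool) -> str:
    """Toy decision rule distilled from this case discussion; not validated.
    'cell_wall_agents_failed' means no response to beta-lactams/glycopeptides
    such as Vancomycin."""
    if fever and local_inflammation and cultures_negative and cell_wall_agents_failed:
        return ("Suspect Mycoplasma/Ureaplasma; consider microbiome gene "
                "sequencing and empiric Tetracyclines")
    return "Continue standard culture-directed workup"

# The constellation seen in this patient:
print(suspect_cell_wall_deficient_pji(True, True, True, True))
```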
Here, the patient had an adequate blood concentration of Vancomycin, but no evident improvement was observed. We reasoned that this might be due to drug resistance or to insufficient local drug concentration. Referring to the literature, we noted that the Vancomycin concentration in bone tissue is low (7-13 µg/mL), whereas the Linezolid concentration can reach 60 µg/mL. Vancomycin was therefore replaced by Linezolid, contributing to reductions in CRP and body temperature. However, the patient remained persistently subfebrile and the surgical site still showed redness and heat pain, so only partial clinical efficacy was achieved. Kenny et al [13] reported that Linezolid was active against M hominis but inactive against U urealyticum at a blood concentration of 8.0 µg/mL (MIC50). Based on this study, we speculated that the patient here responded partially to Linezolid, probably because Linezolid was active only against M hominis at that time. Nevertheless, whether the activity of Linezolid against M hominis depends on its blood concentration could not be determined. Fang et al [14] suggested that Linezolid is not recommended for the treatment of M hominis infections, as they found that a Linezolid blood concentration of over 8.0 µg/mL was associated with a high incidence of thrombocytopenia in the Chinese population, whereas the range of 2 to 7 µg/mL appeared to be safe and effective. In our case, drug sensitivity testing was not performed. Treatment with Ciprofloxacin was ineffective, consistent with the reported resistance of these pathogens to Quinolones. In accordance with Harrison's Infectious Diseases, Doxycycline, to which M hominis and U urealyticum are highly sensitive, was administered to this patient. The next day, the patient had a normal body temperature and showed significant relief of the symptoms of surgical site infection, with reduced inflammatory indicators. Doxycycline was continued for 2 more weeks, and the clinical symptoms and indicators of infection recovered to normal. Linezolid was discontinued on June 15. The patient was discharged on June 17 and asked to take Doxycycline for 2 more months. To conclude, M hominis or U urealyticum infection after joint arthroplasty is rare, and no case of infection caused by both pathogens had been reported previously. The present case suggests that, given negative bacterial/fungal tests and failure of treatment with Vancomycin or β-lactams, M hominis or U urealyticum infection should be considered. In addition, Tetracyclines (such as Doxycycline), to which these pathogens are highly sensitive, can be administered if drug sensitivity test results are not available. Furthermore, microbiome gene sequencing can be an option in cases with negative culture results. It is notable that Linezolid is active against M hominis but inactive against U urealyticum. It is not recommended for treatment, as there is a high risk of thrombocytopenia when the blood drug concentration exceeds 8.0 µg/mL.
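To summarise the concentration reasoning above, the sketch below encodes the cited thresholds (activity against M hominis reported at 8.0 µg/mL, thrombocytopenia risk above 8.0 µg/mL, and an apparently safe and effective window of 2-7 µg/mL) as a simple check. The thresholds come from the cited studies; the function name and structure are ours, for illustration only.

```python
def assess_linezolid_level(conc_ug_per_ml: float) -> str:
    """Classify a linezolid blood concentration against the ranges cited in
    the discussion (Kenny et al. [13]; Fang et al. [14]).
    Illustrative only -- not a dosing tool."""
    if conc_ug_per_ml > 8.0:
        return ("> 8 ug/mL: reported activity vs M hominis, "
                "but high thrombocytopenia risk")
    if 2.0 <= conc_ug_per_ml <= 7.0:
        return "2-7 ug/mL: range reported as safe and effective"
    return "Outside the ranges discussed; interpret with caution"

print(assess_linezolid_level(6.5))
```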
Human Papillomavirus-associated oropharyngeal cancer: an observational study of diagnosis, prevalence and prognosis in a UK population

Background
The incidence of Human Papillomavirus (HPV) associated oropharyngeal cancer (OPC) is increasing. HPV-associated OPC appears to have a better prognosis than HPV-negative OPC. The aim of this study was to robustly determine the prevalence of HPV-positive OPC in an unselected UK population and correlate HPV positivity with clinical outcome.

Methods
HPV testing by GP5+/6+ PCR, In Situ Hybridisation (ISH) and p16 immunohistochemistry (IHC) was performed on 138 OPCs diagnosed in South Wales (UK) between 2001–06. Kaplan-Meier analysis was used to correlate HPV status with clinical outcome.

Results
Using a composite definition of HPV positivity (HPV DNA and p16 overexpression), HPV was detected in 46/83 (55%) samples where DNA quality was assured. Five-year overall survival was 75.4% (95% CI: 65.2 to 85.5) in HPV-positives vs 25.3% (95% CI: 14.2 to 36.4) in HPV-negatives, corresponding to a 78% reduction in death rate (HR 0.22, p < 0.001). HPV-positives had less locoregional recurrence, but second HPV-positive Head and Neck primaries occurred. Poor quality DNA in fixed pathological specimens reduced both HPV prevalence estimates and the prognostic utility of DNA-based HPV testing methods. As a single marker, p16 was least affected by sample quality and correlated well with prognosis, although it was not sufficient on its own for accurate HPV prevalence reporting.

Conclusions
This study highlights the significant burden of OPC associated with HPV infection. HPV-positive cases are clinically distinct from other OPC and are associated with significantly better clinical outcomes. A composite definition of HPV positivity should be used for accurate prevalence reporting, and up-front DNA quality assessment is recommended for any DNA-based HPV detection strategy.

Background
Squamous cell carcinoma of the oropharynx, affecting the tonsils, base of tongue, pharyngeal wall and soft palate, has increased in incidence in developed countries over the last 20 years [1,2]. This increase has been attributed to Human Papillomavirus (HPV). In Sweden, a doubling in tonsil cancer incidence prompted reports of an 'epidemic of a virus-induced carcinoma' [1]. HPV prevalence rates in oropharyngeal cancer (OPC) range from 36% to >80%, varying with geographical location and anatomical subsite [3][4][5]. It is likely that population-specific incidence rates of HPV-induced OPC are influenced by oral HPV infection rates, sexual behaviour, and rates of smoking and drinking. Among HPV-positive OPC, HPV16 is the predominant genotype, accounting for approximately 95% of cases [6]. Whereas virtually all cervical cancers are HPV-induced, OPC has two distinct aetiologies: consumption of tobacco and alcohol, or HPV infection, which may co-exist [5]. Until recently, the relevant aetiological agent in an individual patient was unknown, but making this distinction is clinically important because HPV-positive OPC is associated with better response to chemotherapy, chemoradiotherapy (CRT) and radiotherapy (RT) and has a better prognosis compared to HPV-negative OPC [6][7][8][9][10]. Better outcomes have also been reported after surgery, suggesting that improved outcomes in HPV-positive patients are independent of treatment received [3,11]. Other factors, particularly tobacco smoking, may adversely affect prognosis in HPV-positive OPC [5,7].
A variety of HPV detection methods are available, and differences in test characteristics may partly explain the variation in HPV prevalence rates reported in different studies [12]. Because the presence of HPV DNA in a tumour per se is not evidence of a causal relationship, a marker of HPV activity is needed to diagnose HPV-induced OPC. A widely used 'surrogate' biomarker of HPV activity is p16 immunohistochemistry (IHC). p16 is a cyclin-dependent-kinase inhibitor and is induced as a consequence of inhibition of Rb activity by the HPV E7 oncoprotein (in most other Head and Neck (H&N) cancers, p16 is down-regulated). Two main diagnostic algorithms have emerged for use in the clinical setting: both advocate screening by p16 IHC followed by detection of HPV DNA, either by consensus PCR or In Situ Hybridization (ISH) [13,14]. This study aimed to determine the prevalence of HPV-associated OPC in South Wales (UK) and investigate the diagnostic and prognostic utility of GP5+/6+ PCR enzyme immunoassay, ISH and p16 IHC. Most published data relating HPV status to clinical outcome are based on clinical trial cohorts and, while this enables collection of high quality clinical data, it results in exclusion of some patient groups, including palliative patients. This study did not systematically exclude any patients and demonstrates the impact of HPV status on outcome in a 'real-world' population of patients with OPC. This provides insight into the behaviour and late outcomes of HPV-positive OPC.

Study population
Patients diagnosed with OPC (ICD-10 codes C01, C05.1, C05.2, C09, C10) in South Wales (UK) between 1/9/2001 and 31/8/2006 were identified from pathology databases. Data on clinicopathological characteristics and outcome were obtained from an electronic health record used at the regional Cancer Centre. Deaths in peripheral hospitals were automatically fed into the electronic record. Where cause of death was not documented on the electronic record, it was elucidated by review of patient notes, review of clinic letters and/or discussion with General Practitioners. For every patient who was alive at the point of analysis but had not been seen in hospital for the preceding 12 months (eg, had been discharged from follow-up), the study team contacted the General Practitioner to ensure that the patient was indeed still alive with no evidence of disease recurrence. Where smoking history was available, patients were classified as current, never or previous smokers (stopped smoking >3 months before diagnosis). Locoregional recurrence was defined as recurrence at the primary site and/or cervical lymph nodes after a complete response to treatment. One representative formalin fixed paraffin embedded (FFPE) block was retrieved for each case. Histological diagnosis of squamous carcinoma of the oropharynx was confirmed by two pathologists with a special interest in OPC (MR and ST). Approval for the study was obtained from South East Wales Research Ethics Committee (ref: 09/WSE03/44).

HPV detection
DNA extraction and assessment of sample adequacy
Sectioning was performed with appropriate precautions to prevent inter-block DNA contamination (eg, thorough cleaning of the microtome, use of fresh blades). DNA was extracted from 2 × 10 μm sections of FFPE biopsies using the Qiagen FFPE Kit (Qiagen, Hilden, Germany). DNA quality was assessed by PCR for a 119 bp fragment of the human HMBS gene.
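The DNA adequacy step amounts to a simple gate: only samples in which the 119 bp HMBS control fragment amplifies are treated as evaluable for DNA-based HPV testing. A minimal sketch with invented sample records (the field names are ours, not the study's data dictionary):

```python
# Invented sample records; 'hmbs_positive' stands for successful amplification
# of the 119 bp HMBS control fragment (ie, DNA quality assured).
samples = [
    {"id": "S01", "hmbs_positive": True},
    {"id": "S02", "hmbs_positive": False},  # degraded DNA: exclude from PCR/ISH calls
    {"id": "S03", "hmbs_positive": True},
]

evaluable = [s["id"] for s in samples if s["hmbs_positive"]]
print("Evaluable for DNA-based HPV testing:", evaluable)  # ['S01', 'S03']
```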
To control for contamination during sectioning, regular sections were cut from a blank paraffin block and processed in parallel with the tumour sections. Positive (HPV16-positive CaSki cell line DNA) and negative (water) controls were included for each PCR run. All blanks and negative controls tested negative for HMBS and HPV DNA.

GP5+/6+ PCR enzyme immunoassay (EIA)
Samples were genotyped for HPV DNA by GP5+/6+ PCR EIA. HPV typing was performed in two stages: the first stage used cocktails of probes for 14 high-risk and 6 low-risk HPV types; PCR was then repeated on positive samples, which were then typed with individual probes [15]. This assay detects DNA from high-risk HPV types, including HPV16.

High-risk HPV In Situ Hybridization
CaSki cells (HPV16-positive; 200-400 copies/cell), HeLa cells (HPV18-positive; 10-50 copies/cell) and C-33A cells (HPV-negative) were used as controls. The HR-HPV ISH test was scored as positive if blue reaction product co-localised with the nuclei of malignant cells. Diffuse nuclear and cytoplasmic staining and punctate nuclear staining were scored as positive. Focal specific staining of only part of the tumour section was regarded as positive. Diffuse staining of tumour and stromal tissues, considered to represent non-specific chromogen precipitate, was scored as negative.

p16 immunohistochemistry
p16 IHC was carried out using the CINtec Histology kit (mtm Laboratories, AG, Germany) on a Ventana Benchmark Autostainer. A tonsil SCC with high p16 expression was used as a positive control. The primary antibody was omitted from negative controls. p16 IHC was scored as positive if strong and diffuse nuclear and cytoplasmic staining was present in greater than 70% of the malignant cells. All other staining patterns were scored as negative. All samples were scored independently by two expert H&N pathologists, and discordant cases were reviewed to reach a consensus score. Both ISH and p16 IHC were carried out at the Department of Cellular Pathology, Newcastle, UK, as previously described [14].

Interpretation of HPV test results
A binary classification (positive vs negative) was used to score the p16 IHC and HPV ISH. Stained sections were assessed independently by two pathologists, who met to resolve discordant interpretations and establish a consensus categorization. For PCR-EIA, positivity was defined as giving an absorbance at 405 nm of greater than three times background.

Statistical methods
Overall Survival (OS) analyses were based on time from diagnosis to death; survivors were censored at their last follow-up. Progression Free Survival (PFS) analyses were based on time from diagnosis to first event (locoregional recurrence, distant metastasis or death from any cause); patients without an event were censored at their last follow-up. Analyses of OS and PFS included all patients, irrespective of treatment intent and response to treatment. Kaplan-Meier analysis was used to obtain survival plots and 3- and 5-year survival. The Cox proportional hazards model was used to estimate Hazard Ratios (HR) characterising the independent prognostic significance of single and multiple variables, namely HPV and smoking status/treatment method. Further analyses for variables including age and stage were not performed, as subgroups were insufficient to be statistically robust.

Results
Histology blocks were obtained for 147 cases, representing 83% of patients diagnosed with OPC in South Wales during the period. Nine blocks did not contain sufficient tumour for analysis.
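For readers who want to reproduce this kind of survival analysis, the sketch below shows the standard Kaplan-Meier and Cox proportional hazards workflow using the Python lifelines package. The toy dataset and column names are invented for illustration; they are not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Toy data: follow-up time (years), event indicator (1 = death), HPV status.
df = pd.DataFrame({
    "time":  [4.2, 1.1, 5.0, 0.8, 3.6, 2.4, 4.9, 1.9],
    "event": [0,   1,   1,   1,   1,   1,   0,   0],
    "hpv":   [1,   0,   1,   0,   1,   0,   1,   0],
})

# Kaplan-Meier curve for the HPV-positive subgroup (censored = event 0).
kmf = KaplanMeierFitter()
pos = df[df["hpv"] == 1]
kmf.fit(pos["time"], event_observed=pos["event"], label="HPV-positive")
print(kmf.survival_function_)

# Cox model: exp(coef) for 'hpv' is the hazard ratio for HPV positivity.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```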
Analyses are presented for 138 patients with histologically confirmed squamous OPC. Among cases that tested positive for HPV by GP5+/6+ PCR, 97% (67/69) were positive for HPV16. One case contained HPV33 and one case showed co-infection with HPV18 and HPV56. No low-risk HPV infections were detected.

Influence of DNA quality on HPV detection rate
DNA quality was assessed by PCR for the human HMBS gene, which was amplifiable in 83/138 cases (60%), suggesting high levels of DNA degradation in the other samples. DNA adequacy ranged from 30% in hospitals using unbuffered formalin as a fixative to 96% in those using neutral buffered formalin. HPV testing was carried out on all samples regardless of DNA quality. This revealed a high false negative rate when DNA-based HPV detection methods (PCR and ISH) were used on samples containing poor quality DNA (HMBS-negative); eg, HPV DNA positivity by GP5+/6+ PCR was 23% lower in HMBS-negative than HMBS-positive cases. DNA degradation did not have a significant effect on p16 IHC testing results. Estimated false negative rates are shown in Table 2.

Agreement between HPV testing methods
In samples with good quality DNA (HMBS-positive), the proportion of positive samples was similar when analysed by p16 (47/83 cases, 57%) and GP5+/6+ PCR (49/83, 59%) and slightly lower using ISH (42/83, 51%). When samples containing poor quality DNA were included, prevalence rates by GP5+/6+ PCR and ISH were lower (50% and 43%, respectively), consistent with the presence of false negatives in this group. Concordance between tests was higher for p16 and GP5+/6+ PCR (5% of cases discordant) than for either test with ISH (11% discordant for each). The number of discordant cases increased for each comparison if HMBS-negatives were included, showing that poor DNA quality reduced consistency between HPV testing results, as well as overall estimates of HPV prevalence. Analysis of concordance is shown in Table 3.

Prognostic value of HPV testing methods
Individually, each HPV testing method was highly prognostic for Overall Survival (OS) and Progression Free Survival (PFS). When all cases were included, p16 correlated well with prognosis (point estimate of HR for death 0.24, 95% CI 0.15-0.39), as did ISH (0.27, 95% CI 0.16-0.46) and GP5+/6+ PCR (0.29, 95% CI 0.18-0.47), although all were slightly inferior to the composite definition of HPV positivity (0.22, 95% CI 0.13-0.37), and no test performed significantly better than another. If HMBS-negative cases were excluded, the prognostic value of GP5+/6+ PCR improved to equal that of the composite marker and p16 (HR 0.20), suggesting that GP5+/6+ PCR and p16 are equally prognostic when DNA quality is assured. Hazard Ratios are shown in Table 4.

Patient characteristics
Baseline patient and tumour characteristics are shown in Table 1. The proportion of HPV-positive cases was similar in men and women, although men were more frequently affected overall (75% of cases). HPV-positive patients were 6 years younger (p = 0.002), had better performance status (p < 0.001) and were less likely to be current smokers (p < 0.001) than HPV-negative patients. HPV-positive cancers occurred exclusively in the tonsil (78%) and tongue base (22%), whereas 17% of HPV-negative cancers arose elsewhere in the oropharynx (soft palate, uvula, posterior pharyngeal wall). Overall disease stage was similar, although HPV-positive patients were less likely to present with distant metastases (0% vs 8%, p = 0.04).
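The composite definition of HPV positivity used throughout these results (HPV DNA detected by PCR and/or ISH, plus p16 overexpression) can be written as a one-line rule; a minimal sketch follows, with parameter names of our own choosing:

```python
def composite_hpv_positive(pcr_dna: bool, ish_dna: bool, p16_overexpressed: bool) -> bool:
    """Composite HPV positivity: HPV DNA by PCR and/or ISH AND p16 IHC
    overexpression (strong diffuse staining in >70% of malignant cells)."""
    return (pcr_dna or ish_dna) and p16_overexpressed

# A DNA-positive but p16-negative sample counts as 'equivocal', not HPV-positive.
print(composite_hpv_positive(pcr_dna=True, ish_dna=False, p16_overexpressed=False))  # False
print(composite_hpv_positive(pcr_dna=True, ish_dna=True,  p16_overexpressed=True))   # True
```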
HPV-positive patients were treated radically (curatively) more often than HPV-negative patients (p < 0.001), 19% of whom were treated palliatively. Of radically treated patients, 54% underwent primary surgery (+/− post-operative RT) and 46% underwent primary RT (+/− chemotherapy, CRT), reflecting local practice at the time. There was a trend for more HPV-positive cases to have primary surgery (42/69 cases, 61%) and more HPV-negative cases to have primary RT/CRT (27/48, 56%) (p = 0.068). Patients treated with RT/CRT were older (mean age 61 years) and had poorer performance status than those treated surgically (mean age 52 years), in keeping with the different HPV prevalence in the two groups.

Effect of HPV on survival and recurrence
After a median follow-up from diagnosis of 4.9 years (range 0.1 to 10.1 years), 77 deaths occurred in 138 patients. Overall Survival (OS) by Kaplan-Meier analysis was 59.4% at 3 years (95% CI: 51.2 to 67.6) and 52.2% at 5 years (95% CI: 43.8 to 60.5). No significant difference in survival was seen when HMBS-negative cases were excluded from the analysis and, as a result, HMBS-positive and negative cases were combined for subsequent analyses, although every analysis was repeated in HMBS-positive cases only to ensure that the results were consistent (data not shown). For 126 radically treated patients, OS was 65.1% at 3 years and 57.1% at 5 years. For the 12 palliative patients, median survival was 186 days (6 months), range 28-802 days. At last follow-up, 87 patients (63%) had suffered an event (progression, recurrence or death). For HPV-positive patients, 3- and 5-year Progression Free Survival (PFS) rates were 72.5% (95% CI: 61.9 to 83.0) and 68.1% (95% CI: 57.1 to 69.1), compared to 25.4% (95% CI: 14.3 to 36.5) and 17% (95% CI: 7.4 to 26.5) in HPV-negatives, corresponding to a 75% reduction in the rate of progression, relapse or death associated with HPV positivity (HR 0.25, 95% CI 0.15 to 0.39, p < 0.001) (Figure 1C). As with OS, PFS in patients with equivocal HPV status was intermediate between that of 'true' HPV-positives and negatives. PFS at 3 and 5 years in radically treated HPV-positive patients was 72.5% and 68.1%, compared to 31.2% and 20.8% in HPV-negative patients, corresponding to a 72% reduction in the rate of relapse or death (HR 0.28, 95% CI 0.17-0.46, p < 0.001) (Figure 1D). Kaplan-Meier analysis of survival by HPV and smoking status simultaneously demonstrated a significant survival advantage associated with HPV positivity, regardless of smoking status (Figure 1E). Survival was significantly better in previous and never smokers than in current smokers (HR for death by not currently smoking 0.48, 95% CI 0.28-0.81, p = 0.006) (Figure 1F); Cox regression analysis showed that this was due entirely to their tendency to be HPV-positive. HPV positivity was also associated with better survival regardless of primary treatment modality (surgery or RT/CRT). Although OS and PFS were better in surgically treated patients overall than in those treated with primary RT/CRT (HR 0.5 for surgery; 95% CI: 0.3-0.83, p = 0.007), a higher proportion of the surgical group were HPV-positive (66.7% vs 50%, p = 0.068); using Cox regression analysis to adjust for HPV status, the survival difference between the two groups was no longer statistically significant (HR 0.74, p = 0.36).
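The percentage reductions quoted in these results follow directly from the hazard ratios; as a worked example for overall survival (and analogously for the 75% PFS figure from HR 0.25):

```latex
\text{relative reduction in event rate} \;=\; 1 - \mathrm{HR} \;=\; 1 - 0.22 \;=\; 0.78 \;\approx\; 78\%
```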
Discussion
Biologically relevant HPV infection, defined as the presence of HPV DNA by ISH and/or PCR plus p16 over-expression, was identified in 55% of patients diagnosed with OPC in South Wales (UK), 2001-2006. The survival advantage afforded by HPV in a 'real world' population of patients with OPC, including those managed palliatively, is clearly demonstrated, as is the effect of HPV on the long-term clinical behaviour of the disease. The effect of poor quality DNA in fixed pathological specimens on the diagnostic and prognostic utility of DNA-based HPV testing methods, including ISH, is shown for the first time. p16 expression is not affected by DNA quality and may be utilized as a single marker of HPV infection in clinical practice, although a composite definition of HPV positivity is recommended for accurate HPV prevalence reporting. HPV prevalence rates differ between geographical regions and time periods. The rate in this study (55%) is consistent with international (51.2% (88/172)) and US series (59.4% (192/323)) collected between 2002-2005 [7,17]. It also adds to a picture of regional and temporal variation in HPV prevalence across the UK, where rates of 37.5% (33/88) and 42.7% (77/180) have been reported [12,18]. The number of 'equivocal' cases with discrepant HPV DNA and p16 testing results was significantly lower than in some other studies [7,18], suggesting that a testing algorithm combining PCR and ISH increases sensitivity for HPV DNA detection [14]. Discordant HPV DNA and p16 testing results occurred in 6-7% of cases, showing that p16 alone is not sufficient for studies that aim to accurately report HPV prevalence. Poor quality DNA significantly reduced HPV prevalence estimates using PCR- and ISH-based techniques, because of the occurrence of false negative results in samples containing degraded DNA. Although PCR-based testing protocols routinely incorporate assessment of DNA quality, DNA-based ISH techniques do not, and therefore risk under-estimating HPV prevalence; this may partly explain the lower sensitivity reported for ISH compared to other HPV detection methods in previous studies [17]. However, a recently developed RNA-based ISH test for HPV does include a control for sample quality and shows considerable promise as a diagnostic marker for OPC [19]. The three HPV testing methods evaluated in this study were all good markers of survival, with no test performing significantly better than another. Hazard ratios (HR) for death were: 0.24 for p16, 0.27 for ISH, 0.29 for GP5+/6+ PCR and 0.22 for the composite definition of HPV positivity. Poor quality DNA reduced the prognostic value of DNA-based HPV testing methods, and when poor quality samples were excluded, the prognostic value of GP5+/6+ PCR was similar to that of p16 and the composite marker (0.22). The effect of poor quality DNA on prevalence and prognostication is reduced by using the composite definition of HPV positivity. The clinical implications of HPV positivity in this unselected population of patients are clear. HPV positivity was associated with a 78% reduction in death rate (HR 0.22) and a 75% reduction in the rate of progression, relapse or death (HR 0.25). This effect is greater than in many clinical trial cohorts, due in part (but not entirely) to the inclusion of palliative patients. Survival of radically treated HPV-positive patients was comparable to that reported in a large US study; 3-year OS was 82.6% (95% CI: 73.7 to …) [7].
Similarly low survival figures for HPV-negative OPC have been reported previously [20], and poor performance status (~30% would have been excluded from the US study on this basis) and infrequent use of concurrent chemotherapy (<30% vs 100% in the US study) may have affected outcome in this study. The prognosis of HPV-negative patients was poor regardless of whether they were treated with primary surgery or RT/CRT. The excellent outcomes of HPV-positive patients were independent of smoking status or treatment method. Retrospective analyses have suggested that smoking can negatively affect survival in some HPV-positive patients [5,7,21], and these data have influenced the design of several clinical trials. Although the relatively small cohort (n = 117) with known smoking history, the crude definition of smoking and/or the large effect of HPV status on outcome may have masked the effect of smoking in this study, it is possible that the effect of smoking, particularly past smoking, on outcome from HPV-positive OPC has previously been over-estimated, and this issue must be addressed prospectively in future studies. There was a trend for more HPV-positive patients to undergo primary surgery in this study (p = 0.07); although HPV status was unknown when treatment decisions were made, it is likely that selection of younger, fitter patients for surgery resulted in preferential selection of HPV-positive patients. This highlights the dangers of comparing outcomes from non-randomized studies of surgery and RT/CRT without knowledge of HPV status; randomized trials with mandatory HPV testing are required to assess treatments for OPC in future. Improved outcomes in HPV-positive patients reflected better locoregional control rates. In contrast, rates of distant metastases occurring on follow-up were similar in HPV-positive and negative patients. The occurrence of second HPV16-positive primaries in this study, both in the tongue base and the nasopharynx (EBV-negative), is intriguing. Second HPV-associated cancers in the tonsils and nasopharynx and HPV-positive/EBV-negative nasopharyngeal carcinomas have previously been reported [22][23][24], and it is possible that the lymphoid tissue throughout Waldeyer's ring is particularly susceptible to HPV-induced transformation. Second primaries occurring in patients with a history of HPV-positive OPC should be tested for HPV, and further studies to investigate the frequency and timing of HPV-positive second H&N primaries are required to inform future follow-up protocols. There are several potential limitations to the study. Histology blocks for 83% of OPC patients presenting across South Wales over the study period were included. There were several reasons why other cases were not included: blocks were not collected from a number of smaller centres, there was limited collection from one major centre due to logistical difficulty in identifying the relevant cases, mismatches were observed in coding between registry and pathology databases, and some blocks were missing from pathology archives. There is no reason to suspect systematic bias in the sample, especially given the multi-factorial reasons for samples not being included, but the potential for some bias cannot be completely excluded. The proportion of HPV-positive OPC is likely to have increased since 2006 and the sample has limited geographical representation; thus, while it adds to the picture of HPV prevalence in the UK, caution should be exercised in generalising the findings.
Conclusions
HPV was responsible for the development of 55% of OPCs in this study. Significantly better locoregional control and survival were seen in HPV-positive cases. Given the substantial difference in prognosis, routine assessment of HPV status should be mandated in clinical practice. Standardisation of tests is clearly a significant issue but, as a single marker, p16 IHC appears prognostic and is unaffected by sample DNA quality, making it a useful test
“Sightblind”: Perceptual Deficits in the “Intact” Visual Field
Unilateral visual cortex lesions caused by stroke or trauma lead to blindness in the contralateral visual field – a condition called homonymous hemianopia. Although the visual field area processed by the uninjured hemisphere is thought to be “intact,” it also exhibits marked perceptual deficits in contrast sensitivity, processing speed, and contour integration. Such patients are “sightblind” – their blindness reaches far beyond the primary scotoma. Studies showing perceptual deficits in patients’ intact fields are reviewed and implications of these findings are discussed. It is concluded that the consequences of partial blindness are greater than previously thought, since perceptual deficits in the “intact” field likely contribute to subjective vision loss in patients with visual field defect. This has important implications for vision diagnosis and rehabilitation.

"BLINDSIGHT" AND "SIGHTBLINDNESS" – PECULIARITIES OF PERCEPTION IN PATIENTS WITH VISUAL SYSTEM DAMAGE
Lesions of the visual system result in blindness in parts of the visual field retinotopically corresponding to the damaged tissue. Such visual field loss significantly impairs patients' vision-related quality of life (Gall et al., 2009). Patients with unilateral visual cortex damage caused by trauma or posterior artery stroke are typically blind in the contralesional half of the visual field. However, the ipsilesional visual field, processed by the intact hemisphere, is considered intact and fully functional. Therefore, contralesional blindness is thought to be the main, if not only, cause of subjective visual impairment. Visual field loss is typically measured by perimetry, where simple detection tasks are used to approximate the location and extent of the underlying anatomical damage (Roux et al., 2001). Here, based on contrast threshold values or detection rates, visual field sectors are classified as absolute defect, relative defect, or intact areas (Figure 1). The "absolute" defect (blind field, scotoma) is the area where the subject does not consciously detect any perimetric stimuli. In cortically lesioned patients such blind fields are usually found on the contralesional side. In contrast, in areas of "relative" defect some detection abilities for moving stimuli or stimuli with increased luminance remain. These are typically located at the border of the lesion, but they can also be found deep inside the blind field as "islands of vision" (Fendrich et al., 1992). Both are believed to be the functional representation of partially damaged tissue. Areas of relative defect have also been termed "areas of residual vision" because of their restoration potential (Sabel et al., 2011a). Finally, the visual field area where all perimetric stimuli are detected is considered "intact."
Abbreviations: HRP, high resolution perimetry; RT, reaction time.
Contrary to the assumption that the "intact" field is fully functional, there are indeed perceptual deficits in this part of the visual field. Despite being "normal" in detection ability, the visual field regions corresponding to the uninjured hemisphere are deficient when perceptual functions of patients are tested more thoroughly and compared to an uninjured control group. We now term this phenomenon "sightblindness," leaning on the reverse situation of "blindsight," where residual perceptual capacities exist deep in the field of "absolute" blindness (Pöppel et al., 1973; Weiskrantz et al., 1974; review Cowey, 2010).
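The three-way perimetric classification described above (absolute defect, relative defect/residual vision, intact) can be expressed as a simple rule on per-sector detection rates. The sketch below uses illustrative cut-offs (0% detection = absolute defect, 100% = intact, anything in between = relative defect); real protocols may additionally use luminance thresholds or moving stimuli, so the function and its cut-offs are an assumption for illustration.

```python
def classify_sector(detection_rate: float) -> str:
    """Classify a visual field sector from its perimetric detection rate
    (fraction of stimuli consciously detected, 0.0-1.0). Cut-offs are
    illustrative; actual criteria depend on the perimetry protocol."""
    if detection_rate == 0.0:
        return "absolute defect (scotoma)"
    if detection_rate < 1.0:
        return "relative defect (area of residual vision)"
    return "intact"

for rate in (0.0, 0.4, 1.0):
    print(rate, "->", classify_sector(rate))
```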
We now review the existing evidence of sightblindness and discuss possible mechanisms and implications.

PERCEPTUAL DEFICITS IN THE "INTACT" VISUAL FIELD
Several studies suggest that visual functions are impaired in the "intact," ipsilesional visual field of subjects with unilateral cortical lesions (homonymous hemianopia). Firstly, in comparison to healthy controls, hemianopic patients exhibit elevated contrast thresholds in the ipsilesional visual field (Hess and Pointer, 1989). Secondly, in a task requiring detection of a luminance change on a noisy background, performance of patients in their intact field was characterized by longer reaction times (RT) and more false positive responses when compared to normal controls (Rizzo and Robin, 1996). Thirdly, longer RT in a simple light detection task and a higher double pulse resolution threshold, i.e., the lengthening of the minimally perceivable temporal gap between two light pulses, are yet other signs of visual processing deficits in hemianopia (Poggel et al., 2011). Sightblindness is also observed in tasks demanding more complex processing of visual information. Contour integration in the intact field was probed with a task requiring patients to detect a figure (square) composed of aligned Gabor patches embedded in a background of randomly distributed Gabor patches (Paramei and Sabel, 2008; Schadow et al., 2009). Compared to control subjects, patients needed longer presentation times to accurately detect the target stimuli, and yet their detection accuracy was worse.

FIGURE 1 | Visual field maps of four patients with visual field loss due to post-chiasmatic damage. Two types of visual field maps are shown: standard perimetry and high resolution perimetry (HRP), both showing comparable topographies of the visual field. In both methods, the visual field can be divided into absolute defect areas (black sectors, no detection), relative defect areas (also termed areas of residual vision; gray sectors, partial detection or elevated thresholds), and intact field (white sectors, full detection, low threshold). RT charts and RT histograms show processing speed deficits: (i) patients vary with respect to processing speed in the intact field, and (ii) residual vision areas are impaired when compared with intact sectors. Eccentricity (in degrees of visual angle) is denoted on horizontal and vertical axes in threshold perimetry and HRP charts.

Interestingly, in a study investigating detection and categorization of natural scene images in the spared central visual field of hemianopic subjects, patients with lesions of the right hemisphere were impaired in both tasks, while patients with left hemispheric lesions were impaired in the categorization task only (Cavézian et al., 2010; Perez et al., 2012). Further, hemianopic patients often report difficulties searching their environment with eye movements (Pambakian et al., 2000), leading to disorientation and problems in avoiding obstacles. Lack of visual input does not fully account for abnormal patterns of eye movements during visual search (Machner et al., 2009), and this might be yet another manifestation of perceptual or temporal processing deficits in the intact field, though this hypothesis needs further study. Finally, a case study of a patient suffering from quadrantanopia indicates that the intact field adjacent to the scotoma might not represent visual stimuli accurately (Dilks et al., 2007). The patient perceived presented shapes as elongated toward the scotoma, e.g.
a square as a rectangle, and a circle as an ellipse, which was interpreted as a perceptual consequence of maladaptive visual cortex retinotopic remapping (review Wandell and Smirnakis, 2009).

DISCUSSION AND IMPLICATIONS
The body of evidence for "sightblindness" is only starting to emerge, and further studies supporting these initial observations are necessary. However, if confirmed, the presence of "intact" visual field deficits has significant implications for researchers and clinicians working with visually impaired subjects.

MECHANISMS OF SIGHTBLINDNESS
The neurophysiological alterations resulting from visual system damage that account for the "intact" field deficits still need clarification, yet there are several possible explanations. It has already been shown that synchronization evoked by a stimulus presented in the seeing field of hemianopia patients is compromised (Schadow et al., 2009) and that different neural networks are activated during visual tasks in hemianopia subjects than in healthy controls (Perez et al., 2012). We hypothesize that lesion-induced disturbance of interhemispheric interactions might be the key mechanism, as in the lesioned hemisphere visually induced activation is weaker (Goebel et al., 2001; Nelles et al., 2007) and delayed (Rossion et al., 2000; Schoenfeld et al., 2002) when compared to the uninjured hemisphere. Reduced and delayed activation in the lesioned hemisphere might hamper interhemispheric functional connectivity and consequently synchronization in the uninjured hemisphere (Schadow et al., 2009). Further, cortical lesions lead to retinotopic reorganization of the visual cortex (specifically, receptive field plasticity), which takes place in the area adjacent to the scotoma (Gilbert and Wiesel, 1992; Eysel et al., 1999; Baker et al., 2005; Wandell and Smirnakis, 2009). Such receptive field reorganization is related to increased excitability, and indeed, in cortically lesioned patients the area near the scotoma exhibits hyperexcitability as probed with neurophysiological methods (Braun et al., 2001). Perceptual distortions presumably resulting from cortical reorganization have already been demonstrated (Dilks et al., 2007) and might explain deficits occurring in the vicinity of the lesion. However, intact field deficits were observed to occur in intact visual field areas distant from the scotoma, i.e., processed by the uninjured hemisphere. It has been shown that a visual cortex lesion affects the activity and connectivity of down-stream visual structures (Goebel et al., 2001; Schoenfeld et al., 2002; Nelles et al., 2007). Crucially, unilateral cortical lesions alter the activity of visual cortical areas not only in the damaged, but also in the seemingly unaffected (uninjured) hemisphere, which has been shown in animal models (Rushmore and Payne, 2003) and in patients (Henriksson et al., 2007; Nelles et al., 2007). Changes in activity are related to modification of anatomical (Bridge et al., 2008) and functional connectivity (Silvanto et al., 2009) between the two hemispheres. However, none of these studies related the physiological changes in the uninjured hemisphere to perceptual functions in the ipsilesional visual field. Therefore, future studies must define how the reorganization of visual networks, including the uninjured hemisphere, affects perception, and whether (or when) it is adaptive or maladaptive.
The two mechanisms, retinotopic remapping in the scotoma vicinity and modification of the activity of down-stream visual structures, might affect the intact field in a local and a global manner, respectively. Indeed, in patients with visual field loss the temporal processing speed in the intact field is related to its distance from the scotoma - the closer to the scotoma the stimulus is presented, the longer the RT (Bola et al., 2013). We interpret this as a sign of a local, spatially constrained (retinotopic) influence of the scotoma. At the same time, "intact" field performance is associated with scotoma size - the larger the scotoma, the longer the RT in the intact field. This may be interpreted as a manifestation of a global, spatially non-specific, i.e., non-retinotopic, influence of the lesion. The existence of such a "global" lesion effect raises the question whether perceptual deficits in patients are limited to the visual domain. In the reviewed studies, perceptual deficits were manifested not only by slower RT, but also by worse detection accuracy of figures on a noisy background (Paramei and Sabel, 2008; Schadow et al., 2009), worse accuracy in a detection/categorization task (Perez et al., 2012), and lower double pulse resolution (Poggel et al., 2011). Further, the retinotopic influence of the scotoma (see above) indicates that intact field RT deficits are at least to some extent specific to visual processing - otherwise the deficits should have been evenly spread over the whole visual field. Therefore, our working hypothesis is that the perceptual deficits (e.g., RT slowing) are specifically visual, or greater in the visual domain than in other domains, but this needs to be tested in greater detail. However, it is conceivable that an extensive brain lesion, although located in brain areas typically considered visual, might cause a general, non-specific slowing of information processing (manifested by longer RT), affecting other domains (auditory, motor) as well. At the same time, a persistent visual field defect might lead to widespread changes in the brain, e.g., disturbance of synchronization, oscillations, or functional connectivity (e.g., Dai et al., 2012), causing non-specific slowing secondary to the loss of visual input. Further, because intact field deficits were found in both post-chiasmatic and pre-chiasmatic patients (Bola et al., 2013), not only cortical lesions but also pre-chiasmatic lesions might cause intact field deficits, although we do not know whether the mechanisms of this impairment differ. These hypotheses need to be tested in future studies. In this respect, sightblindness has important implications for the planning of experiments. When studying the effects of visual system damage, either in animals or in visually impaired patients, many experiments use the unlesioned hemisphere as a reference point, as it is presumed to be "normal." In view of "sightblindness," researchers should keep in mind that these reference points (control values) are also to some extent defective, possibly biasing the results. It may be that existing data and their interpretation require reappraisal. To avoid such a bias, future studies with hemianopic patients should always compare performance in perceptual tasks to uninjured controls.

CLINICAL IMPLICATIONS
The "sightblindness" concept shows that our understanding of patients' vision loss can only be rather incomplete when based solely on perimetry results of the primary scotoma.
By testing only very basic perceptual functions, namely the detection of simple static dots on a uniform background, perimetry underestimates the true extent of functional deficits, especially those related to everyday visual functions. Thus, tests of higher visual functions should be included in standard vision examinations if a valid and comprehensive diagnosis of vision loss is the goal (see also Raz et al., 2012). Further, the intact field deficits are expected to influence the subjective quality of vision. An "objective-subjective mismatch" (Sabel et al., 2011a) was observed in subjects with persistent visual field loss, as objective measures of blindness (scotoma size measured by perimetry) and subjective vision loss (measured by vision-related quality of life questionnaires) were only modestly correlated (Müller et al., 2003; Gall et al., 2009). This indicates that factors other than scotoma size account for part of the subjective visual impairment, and sightblindness is one possible candidate. Therapeutic applications in vision rehabilitation do not typically aim at improving visual processing deficits in the intact field. Although activating residual structures is crucial for visual recovery and restoration (Sabel et al., 2011a), improving the quality of vision in the intact field sectors is expected to benefit patients as well. Therefore, measures of intact field functioning should also be included when testing vision restoration methods such as behavioral training (Kasten et al., 1998; Poggel et al., 2004), non-invasive brain stimulation (Sabel et al., 2011b), or both combined (Plow et al., 2012). Altogether, advancing our knowledge about perceptual deficits in the intact visual field may result in a better understanding of normal and abnormal visual system functioning; it is the "second face of blindness." Measuring sightblindness is also an opportunity to improve diagnostic and therapeutic tools with the aim of maximizing the recovery of vision, including these more subtle deficits. This will help to better appreciate, and alleviate, the subjective suffering of patients with partial blindness caused by damage to brain structures, and to consider their vision impairment in a more holistic manner.

ACKNOWLEDGMENTS

This work was funded by Otto von Guericke University, by the DFG (grant no. 436 POL 126/0-1), and by the ERA-net neuron project "Restoration of Vision after Stroke (REVIS)," BMBF (grant no. 01EW1210). The sponsors funded the analysis and interpretation of the data but had no influence over their content or interpretation.
Screening for Fatal Traumatic Brain Injuries in Cerebrospinal Fluid Using Blood-Validated CK and CK-MB Immunoassays

A single, specific, sensitive biochemical biomarker that can reliably diagnose a traumatic brain injury (TBI) has not yet been found, but combining different biomarkers would be the most promising approach in clinical and postmortem settings. In addition, identifying new biomarkers and developing laboratory tests can be time-consuming and economically challenging. As such, it would be efficient to use established clinical diagnostic assays for postmortem biochemistry. In this study, postmortem cerebrospinal fluid samples from 45 lethal TBI cases and 47 controls were analyzed using commercially available blood-validated assays for creatine kinase (CK) activity and its heart-type isoenzyme (CK-MB). TBI cases with a survival time of up to two hours showed an increase in both CK and CK-MB with moderate (CK-MB: AUC = 0.788, p < 0.001) to high (CK: AUC = 0.811, p < 0.001) diagnostic accuracy. This reflected the excessive increase of the brain-type CK isoenzyme (CK-BB) following a TBI. The results provide evidence that CK immunoassays can be used as an adjunct quantitative test to aid in diagnosing acute TBI-related fatalities.

Introduction

Forensic biochemistry is an emerging discipline that provides new insights into death-related systemic pathophysiology [1] and has been found useful in medico-legal investigations, for example in assisting in determining the cause of death [2-5] or estimating the time since death [6]. Traumatic brain injury (TBI) is a global health priority due to the growing burden it constitutes among injuries worldwide [7]. Numerous biomarkers have been clinically investigated for diagnostic purposes [8,9] or as outcome predictors [10,11]. A biomarker that can diagnose TBI would be useful, especially in cases where external head trauma is absent or minimal, such as in infant head injuries [12], deceleration injuries from falls [13] or sports-related injuries [14]. Previous postmortem studies showed that central nervous system biomarkers and acute phase proteins were significantly increased in the cerebrospinal fluid (CSF) of TBI fatalities [4,5]. It was recently shown that combining postmortem biomarkers is more accurate for diagnosing TBI than using a single biomarker [15]. Thus, combining biomarkers may be a reasonable approach, as no single TBI-specific and TBI-sensitive biomarker is available. Rather than looking for new biomarkers, it would be economically more favorable to investigate established laboratory biomarkers and assays that may not yet have been associated with TBIs and then use them in combination with already studied TBI markers [15]. One such biomarker is creatine kinase (CK), together with its heart-type isoenzyme CK-MB, which is routinely used in clinical settings to investigate myocardial injury [16]. After an impact to the head or intracranial bleeding, the CSF CK level increases as the cytosolic brain-type CK isoenzyme (CK-BB) is released from damaged brain cells (i.e., neurons and astrocytes) [17,18]. Serum CK-BB elevation is observed a few hours after a TBI [19], and CSF CK-BB elevation a few hours after a subarachnoid hemorrhage [20]. CSF CK-BB is also useful for clinically estimating the degree of brain damage and can be used to assess the neurological prognosis following cardiac arrest [21]. In forensic pathology, both total postmortem CSF CK and CK-BB have shown potential diagnostic utility, but only in a small number of cases [18,22].
However, postmortem serum CK-BB was unable to differentiate TBIs from control cases. This may be due to cytolysis in the gastrointestinal tract, where CK-BB is also expressed [22]. Despite their potential role in TBI, CK-BB assays have limited application because they are not routinely available in clinical settings. A potential alternative biomarker is CK-MB, another CK isoenzyme, for which assays are routinely available in laboratories. Following a TBI, sustained serum CK-MB elevation caused by sympathetic nervous system stimulation and subsequent myocardial damage has been observed in living patients [23]. However, postmortem serum CK and CK-MB are unstable and unsuitable for TBI detection [24]. This limitation was also recognized in studies that attempted to use postmortem serum CK-MB to diagnose cardiac deaths [25-27]. Although studies showed postmortem CSF CK-MB to be unable to diagnose cardiac deaths, no study has investigated its use in TBI-related fatalities [28]. The aims of the present study were to explore changes in CSF CK and CK-MB levels in TBI fatalities and to assess their diagnostic potential for screening purposes in forensic trauma biochemistry.

Sample Retrieval

CSF samples were collected during routine forensic autopsies at the Institute of Legal Medicine of the University of Leipzig, Germany. Ethical approval was granted by the Ethics Committee of the University of Leipzig (approval number: 388-15-ek). A total of 92 samples were collected and allocated into two groups according to the cause of death. The TBI group consisted of 45 samples with intracranial bleeding following blunt force trauma and macroscopically/microscopically proven cortical contusions. These samples were further divided into three subgroups: 23 acute (survival time < 2 h; median survival time = 8 min, interquartile range (IQR) = 55 min); 12 subacute (survival time 2-72 h; median = 10 h, IQR = 29.5 h); and 10 delayed death (survival time > 72 h; median = 162.5 h, IQR = 165.5 h). The control group, 47 samples, was subdivided into acute myocardial infarction (AMI; 14 samples), diffuse cerebral hypoxia (DCH; 15 samples: suffocation, hanging and strangulation), and isolated torso trauma (ITT; 18 samples: blunt and sharp forces to any part of the body excluding the head, predominantly the neck and front side of the trunk; Figure 1).

Figure 1. Allocation of postmortem CSF CK and CK-MB samples to TBI and control subgroups (AMI, acute myocardial infarction; DCH, diffuse cerebral hypoxia; h, hours; ITT, isolated torso trauma; TBI, traumatic brain injury; t, survival time).

Survival time for the AMI and DCH groups was "non-existent," as the individuals died immediately following the respective event, whereas the ITT cases showed a median survival time of 60 min (IQR = 65 min). Times were determined on the basis of medical and police records and pathological findings. The CSF samples were aspirated from the suboccipital subarachnoid space using a sterile syringe, centrifuged at 5000 rpm for 5 min at 4 °C, and then stored in aliquots at −80 °C until further processing. This reduced the influence of cellular components such as erythrocytes on the biomarker levels. Control group samples with visible signs of putrefaction and decay, neurodegenerative diseases, neoplasms, or previous or existing TBIs were excluded from the study.
Data regarding age, brain weight, postmortem interval (PMI) and sex are shown in Table 1.

Table 1. Median age, brain weight (m_brain), postmortem interval (PMI) and sex ratio (interquartile ranges in parentheses) for the traumatic brain injury (TBI) and control groups and their respective subgroups. AMI, acute myocardial infarction; DCH, diffuse cerebral hypoxia; ITT, isolated torso trauma.

Laboratory Procedures

All CSF samples were analyzed on a standard automated clinical chemistry analyzer (Cobas 8000, Roche Diagnostics, Mannheim, Germany) using fully automated assays for CK (Roche Diagnostics, Mannheim, Germany, ref. 07190794190, photometric UV test) and CK-MB (Roche Diagnostics, Mannheim, Germany, ref. 07190808190, immunologic UV test), both certified for in vitro diagnostics in plasma and serum. The hemolysis index (H index), representing the concentration of free hemoglobin in the sample, was determined by measuring absorbance at 600/570 nm in saline-diluted samples (ref. 05172179190, using the c701 module by Roche Diagnostics, Mannheim, Germany). According to the manufacturer's data sheet for the CK and CK-MB immunoassays, interference from hemolysis can be excluded up to an H index of 20 [29].

Statistical Analysis

Statistical analysis was performed using GraphPad Prism version 8 (GraphPad Software, San Diego, CA, USA) and Microsoft Excel version 16.16 (Microsoft Corporation, Redmond, WA, USA). The Anderson-Darling normality test was used to assess the Gaussian distribution of the data. Receiver operating characteristic (ROC) curve analysis was performed to identify sensitivity and specificity, by conservative estimation, for the differentiation between TBI cases and controls using the Wilson-Brown method (confidence interval = 95%). Potential threshold values (accentuated for specificity) were determined from the ROC analysis based on maximum positive likelihood ratios with sensitivity values of at least 20%. A Kruskal-Wallis test followed by an uncorrected Dunn's test was applied to compare the subgroups among each other, the TBI subgroups to the overall control group, and the control subgroups to the overall TBI group. The same test was applied to compare confounding parameters between TBI cases and controls, and CK/CK-MB levels between TBI cases and controls with H indices < 20. CK/CK-MB levels between the overall TBI and control groups were tested for statistically significant differences using the area under the ROC curve (AUC). Bivariate analyses (Pearson's r for parametric data and Spearman's ρ for non-parametric data; two-tailed) were performed among the CK/CK-MB levels, age, brain weight, H index and PMI. A p value ≤ 0.05 was considered statistically significant. Medians and IQRs are provided in the text.
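To illustrate the threshold-selection rule just described, the following Python sketch computes an ROC curve and picks a specificity-accentuated cut-off by maximizing the positive likelihood ratio subject to a sensitivity of at least 20%. The CSF CK values are invented placeholders rather than the study data, and scikit-learn stands in for the GraphPad workflow actually used.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Placeholder CSF CK activities (microkat/l); only the qualitative pattern
    # (higher values in TBI fatalities) mirrors the study.
    rng = np.random.default_rng(0)
    ck_tbi = rng.lognormal(mean=5.0, sigma=1.2, size=45)   # TBI fatalities
    ck_ctrl = rng.lognormal(mean=3.0, sigma=1.0, size=47)  # controls

    y = np.r_[np.ones(45), np.zeros(47)]
    x = np.r_[ck_tbi, ck_ctrl]

    fpr, tpr, thr = roc_curve(y, x)
    with np.errstate(divide="ignore", invalid="ignore"):
        lr_plus = tpr / fpr                  # positive likelihood ratio LR+
    lr_plus = np.where(fpr > 0, lr_plus, np.inf)
    lr_plus[tpr < 0.20] = -np.inf            # enforce sensitivity >= 20%
    best = int(np.argmax(lr_plus))

    print(f"AUC = {roc_auc_score(y, x):.3f}")
    print(f"threshold = {thr[best]:.1f} microkat/l, sensitivity = {tpr[best]:.1%}, "
          f"specificity = {1 - fpr[best]:.1%}")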
Diagnostic Accuracy of CK/CK-MB Levels in CSF Is Moderate to High

The ROC curve analyses showed that both CSF CK and CK-MB levels were able to differentiate between TBI fatalities and controls with moderate to high diagnostic accuracy (Figure 4, Table 2). The diagnostic accuracy for distinguishing between acute TBIs and controls using CSF levels of CK was high (AUC = 0.876; Figure 4, Table 2). A threshold CSF CK value of 137.2 µkat/l differentiated a fatal TBI from other traumatic, cardiovascular or hypoxic causes of death with a sensitivity of 65.2% and a specificity of 95.7% (Table 2). All other TBI subgroups could be distinguished from control cases with moderate diagnostic accuracy based on their CSF CK and CK-MB levels (Figure 4, Table 2).

Table 2. Descriptive data for the receiver operating characteristic curves in Figure 4 and the results of the specificity-accentuated threshold value analysis. AUC, area under the curve; CK, creatine kinase; CK-MB, heart-type creatine kinase; TBI, traumatic brain injury.

Quality Control of Samples and Subgroups with Bivariate Analysis Results

The CSF samples were clear on visual inspection in the majority of cases (median H index for CSF = 6, range 0-288), but diminutive traces of blood were unavoidable during sampling. The H index for TBI cases was 41 (IQR = 136) and was significantly higher than that of control cases, which had an H index of 2 (IQR = 3, p < 0.001). The significant differences between the CSF CK and CK-MB levels of TBI fatalities and control cases persisted even when only cases with an H index of up to 20 were compared, resulting in a comparison of 17 TBI cases and 46 controls (CK: p = 0.022; CK-MB: p = 0.017).
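The hemolysis-restricted comparison above, and the rank correlations reported next, can be reproduced schematically as follows. The arrays are invented stand-ins for the case-level data, and scipy replaces the GraphPad/Excel tooling named in the methods; with two groups, the Kruskal-Wallis test reduces to a Mann-Whitney U test.

    import numpy as np
    from scipy import stats

    # Invented case-level data: column 0 = CSF CK (microkat/l), column 1 = H index.
    rng = np.random.default_rng(1)
    tbi = np.column_stack([rng.lognormal(5.0, 1.2, 45), rng.lognormal(3.5, 1.0, 45)])
    ctrl = np.column_stack([rng.lognormal(3.0, 1.0, 47), rng.lognormal(0.7, 0.8, 47)])

    H_MAX = 20  # hemolysis limit below which the assays are unaffected [29]

    # Compare CK between groups in the sub-cohort with acceptable hemolysis.
    ck_tbi = tbi[tbi[:, 1] <= H_MAX, 0]
    ck_ctrl = ctrl[ctrl[:, 1] <= H_MAX, 0]
    _, p = stats.kruskal(ck_tbi, ck_ctrl)
    print(f"n = {ck_tbi.size} TBI vs {ck_ctrl.size} controls, p = {p:.3f}")

    # Spearman rank correlation between CK level and H index within each group.
    rho_tbi, p_tbi = stats.spearmanr(tbi[:, 0], tbi[:, 1])
    rho_ctrl, p_ctrl = stats.spearmanr(ctrl[:, 0], ctrl[:, 1])
    print(f"TBI: rho = {rho_tbi:.2f} (p = {p_tbi:.3f}); "
          f"controls: rho = {rho_ctrl:.2f} (p = {p_ctrl:.3f})")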
The H index showed no statistically significant difference between the TBI subgroups (p ≥ 0.574). The H index was independent of the PMI in both the TBI (p = 0.372) and control (p = 0.545) groups, but correlated moderately with CSF CK (TBI cases: p = 0.010, r = 0.380; controls: p = 0.013, r = 0.358) and CSF CK-MB (TBI cases: p = 0.005, r = 0.408; controls: p = 0.028, r = 0.320). No statistically significant differences in age, brain weight, sex or PMI were found between the TBI and control groups. However, both the CSF CK (p < 0.001, r = 0.507) and CK-MB (p < 0.001, r = 0.503) values of the control group revealed a moderate positive correlation with the PMI, which was absent in the TBI group (p ≥ 0.887). Both CSF CK and CK-MB levels were otherwise independent of age (p ≥ 0.502), brain weight (p ≥ 0.078) and sex (p > 0.069) in both the TBI and control groups.

Discussion

Postmortem CSF CK-MB had previously been investigated as a biomarker for the cause of death. One study examined 295 fatalities, including 29 TBI cases, of which only seven were categorized as acute deaths with survival times of up to three hours [30]. In that study, acute CSF CK-MB levels were shown to be higher than in fire- or temperature-related fatalities [30]. In another study of 1923 autopsy cases, TBI was not specifically analyzed [28]. Total CSF CK was investigated in TBI-related fatalities, and higher levels were noted compared to non-traumatic hypoxic brain damage, cardiac and miscellaneous causes of death [18,22]. No studies used both CSF CK and CK-MB for analysis, and none stratified TBI survival times. The results of the present study showed that both blood-validated assays for CK and CK-MB can be used in postmortem CSF to discriminate TBI fatalities from control cases with moderate to high diagnostic accuracy. After stratifying the TBI group into different survival times, the overall differences in CSF CK and CK-MB levels between TBI cases and controls came from the acute TBI subgroup (survival time of up to 2 h). Both CSF CK and CK-MB levels decreased with increasing survival time, resulting in comparable levels between TBI cases with longer survival times and controls. In vivo, CSF samples of healthy individuals are expected to be free of CK-MB and the muscle-type isoenzyme CK-MM [31]. However, following brain injury, CSF CK-BB and mitochondrial CK increase [18,32]. It has been suggested that CSF CK-MB levels are influenced by the recombination of CK-MM and CK-BB as a consequence of blood contamination [31]. The increased levels of CSF CK following TBI observed in this study could be interpreted as a consequence of either the (traumatic) breakdown of the blood-brain barrier (BBB) with an influx of CK originating outside the central nervous system, an increased release of CK from damaged cells within an intact BBB, or a combination of the two.

Effects of Increased CK Influx into the CSF from Outside the CNS Following Severe TBI

After traumatic injury to the brain, the BBB becomes mechanically disrupted, leading to an extravasation of erythrocytes and proteins. This may be reflected macroscopically by scattered petechiae throughout the brain tissue or a yellow-reddish discoloration of the CSF during autopsy [33]. As even large plasma proteins such as albumin can be found clinically in the CSF of TBI patients [34], the mechanical and functional disruption of the BBB [33] can lead to an uncontrolled influx of CK-MM and CK-MB into the CSF.
In blood serum, in vivo CK levels in patients with rhabdomyolysis were shown to rise within 12 h and peak within one to three days [35]. Similarly, serum CK-MB levels following myocardial infarction have a latency time of at least one hour before an elevation can be detected in living patients [36]. Therefore, the survival times of the acute TBI fatalities in this study are too short for peripheral CK isoenzymes to account for the significant elevations in the CSF.

Increase of CK Is Likely Caused by the CK-BB Isoenzyme

The acute, significant increase of CSF CK following a traumatic impact to the head is probably caused by the release of CK-BB from the cytosol of damaged brain cells, especially neurons and astrocytes [18,37]. CK-BB accounts for 80-95% of the total CK activity in the brain [18]. An increase in CSF CK-BB is expected in fatal TBIs, and this increase correlated with the amount of brain damage in both human and animal studies [38,39]. Thus, the increase in CK levels of acute TBI fatalities seen in this study was likely the consequence of an increase in brain-specific CK-BB. Given that a specific CK-BB assay is commonly unavailable in routine laboratories, the widely established and economical standard CK enzyme activity assay served as a sufficient substitute in a postmortem setting. Previous studies on postmortem CK-BB used the now discontinued "Impress-BB" kit, manufactured by International Immunoassay Laboratories, which allowed CK-BB activity to be determined directly, obviating interference from CK-MB subunits [18,22]. Nevertheless, the purpose of this study was to determine whether standard laboratory assays such as CK and CK-MB are sufficiently accurate to discriminate between TBI fatalities and controls. The results support routine CK and CK-MB assays as potential replacements for the rare and expensive specific CK-BB immunoassays. Likewise, acute TBI fatalities could be distinguished from controls by using a commercial CK-MB immuno-inhibition assay to analyze CSF samples. However, CSF CK-MB appears to be influenced by CK-BB to a large degree, and so does not solely reflect the "true" CK-MB levels. Consequently, the CK-MB immunoassay used here served as a qualitative indicator of cerebral damage when applied to the CSF of TBI fatalities. The test principle was originally designed to measure the total catalytic activity of the CK-MB isoenzyme in serum or plasma: the activity of the CK-B subunit is measured after immuno-inhibition of the CK-M subunit, and the result is multiplied by a factor of two [29]. This assumes that CK-BB is generally negligible in blood, with higher values indicating a malignant disease [40]. Thus, the indirectly measured values can be disproportionately high if CK-BB is abundantly present in the given fluid, as can be postulated for the fatal TBI cases. This explains why the measured CK-MB values in this study were often larger than the CK values. Additionally, it was impossible to determine the extent to which the CK-MB results reflected the excessive activity of CK-BB rather than any true CK-MB activity. Derived from the test principle of the CK-MB immunoassay, and assuming the almost exclusive presence of CK-BB in TBI-related CSF samples, the true CK-BB value could potentially be estimated by halving the CK-MB value. However, this remains speculative until it has been validated against the value obtained by a specific CK-BB immunoassay in future research.
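A toy calculation makes the immuno-inhibition arithmetic explicit. The isoenzyme activities below are invented, and the premise that inhibiting the M subunit removes exactly half of the CK-MB activity is the idealization implied by the assay's doubling step, not a validated model of the chemistry.

    # Hypothetical true isoenzyme activities in a TBI-like CSF sample (microkat/l).
    ck_mm, ck_mb, ck_bb = 2.0, 1.0, 40.0

    # After anti-M immuno-inhibition only B-subunit activity remains:
    # half of CK-MB (one B subunit of two) and all of CK-BB (two B subunits).
    residual_b_activity = 0.5 * ck_mb + ck_bb

    # The assay doubles the residual activity to report "CK-MB".
    reported_ck_mb = 2.0 * residual_b_activity

    total_ck = ck_mm + ck_mb + ck_bb
    print(f"reported CK-MB = {reported_ck_mb:.1f} vs total CK = {total_ck:.1f}")
    # With CK-BB dominant, the reported value (81.0) exceeds total CK (43.0),
    # as observed in the study; halving it (40.5) approximates true CK-BB (40.0).

Under these assumptions, the arithmetic shows both why the reported CK-MB can exceed total CK and why halving it is a plausible, though speculative, CK-BB estimate.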
An electrophoretic evaluation of cadaveric CSF samples from both TBI and non-TBI-related fatalities could further clarify the proportions of the isoenzymes within the total CK; this could not be performed in the present study due to a lack of material. Such a distribution analysis would, however, allow the qualitative activity of the CK isoenzymes in postmortem samples to be deduced.

CSF Test Results Are Independent of the Presence of Blood or Contamination

In the study, both CK and CK-MB activity correlated with the H index of the samples, the latter being significantly larger in TBI fatalities. Both CK and CK-MB levels were also significantly higher in TBI cases when only samples with a maximum H index of 20 were compared as a sub-cohort, for which the immunoassays used are known to be unaffected by hemolysis according to the manufacturer's data sheet [29]. Increased blood admixture in the CSF samples of TBI fatalities compared to non-TBI-related fatalities is a common finding [5] that can be attributed to the traumatic disruption of blood vessels and the collection of blood in proximity to the CSF, especially in subarachnoid hemorrhages [5]. Hence, an increased H index should be expected when the CSF samples of severe TBIs are processed and should not be misinterpreted as contamination due to improper sampling. Any iatrogenic admixture of blood during sampling should have been avoided or reduced to irrelevance by the centrifugation performed before storage; this was underlined by the minute H indices of all control samples.

For TBI Cases, CK Levels in CSF Are Useful for Cause of Death Determinations and Survival Time Estimations but Not for Time since Death Estimations

The results revealed that CK and CK-MB levels in the CSF of TBI fatalities were higher than in non-TBI-related deaths; therefore, the study hypothesis was confirmed. Interestingly, this was the case even though the CK-MB elevation was most likely attributable to a CK-BB increase in the CSF rather than to the expected response to damaged cardiomyocytes [23], or to other peripheral interactions outside the central nervous system after the TBI [41]. The diagnostic accuracy for detecting fatal acute TBI in CSF, determined with ROC analyses, was high for the commercially available standard CK immunoassay and moderate for the commercially available standard CK-MB immunoassay. Consequently, we believe that the results justify adding CK analysis in CSF to forensic biochemistry databases [1] in order to validate the observations in larger samples, in those beyond the 20-88-year age range of the study's TBI group, and in pediatric samples. The future inclusion of infant cases, such as inflicted TBI, might help to establish CSF CK as an additional postmortem TBI-related biomarker and support trauma diagnoses by means of forensic biochemistry. Furthermore, CK can be implemented in decision-tree models using several biomarkers to enhance the diagnostic accuracy for detecting acute, fatal TBI (a sketch of this idea follows below). Our results suggest that a postmortem CSF threshold of 137.2 µkat/l for CK would be adequate to detect acute, fatal TBI irrespective of age, brain weight, sex or PMI. Based on the ROC curve analysis, the measurement of CSF CK-MB activity using commercial immuno-inhibition assays was inferior to the blood-validated total CK assay; it is therefore not recommended for postmortem TBI-related screening purposes.
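As a sketch of the decision-tree idea mentioned above, the snippet below combines CSF CK with two additional, unspecified CSF biomarkers in a shallow sklearn tree; the feature set, the values, and the tree depth are all hypothetical, and the univariate 137.2 µkat/l screen from the ROC analysis is applied alongside it for comparison.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical feature matrix: column 0 = CSF CK, columns 1-2 = two other
    # (unnamed) CSF biomarkers. All values are invented for illustration.
    rng = np.random.default_rng(2)
    X_tbi = rng.lognormal([5.0, 2.0, 1.0], 1.0, size=(45, 3))
    X_ctrl = rng.lognormal([3.0, 1.0, 0.5], 1.0, size=(47, 3))
    X = np.vstack([X_tbi, X_ctrl])
    y = np.r_[np.ones(45), np.zeros(47)]

    # A shallow tree keeps the combined rules interpretable for forensic use.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Univariate screen from the ROC analysis: flag acute TBI when CSF CK
    # exceeds the specificity-accentuated threshold of 137.2 microkat/l.
    flag_tbi = X[:, 0] > 137.2
    print(tree.score(X, y), flag_tbi.mean())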
Furthermore, future studies should investigate how the TBI-related elevation of CK and its isoenzymes relates to specific TBI subtypes such as epi- or subdural hematomas, intracerebral bleeding or diffuse axonal injury. In summary, the CK level in CSF is a useful supplement for cause-of-death determination and TBI survival time estimation in a forensic context, but it is unsuitable for time-since-death estimation. Moreover, when measuring CK postmortem, a baseline level has to be expected in all CSF samples, as reflected by the CK values of the control cases. This is likely caused by either the functional postmortem breakdown of the BBB or the release of CK-BB following hypoxic-ischemic neuronal death [32] after the oxygen supply to the brain ceases. CSF CK-BB levels were shown to be 10 to 20 times higher than the last in vivo measurements as soon as 2 h after death [42], which supports our previously stated hypothesis that TBI-related biomarkers in the CSF of fatalities are exceptionally high compared to cases in which the patient survived the trauma [3,5].

Limitations

The study had a limited sample size, and unalterable factors such as the environmental temperature at death and the storage-related freezing of the CSF samples could have affected degradation and, consequently, the biochemical observations. Although considered unlikely for the aforementioned reasons, the observed increase in total CK levels in the CSF could be based on an increase in the isoenzymes CK-MB and CK-MM to an unknown extent; this can only be excluded by a specific CK-BB test in the future. Here, an additional CK-BB test would have been advantageous for clarifying whether the TBI-related CK increase was, in fact, caused by an excessive increase of CK-BB, but this was not possible due to limited financial resources. As the CK-MB activity levels exceeded values known from clinical investigations, the CK-M subunits might not have been inhibited entirely during the immuno-inhibition and might have contributed to the measured enzymatic activities for both CK and CK-MB. The influence of postmortem macro-CK formation on the CK-MB measurements is unclear, and the results might have been affected by it. Furthermore, leakage of peripheral CK isoenzymes due to agonal and postmortem changes cannot be excluded. Increased levels of mitochondrial CK were observed in the CSF of patients with hypoxic-ischemic brain damage [32]; thus, the results of this study might have been influenced by increased levels of mitochondrial CK in TBI fatalities compared to non-TBI fatalities. It should also be mentioned that the biomarker levels measured in this study might have been influenced by other injuries related to the traumatic event and might not have been caused exclusively by the fatal TBI.

Conclusions

Our data suggest that elevated CSF levels of CK, determined by a commercial clinical chemistry assay developed for in vitro diagnostics in plasma and serum, may be used to discriminate between fatal acute TBIs and controls. Both the CK and CK-MB values appeared to reflect an excessive increase in CSF CK-BB following trauma-related damage to brain cells. The CK immunoassays can thus serve as a quantitative postmortem biochemistry test in fatal acute TBI samples. Further studies combining CSF CK and CK-BB are recommended to establish them more firmly as biomarkers for TBI.
Fitting a Deep Generative Hadronization Model

Hadronization is a critical step in the simulation of high-energy particle and nuclear physics experiments. As there is no first-principles understanding of this process, physically inspired hadronization models have a large number of parameters that are fit to data. Deep generative models are a natural replacement for classical techniques, since they are more flexible and may be able to improve the overall precision. Proof-of-principle studies have shown how to use neural networks to emulate specific hadronization models when trained using the inputs and outputs of classical methods. However, these approaches will not work with data, where we do not have a matching between observed hadrons and partons. In this paper, we develop a protocol for fitting a deep generative hadronization model in a realistic setting, where we only have access to a set of hadrons in data. Our approach uses a variation of a Generative Adversarial Network with a permutation-invariant discriminator. We find that this setup is able to match the hadronization model in Herwig with multiple sets of parameters. This work represents a significant step forward in a longer-term program to develop, train, and integrate machine learning-based hadronization models into parton shower Monte Carlo programs.

Introduction

Hadronization connects theory and experiment by transforming the fundamental degrees of freedom (quarks and gluons) into observable degrees of freedom (hadrons). However, we do not have a first-principles understanding of hadronization, and so existing approaches use physically inspired, highly flexible models fit to data. Our vision is to replace these hand-crafted models with deep learning, where the additional expressivity would have the potential to enhance precision, the models would be readily differentiable, and they would be naturally compatible with Graphics Processing Units (GPUs). There are currently two hadronization models in wide use: the cluster model [1] and the string model [2,3]. The former is employed by default in the Herwig [4-7] and Sherpa [8,9] Parton Shower Monte Carlo (PSMC) programs, and the latter is used by default in the Pythia [10,11] PSMC. Previously, Refs. [12] and [13] showed that deep generative models could emulate the string and cluster models, respectively, in a simple setting where the neural network has access to parton-hadron pairs and only pions are produced. Furthermore, these models were integrated into the Pythia and Herwig PSMC programs. These papers marked an important milestone, but they represent only the first steps along a multi-year program to achieve a complete, integrated, and tuned machine learning (ML)-based hadronization model.

While previous work has shown that neural networks can emulate the existing hadronization models, we want to eventually fit the models to data. A fundamental challenge with using data directly is that hadronization acts locally on partons, while only non-local information about hadrons is observable. In other words, events are measured as a permutation-invariant set of hadrons with no inherent order or grouping, so one cannot know which hadrons 'came from' the same partons. This means that we need a model that can learn to generate hadrons from partons based on information from a loss function that acts on the set of observable hadrons. This two-level challenge of fitting to data rules out most standard implementations of deep generative models.
Variational Autoencoders (VAEs) [14,15], Normalizing Flows (NFs) [16,17], and diffusion models [18-20] do not directly apply, because we need to know the probability density of the partons and we need a permutation-invariant reconstruction loss (VAEs), probability density (NFs), or score function (diffusion). While there has been some progress on these fronts [21-30], Generative Adversarial Networks (GANs) [31,32] can be naturally applied to this setting. For GANs, the latent space does not require a tractable probability density, the discriminator can be applied at a different level (hadrons) than the generator (partons), and permutation invariance can be enforced by using a set-based classifier for the discriminator. GANs were the first deep generative model applied to particle physics data [33-35] and have since been extensively studied (see e.g. Refs. [36-38]). GAN-like setups have also been used for two-level fitting in the context of parameter estimation [39] and unfolding [40]. We propose to use GANs for fitting hadronization models to data. We embed the GAN-based hadronization model HadML introduced in Ref. [13] in a full event-level fitting framework. A fully connected neural network takes individual clusters as input and outputs pairs of hadrons. This network acts in the cluster rest frame. The resulting hadrons are then boosted to the lab frame, and the GAN discriminator is based on Deep Sets [41], a permutation-invariant neural network architecture. We restrict ourselves to cluster model inputs (clusters created from pre-confined partons) and pion outputs in order to focus on the two-level fitting challenge. These simplifications will be relaxed in future work. This paper is organized as follows. Section 2 introduces the conceptual and technical details behind our fitting framework. Numerical examples are presented in Sec. 3, including two variations on the cluster model. The paper ends with conclusions and outlook in Sec. 4.

Statistical Approach

Our goal is to learn a conditional generator function G(z, λ; ω_G), with parameters ω_G, which maps cluster kinematic properties onto the kinematic properties of the two hadrons from each cluster decay, {h_1, h_2} ∈ R^(2N_h). (The cluster model can produce more than two hadrons but, at the energies we consider, there are usually only two; we restrict to two for this study and will explore more complex decays in future work.) Here, z ∈ R^(N_z) is the input noise variable sampled from the prior p(z), and λ ∈ R^(N_λ) is the conditional variable, namely the cluster kinematic properties. Since the two hadrons from a cluster decay must be back-to-back in the rest frame of the cluster, the generator G can instead output the angles θ and φ of the "first hadron" in the cluster rest frame. Note that here φ is defined in the range (−π/2, π/2), and the hadron with φ in this range is defined to be the first hadron. In the original setup [13], a discriminator function D(θ, φ; ω_D), parametrized by ω_D, is learned to represent the probability that {θ, φ} came from cluster fragmentation rather than from the generator G. D and G are then trained alternately to maximize and minimize, respectively, the loss function

L = Σ [ log(D(τ(λ))) + log(1 − D(G(z, λ))) ],   (2.1)

where τ is the cluster fragmentation and the sum runs over the training clusters. In the setup above, all hadrons are paired and matched to a cluster. In actual data, however, the only observables are the kinematic properties of each individual hadron.
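To make the kinematics concrete, the following is a minimal sketch, under the stated back-to-back assumption and with on-shell pions, of how a generated (θ, φ) pair in the cluster rest frame becomes two lab-frame hadron four-vectors. The function name and conventions are ours, not those of the HadML code.

    import numpy as np

    M_PI = 0.1396  # charged pion mass [GeV]

    def decay_and_boost(cluster_p4, theta, phi):
        """Decay a cluster into two back-to-back pions at angles (theta, phi)
        in its rest frame, then boost both to the lab frame.
        cluster_p4 = (E, px, py, pz) in GeV."""
        E, px, py, pz = cluster_p4
        M = np.sqrt(E**2 - px**2 - py**2 - pz**2)  # cluster invariant mass
        e_star = M / 2.0                           # each pion's rest-frame energy
        p_star = np.sqrt(max(e_star**2 - M_PI**2, 0.0))
        n = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        h1 = np.array([e_star, *(p_star * n)])
        h2 = np.array([e_star, *(-p_star * n)])

        beta = np.array([px, py, pz]) / E          # lab velocity of the cluster
        b2 = float(beta @ beta)

        def boost(h):
            if b2 == 0.0:
                return h.copy()
            gamma = 1.0 / np.sqrt(1.0 - b2)
            bp = float(beta @ h[1:])
            e_lab = gamma * (h[0] + bp)
            p_lab = h[1:] + beta * ((gamma - 1.0) * bp / b2 + gamma * h[0])
            return np.array([e_lab, *p_lab])

        return boost(h1), boost(h2)

    # Example: a cluster with E = 5 GeV moving along z; h1 + h2 recovers cluster_p4.
    h1, h2 = decay_and_boost((5.0, 0.0, 0.0, 3.0), theta=1.0, phi=0.3)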
In order to be able to fit the model to actual data, where the hadron matching and cluster information are not accessible, the discriminator function is modified to be D_E(x), where D_E takes as input the set of hadron kinematic properties x ≡ {h_1, h_2, ..., h_n} from the same event. Furthermore, we parameterize D_E as a Deep Sets model [41],

D_E(x) = F( (1/n) Σ_i Φ(h_i) ),   (2.2)

where Φ embeds each hadron into a fixed-length latent space and F acts on the average of the latent space. Due to the average, D_E can take a hadron set of any length and is invariant under permutations of the hadrons. The loss function thus becomes

L = Σ [ log(D_E({τ(λ)})) + log(1 − D_E({G(z, λ)})) ],   (2.3)

where {G(z, λ)} is generated from the set of clusters that came from the same event and the sum runs over events. The generator acts in the cluster rest frame, and the resulting hadrons are boosted into the lab frame before being passed to the discriminator. A summary of the setup and how it differs from Ref. [13] is presented in Fig. 1. In our implementation, G is a neural network. However, this approach could also be used to fit (without binning) data to a parametric physics model. In that case, G would be, e.g., the cluster model, and the parameters would not be the weights and biases of a neural network, but instead the parameters of the cluster model. This would require making the cluster model differentiable so that gradients could be passed through the model. We leave explorations of this hybrid setup to future work.

Figure 1. An overview of the model presented in this paper and how it compares to HadML v1 from Ref. [13]. Since the clusters are not observable in data, the discriminator in v2 acts on sets of hadrons and does not have access to cluster-hadron-hadron labels. We first study the performance in the same Herwig setup as in Ref. [13] ('Closure Test') and then check that the model is also able to fit another Herwig setup ('Cluster Frag') with variations in the cluster hadronization model ('Stress Test').

Machine Learning Implementation

Both the generator and discriminator functions are parametrized as neural networks and implemented using PyTorch [42]. The generator is a fully connected network consisting of two hidden layers with 256 nodes per layer. The noise dimension is set to 10. The discriminator comprises two networks, Φ and F. Both Φ and F are fully connected networks with two hidden layers of 256 nodes each. Each intermediate layer in these networks uses batch normalization and a LeakyReLU [43] activation function. The last layer of the generator uses a tanh activation function to restrict the outputs to the range (−1, 1); the outputs are then scaled and transformed linearly to match the actual ranges, (−π/2, π/2) for φ and (0, π) for θ. The last layer of F uses a sigmoid activation function, and no activation is used for the last layer of Φ. All neural network inputs are normalized to the range (−1, 1), and the noise prior p is a Gaussian distribution with a mean of 0 and a width of 1. The generator and discriminator are optimized alternately (1 discriminator step and 5 generator steps) with Adam [44], with learning rates of 5 × 10^−7 and 10^−4 for the generator and discriminator, respectively. The training uses a batch size of 10,000 and is performed for 6,000 epochs. The hyperparameters were optimized with Weights and Biases [45].
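The architecture just described can be sketched in a few lines of PyTorch. The layer widths, activations, and noise dimension follow the text; the conditioning dimension, per-hadron feature count, and latent size are our own assumptions, and this is not the authors' HadML implementation.

    import torch
    import torch.nn as nn

    def mlp(n_in, n_out, out_act=None):
        """Two hidden layers of 256 nodes with batch norm and LeakyReLU."""
        layers = [nn.Linear(n_in, 256), nn.BatchNorm1d(256), nn.LeakyReLU(),
                  nn.Linear(256, 256), nn.BatchNorm1d(256), nn.LeakyReLU(),
                  nn.Linear(256, n_out)]
        if out_act is not None:
            layers.append(out_act)
        return nn.Sequential(*layers)

    N_NOISE, N_COND, N_HAD, N_LATENT = 10, 4, 4, 64  # N_COND/N_HAD/N_LATENT assumed

    # Generator: (noise, cluster kinematics) -> (theta, phi) in (-1, 1),
    # rescaled downstream to (0, pi) and (-pi/2, pi/2).
    generator = mlp(N_NOISE + N_COND, 2, out_act=nn.Tanh())

    phi_net = mlp(N_HAD, N_LATENT)                   # per-hadron embedding Phi
    f_net = mlp(N_LATENT, 1, out_act=nn.Sigmoid())   # F on the averaged latent

    def discriminate(hadrons):
        """Deep Sets discriminator D_E = F(mean_i Phi(h_i)); permutation
        invariant. hadrons: (batch, n_hadrons, N_HAD). Padding/masking for
        variable-length events is omitted here."""
        b, n, d = hadrons.shape
        emb = phi_net(hadrons.reshape(b * n, d)).reshape(b, n, N_LATENT)
        return f_net(emb.mean(dim=1)).squeeze(-1)

    def gan_loss(real_hadrons, fake_hadrons, eps=1e-8):
        """Event-level loss of Eq. (2.3): D maximizes it, G minimizes it."""
        d_real = discriminate(real_hadrons)
        d_fake = discriminate(fake_hadrons)
        return (torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps)).mean()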
Datasets

Crucial data for fitting hadronisation models are LEP events collected in e+e− collisions at the centre-of-mass energy √s = 91.2 GeV. Therefore, we used such events, generated with version 7.2.1 of the Herwig Monte Carlo generator, as the training dataset for our generative hadronization model. As mentioned earlier, the cluster model [1] is used for hadronisation in the Herwig generator. Based on colour pre-confinement [46], the cluster model groups a partonic final state into a set of colour-singlet clusters (pre-hadrons) with an invariant mass distribution that is independent of the specific hard scattering process or its centre-of-mass energy and that peaks at low masses. Therefore, most clusters decay into two hadrons. However, a small fraction of clusters are too heavy for this approach to be justified, and these heavy clusters are first split into lighter clusters before decaying. The decay of such massive clusters is not discussed in this publication but will be considered in future work. Each entry in our training dataset includes information about the four-momenta of all the light clusters in an event and the four-momenta of their parents (partons) and children (hadrons), along with their flavours. An example of an entry from our datasets is available on Zenodo at Ref. [47]. To simplify the training data further, only decays into π mesons were considered. (In Herwig, this is achieved by adding the line 'set HadronSelector:Trial 1' to the default LEP.in input card. The only other modification to the default hadronisation settings was that the hadrons produced from cluster decays were put on the mass shell, achieved by adding the command 'set ClusterDecayer:OnShell Yes' to the input file.)

To check whether the model can adapt to different variants of the kinematics of hadron decays, we also prepared two datasets with different settings of the ClSmr parameter: minimal (0) and maximal (2). The ClSmr parameter is the main parameter governing the kinematics of cluster hadron decay. Hadrons that contain a parton produced in the perturbative stage of the event retain the direction of that parton in the cluster rest frame, with possible Gaussian smearing of the direction. The smearing is controlled by the ClSmr parameter through an angle θ_smear, where

cos θ_smear = 1 + ClSmr · log R,

with R a uniform random number chosen from [0, 1]. For more details about the parameters of the cluster model implemented in Herwig, see Chapter 7 of the generator's manual [5]. In Sec. 3.2 we use the minimal ClSmr setting as our alternative sample and refer to this setup as Herwig Cluster kin-min. As would be the case with actual data, we use clusters from the nominal setting when fitting the alternative sample; since changing ClSmr does not change the cluster kinematic properties, the inputs to the GAN model are statistically correct. When we fit the nominal sample, the cluster inputs to the fit are distinct from, but statistically identical to, those in the dataset we are fitting.
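For illustration, the ClSmr smearing rule can be sampled as below. Redrawing whenever the cosine falls outside the physical range is our own handling of the tail; Herwig's internal treatment may differ.

    import numpy as np

    def sample_smearing_angle(clsmr, rng):
        """Draw theta_smear with cos(theta_smear) = 1 + ClSmr * log(R),
        R uniform on (0, 1]; redraw if the cosine drops below -1."""
        while True:
            c = 1.0 + clsmr * np.log(rng.uniform(1e-12, 1.0))
            if c >= -1.0:
                return np.arccos(c)

    rng = np.random.default_rng(0)
    for clsmr in (0.0, 2.0):  # the minimal and maximal settings used in the text
        angles = [sample_smearing_angle(clsmr, rng) for _ in range(10_000)]
        print(f"ClSmr = {clsmr}: mean smearing angle = "
              f"{np.degrees(np.mean(angles)):.1f} deg")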
Fitted Models

The training history of the fit is presented in Fig. 2. As expected, the discriminator loss increases and the generator loss decreases, with a final value near log(2) (the classifier outputs 0.5 for all examples). As an independent evaluation of the model performance, we also compute the Wasserstein distance between the true and generated four-momenta in the lab frame that are used by the discriminator to update the generator. The Wasserstein distance is computed as the average over the first Wasserstein distance for each four-vector component with Scipy [48]. Interestingly, the best Wasserstein distance decreases for the first 1000 epochs, then plateaus for the next 3000 epochs, before dropping to its final value around 5500 epochs. There are many possible variations on the GAN training setup that could further improve the performance, and we plan to explore these in the future.

The direct inputs and outputs of the model are shown in Fig. 3. The generator produces two outputs per cluster, corresponding to the angles of one of the pions in the cluster rest frame. The fact that the initial GAN is so far from the final GAN is a non-trivial demonstration of the learning. Both GAN models match their respective truth Herwig spectra well. The marginal φ distribution is uniform, which is difficult for generative models to reproduce exactly. In the future, it may be possible to make this more precise by constructing the model to give a uniform marginal. After the clusters are decayed, the resulting hadron kinematic properties are Lorentz boosted to the lab frame and then aggregated over all clusters in the event. The second row of Fig. 3 shows histograms of the resulting hadron four-vectors, which are the inputs to the discriminator. We only show the energy E and the x momentum p_x, but similar trends hold for p_y and p_z. Since hadronization is a small correction for such inclusive observables, the kinematic properties are mostly set by the Herwig parton shower, which is the same for the Herwig and GAN lines in the plots (since the GAN takes the clusters from the parton shower as input). This is the reason why the initial GAN starts so close to the Herwig truth. However, the alternative Herwig sample differs significantly from the nominal Herwig sample, in particular in how the hadrons split the energy, which is most clearly seen in the tails of the energy and momentum distributions. The GAN model is an excellent match to the Herwig events across the full spectra.

Figure 4 goes beyond the direct inputs and outputs by studying derived, but measurable, quantities. The first plot in Fig. 4 is the number of hadrons. Since we restrict our attention to 1 → 2 decays only, the number of hadrons is an even number, with a mode of 12. It is not possible to uniquely pair observed hadrons with their partner from the same cluster decay, but we can approximate the combination using nearest-neighbor information. In particular, since the hadron masses are small compared to the typical cluster energy in the lab frame, the two hadrons tend to be close together in phase space. For each hadron, we assign as its neighbor the particle that minimizes ∆R² = ∆φ² + ∆η². A histogram of the resulting ∆R distribution is shown in the middle left plot of Fig. 4. The peak is at about 0.1, with most hadrons having a neighbor closer than 0.1. While there is some difference between the models in the ∆R distribution, the most distinguishing observable is the energy sharing between the hadrons in the reconstructed cluster (middle right of Fig. 4). The nominal Herwig sample has a more equal sharing of energy, while the alternative Herwig sample is much more asymmetric. The GAN models are able to match these trends, which both differ significantly from the initialized but untrained GAN model. Future GAN models could be improved by adding these features to the discriminator directly. Additionally, we consider properties of the hadrons in the reconstructed cluster frame (bottom row of Fig. 4). Since the reconstructed clusters are not exactly the true clusters, the φ and θ distributions do not exactly match the top row of Fig. 3, although they are qualitatively similar. The distribution of φ is more discriminating between the models; the GAN models perform well, except near the edge of phase space, where both GAN models match the nominal Herwig events.
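The nearest-neighbor pairing used for Fig. 4 can be sketched as follows; the toy kinematics are invented, and the convention for the energy ratio (smaller over larger) is our assumption about how the ratio is defined.

    import numpy as np

    def nearest_neighbors(eta, phi):
        """For each hadron, return the index of and distance to its nearest
        neighbor, minimizing dR^2 = dphi^2 + deta^2, dphi wrapped to [-pi, pi]."""
        dphi = np.abs(phi[:, None] - phi[None, :])
        dphi = np.minimum(dphi, 2.0 * np.pi - dphi)
        deta = eta[:, None] - eta[None, :]
        dr2 = dphi**2 + deta**2
        np.fill_diagonal(dr2, np.inf)  # forbid self-pairing
        return dr2.argmin(axis=1), np.sqrt(dr2.min(axis=1))

    # Toy event with invented hadron kinematics.
    eta = np.array([0.10, 0.15, -1.20, -1.10])
    phi = np.array([0.50, 0.55, 2.00, 2.10])
    energy = np.array([10.0, 8.0, 5.0, 4.0])

    idx, dr = nearest_neighbors(eta, phi)
    energy_ratio = np.minimum(energy, energy[idx]) / np.maximum(energy, energy[idx])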
A key advantage of this fitting protocol over other methods is that it can accommodate unbinned and high-dimensional inputs. It would be possible to replace our neural network discriminator (and cross-entropy loss) with a χ² fit to binned histograms, like the ones in Fig. 3 (bottom) and Fig. 4, which are all observable in the lab frame. However, this would be a highly non-trivial modification to our setup and would necessarily be less effective. Comparing with standard tools that process low-dimensional and binned inputs would likely be inconclusive, because we would not know whether the difference in performance comes from the tool or from the reduced information contained in the data. As a compromise, in order to quantify the information gained from using our discriminator setup, we use a set of auxiliary classifiers. Our nominal setup is represented by our discriminator trained on the same inputs as our GAN model to distinguish the two Herwig cluster model variations. The information content is represented by the area under the Receiver Operating Characteristic (ROC) curve, or AUC, which is a standard metric for information content. An AUC of 0.5 means there is no useful information, and an AUC of 1 means that the models can be exactly distinguished. For comparison, we also compute the AUC of the single observables in Fig. 3 (bottom) and Fig. 4. We do not bin these observables, to avoid arbitrary binning choices, and assume (which is conservative) that the bins of any actual measurement would be chosen to be maximally effective for this task. Technically, the AUC for single observables is computed by scanning over the observable to determine the true positive rate versus the false positive rate. Since a threshold cut may not be optimal for all observables, we have also checked how the results change if we train a simple Boosted Decision Tree (BDT) using sklearn [49]. We find that the BDT-based AUCs (including for the neural network as an observable) are consistent with the non-BDT ones. Numerically, the AUCs are as follows: neural network: 0.77, energy ratio (Fig. 4, middle right): ...

Figure 4. Top: the number of hadrons; since each cluster decays into two pions, the number of hadrons is an even integer. Middle: ∆R between a given hadron and its nearest neighbor in φ−η in the lab frame (left) and the ratio of energies between a given hadron and its neighbor (right). Bottom: the φ (left) and θ (right) of the hadrons in the reconstructed cluster frame.

Conclusions and Outlook

We have presented a setup for fitting deep generative hadronization models to data. The main challenge we have addressed is the lack of truth labels connecting partons and hadrons, which were used by previous deep generative hadronization models [12,13]. In order to address this challenge, we used a two-level Generative Adversarial Network (GAN) setup, where the generator acts at parton level and the discriminator acts at hadron level. Since there is no natural order to the hadrons, the discriminator is a classifier based on the Deep Sets architecture, which can process variable-length, permutation-invariant inputs. We have shown that we can fit this model to two variations of the Herwig cluster hadronization model. The GAN is able to reproduce Herwig well, with additional refinement and optimization required in the future to improve the precision further.
While this represents a significant step towards realizing a deep generative hadronization model, there are still other aspects to address. We have restricted our attention to pions, but a complete model will need to generate the full spectrum of hadrons in addition to kinematic information. Additionally, we have started from clusters decaying to two hadrons, while in reality more complex arrangements are possible. In fact, we ran a test fitting the string model in Pythia using our setup (for this, we used exactly the same partons as in the Herwig dataset and ran the string model in Pythia, modified to only produce pions), but the cluster model is not flexible enough. Modifications that allow for more general parton-to-hadron mappings, including variable-length generation [24-30, 50], will be required in the future. In particular, we would not take pre-confinement as a starting point and would instead also model the combination of partons with a neural network (partons to hadrons instead of clusters to hadrons). Such a model would have the capacity to mimic the cluster or string models as well as to go beyond either model. Such an architecture could be swapped in for our generator, using the same GAN setup to do the final fit.

Once we have a full model, there is the question of which data to use for the fit. Traditionally, hadronization models have been fit to histograms (binned differential cross section measurements) from e+e− data using tools like Professor [51] and other automated tuning protocols [52-54]. However, these approaches may need to be modified, since the parameter space of the models is much bigger. One possibility is to use a variation of Unbinned Profiled Unfolding (UPU) [55], which uses histograms to steer neural networks with a two-level fit for unfolding. The reweighting function in UPU could be replaced with the hadronization model. Another possibility is to start with unbinned data, as is now possible with machine learning-based unfolding methods [21, 56-65]. There are also now first unbinned cross section measurements [66-70], although none are currently published without binning [56]. There are not yet any unbinned measurements from e+e−, but results from deep inelastic scattering may be effective, since they share many of the features of e+e− that make them particularly clean with respect to hadron colliders.

While there are still multiple components needed to arrive at a complete ML-based hadronization model, the program ahead is well-motivated. Current models are excellent, but the additional flexibility of neural networks will allow us to improve the precision of hadronization modeling for precise measurements that are affected by these uncertainties. With improvements in machine learning models, it may also be possible to use these tools to learn more about hadronization itself, which remains a key research topic in nuclear physics.
Paths to Internationalisation of Small and Medium-sized Enterprises in Ukraine

Objective: The defining objectives of the study are the following three. First, to classify the level of real development of internationalisation in Ukrainian SMEs. To accomplish this goal, the authors summarise the theory and methodology defining the field of internationalisation. They also present their own empirical studies on Ukrainian small and medium-sized enterprises (SMEs). Second, they analyse the main forms, motivational factors and barriers to the internationalisation of Ukrainian SMEs. Finally, they discuss the development of strategies and mechanisms for activating the internationalisation of SMEs in Ukraine.

Research Design & Methods: The study used qualitative research, including a survey of the literature and the authors' own empirical research (detailed interviews with Ukrainian entrepreneurs, a survey questionnaire, and expert assessments by scientists from Ukraine and Poland on this subject).

Findings: The present condition of the internationalisation of Ukrainian small and medium-sized enterprises is assessed, and its obstacles and motives have been identified. In the context of the European Union's experience in regulating small and medium-sized businesses, the most promising strategies and sectors of the economy have been classified. It is in these sectors that Ukrainian small and medium-sized enterprises should gain the greatest competitive advantages. Lastly, mechanisms for promoting the internationalisation of Ukrainian SMEs are defined.

Implications / Recommendations: In modern conditions the internationalisation of Ukrainian small and medium-sized enterprises is at an early stage; it does not currently extend substantively beyond export-import operations. In Ukraine, enterprises that internationalise tend to do so upon or near their inception. The main approach to stimulating the internationalisation of Ukraine's SME sector is to develop an effective state regulatory policy adapted to the contemporary norms and standards of the European Union. The results of the study have shown that Ukrainian SMEs can gain their greatest competitive advantages in the global market in industries including agriculture (primarily "eco" and "green" production), alternative and renewable energy production, and IT technologies.

Contribution: The research fills a gap in the field of the internationalisation of Polish and Ukrainian small and medium-sized enterprises.
Introduction and Research Methods The evolution of the market system explicitly demonstrates that the internationalisation of business structures across countries, as a voluminous and multi-level process of those structures interacting commercially, involves the formation of necessary prerequisites, primarily within domestic national markets. The signing in 2014 of the Agreement on a Deep and Comprehensive Free Trade Area (DCFTA) between the European Union and Ukraine opens up wide possibilities for integration activities for enterprises in Ukraine, including small and medium-sized enterprises (SMEs). However, for the practical implementation of the integration, Ukrainian enterprises must themselves meet certain parameters of modern European economic policy (and of the modern world economy as a whole). That is, enterprises in Ukraine must meet not only the conditions of the Small Business Act for Europe, but also the current strategy of a united Europe, "Europe 2020". The internationalisation of Ukrainian SMEs is an imperfect process due not only to the insufficient resource potential of such enterprises, but also to the failure of the state to fulfil its role as the supreme institutional subject of the economy. Consequently, the main direction of stimulating the internationalisation of Ukrainian small and medium-sized enterprises and increasing its socio-economic impact is the development of an effective state regulatory policy governing SMEs adapted to the EU's current norms and standards. This is the hypothesis of this paper. The article foregoes presenting the well-known theory on the development of SMEs, but does identify, using empirical studies conducted by the authors, the real problems and prospects of internationalisation of Ukraine's SMEs. The objectives were the following. First, to classify the level of real development of internationalisation in the country's small and medium-sized business sector. This will be accomplished by briefly summarising the theoretical and methodological material existing in the literature and presenting the authors' own empirical studies of Ukrainian small and medium-sized enterprises. Second, the article examines the main forms, motivational factors and barriers of internationalisation of Ukrainian SMEs. Finally, it discusses the initial basis for developing prospective strategies and mechanisms behind the activation of internationalisation for Ukraine's SMEs. It should be noted that the problems facing the study of economics in Ukraine have, due to a combination of factors, only begun to receive systematic research attention. Organisational, economic, financial, and managerial aspects of the internationalisation of SMEs in Ukraine remain relatively insufficiently explored. This can be attributed to the lack of real applied empirical elaborations and predictive proposals. Issues surrounding the adaptation of legislation, technical norms and quality standards into the practice of Ukrainian enterprises have been studied by R. Džabraělov and V. Nověkova (2015, pp. 36-39), Epifanova (2009, pp. 211-215), G. Leh, M. Ělʹčišin and O. Turkalo (2011, pp. 224-229) and V. Lyashenko, A. Tolmachova and O. Kvilinskyi (2016, pp. 155-164). In other research, Orekhova and Koshelenko have considered the development of SMEs in the conditions of internationalisation of the Ukrainian economy (Orekhova & Koshelenko 2010, pp. 170-176). In this context, attempts to clarify the harmonisation of Ukrainian legislation with the norms and standards of the European Union should be noted (Glinkowska & Chebotarov 2016, pp. 
153-164). Examples of a systematic analysis of the internationalisation prospects of small and medium-sized businesses in Ukraine in modern conditions are rare. On a positive note, however, they are based on a study of Polish-Ukrainian cooperation (Glinkowska 2018), which profiled the entrepreneurial culture of managers in Poland and Ukraine (Glinkowska & Chebotarov 2018, pp. 63-74; 2019, pp. 75-86). The main methods used for the paper were qualitative research methods. The paper looks at the history of SMEs in Ukraine as well as the motivational factors and barriers to internationalisation of Ukrainian small and medium-sized enterprises in modern conditions. In addition to these qualitative factors, empirical research was also conducted. It was based on a questionnaire administered at Ukrainian small and medium-sized enterprises that have already gone through the process of internationalisation, as well as detailed interviews with Ukrainian entrepreneurs and expert evaluations among scientists from Ukraine and Poland on this subject. This article was prepared on the basis of developments of the Center for Research Cooperation Poland-Ukraine. The Center was created in 2016, in a collaboration between the Department of Management of the University of Lodz (Łódź, Poland) and the Department of Economics, Marketing and Entrepreneurship of the Luhansk Taras Shevchenko National University (Starobilʹsʹk, Ukraine). The basis of the cooperation was a framework agreement between the respective universities, signed in 2014. The empirical research was carried out directly by the authors in Ukraine in the years 2015-2019, and included the following components. First, a field study was conducted on a sample of 75 small and medium-sized Ukrainian enterprises that already cooperate with foreign entrepreneurs. Overall, 2.6 times that number of Ukrainian SMEs were interviewed, yielding a total sample of 197. However, only 75, or 38.1% of the 197 overall, had real experience in international relations. The rest (61.9%) only had plans to expand internationally. While they had concluded protocols of intent with prospective partners and/or had made or received certain commercial offers, they had not yet begun to implement them. In keeping with "experimental purity", the sample of 75, rather than 197, is used in the calculations below. An analysis of the data of the Main Directorates of the State Statistics Service for the regions in which the empirical studies were conducted showed that in these regions the number of SMEs officially participating in foreign economic activity was only 549. The sample thus covers about 13.7% (75 of 549) of this population, a relatively high level of representativeness. The SMEs surveyed (by industry sector) were in agriculture, trade, construction and installation, tailoring and automotive servicing. In terms of their resource parameters and their approach to business, the enterprises selected for the empirical research are typical of their industries. They had been on the market for between 3 and 7 years. In terms of location, the enterprises covered all the large regions of Ukraine: the southeast (the Luhansk and Donetsk regions, both of which are under the control of the Ukrainian government and are not directly involved in the military conflict in the Donbas); central Ukraine, including Kyiv; and west Ukraine, including the Lviv region. To ensure validity, the 75 enterprises represent these four regions of the country in approximately equal proportions. 
Thus, the empirical research covers all of Ukraine's main regions, enabling us to take into account, in addition to the economic characteristics of SMEs in Ukraine, the complex of the country's institutional features. There were three main methodological instruments used in the research: an ad hoc survey questionnaire form; a standardised interview questionnaire form, which gave interviewees the opportunity for free expression and comments; and detailed interviews conducted with several groups of experts in Ukraine. The first group of experts consisted of representatives of various levels of state power (district - poviat, region - voivodeship and the Verkhovna Rada of Ukraine - Parliament), whose responsibilities include implementing state regulatory policy associated with entrepreneurship. The second group of experts comprised business coaches and analysts specialising in small and medium-sized business projects (in particular ones associated with the leading Ukrainian school of business at the Kyiv-Mohyla Academy as well as international philanthropic organisations providing grant support to small and medium-sized businesses in the Luhansk and Donetsk regions - Mercy Corps and FAO, the UN Food and Agriculture Organization). The third group of experts comprised representatives of the scientific community whose research interests include the development of SMEs. In particular, the Polish-Ukrainian seminar "Theoretical and practical problems of management in Ukraine in the context of the implementation of the European integration course" was held on the basis of the Institute of Industrial Economics of the National Academy of Sciences and the Academy of Economic Sciences of Ukraine. The official data and reports from the State Statistics Service of Ukraine and the information provided in the documentation of the Kyiv and Luhansk oblasts on the status of the issues selected for research were also investigated (e.g. the state of exports and imports, foreign direct investment, the number of international projects carried out, etc.). The basic research questions were: how do SMEs enter foreign markets? What are their strategies and motives, and what are the barriers to internationalisation? The SME Sector in Ukraine: How It Functions and the Role It Plays in the Country's Economy The regulatory and legal basis for conducting business in Ukraine by small and medium enterprises is the Commercial Code of Ukraine, adopted in 2003 (https://ips.ligazakon.net/document/view/T030436?bl=, accessed: 13 February 2020). The classification criteria adopted in Ukraine concerning employees and revenue volume are the same as in Poland and the EU. Hence, in this respect (as well as in relation to the establishment of criteria for medium and large businesses), the legislation of Ukraine has already been harmonised with the legislation of the European Union (Glinkowska & Chebotarov 2016, pp. 148-158). No solid studies have been done on the internationalisation of enterprise activities in Ukraine (their strategies, forms, motives, barriers) or on models for the internationalisation of small and medium-sized businesses. Filling this gap, this study presents a classification of the actual condition and key problems of SMEs in Ukraine in the context of developing internationalisation policy. It also identifies the most important prerequisites and mechanisms for the practical implementation of such policy. Entrepreneurship is an important and integral component of a market economy. It mobilises resources and accelerates the pace of development. 
The level of entrepreneurship in each country depends on the number and condition of its enterprises. SMEs are the core of most market economies and in many economically developed countries produce approximately 50% of GDP (GUS: https://stat.gov.pl/obszary-tematyczne/podmioty-gospodarcze-wyniki-finansowe/, accessed: 13 February 2020). This may portend a positive climate for investing in Ukraine. Entrepreneurs in Ukraine face a number of barriers to the development of their own businesses. The barriers are mainly related to a lack of equity and relatively high interest rates on loans (up to 24%), making it difficult to finance foreign capital in the form of loans. There is also a lack of significant support from the government and an unfavourable economic and political situation. Two of Ukraine's oblasts (regions), Donetsk and Luhansk, face the most difficult straits, mainly due to the military operations being conducted there. The unstable operating conditions SMEs face are why about 80% of them end their operations within one to two years of setting up shop (Chukhray 2013, pp. 45-60). According to data from the official website of the State Statistics Service of Ukraine, almost 40% of small enterprises are barely profitable or entirely unprofitable. Enterprises in the construction industry and general industry are the least profitable. The employment rate in small enterprises has also fallen from year to year (State Statistics Service of Ukraine, http://www.ukrstat.gov.ua/, accessed: 13 February 2020). There are a number of reasons for the fall: the introduction of new technologies, high taxes, the robust "shadow economy" and the low profitability of enterprises and the related need to reduce costs. It is also worth noting that a fair number of small and medium-sized enterprises are known to artificially lower the profitability they declare. In terms of successes, relatively small companies operating in agriculture, trade and the service industry are the most profitable. Furthermore, there has been no further deterioration in the profitability indicators of small and medium-sized enterprises. Since the global crisis (2008 to the present), product sales value and average annual salaries have improved, while small enterprises have generally increased operating profitability (State Statistics Service of Ukraine, http://www.ukrstat.gov.ua/, accessed: 13 February 2020). The robust grey zone, corruption and the monopolisation of the economy by large state enterprises together act as both a burden on and, paradoxically, a spur to small and medium-sized business in Ukraine: these obstacles to running a business are often in practice motives for finding better conditions for doing business beyond Ukraine and for internationalising the business. The Definition and Contemporary Forms of Internationalisation in Small and Medium-sized Business Internationalisation is one way an enterprise can develop. Going international is often a response to a lack of sufficient opportunities to develop business activity within a home market. Often, the economic opportunities are better abroad. Internationalisation comes in response to the existence of specific barriers in a country, the presence of specific motives, and the development of individual action strategies. In this paper, we have identified the main problems with and necessary prerequisites for the internationalisation of small and medium-sized enterprises in Ukraine in the mid-term perspective. 
One of the aims of this study was to compare the definitions of internationalisation used in the subject literature. Table 1 breaks down the results of the answers offered to the questionnaire. Clearly, internationalisation is defined in multiple ways. Most often, however, the process is understood as the internationalisation of operations, then as the export and import of goods, international cooperation, and the expansion of activities to foreign markets. Similarly, there is a multitude of views in the literature, attesting to the complexity of the issue. A dominant one is internationalisation understood as any economic activity undertaken by an enterprise abroad (Rymarczyk 2004, p. 19), or as a company's commitment to international activity (Johanson & Vahlne 1977, p. 26; Przybylska 2005, p. 73), or the export and import of products / raw materials or the transfer of production outside of the home country (Pietrasieński 2005, p. 15). Internationalisation is certainly a dynamic process (Glinkowska & Kaczmarek 2016a, pp. 20-27; Welch & Luostarinen 2013, p. 95). It applies both to domestic companies with assets located in only one country and to those with assets in two or more countries (Pierścionek 2011, p. 359). As Jarosiński points out, this is a process in which an enterprise enters into relations with other entities to pursue its strategic goals (Jarosiński 2013, p. 19), and is beneficial for the enterprises.

Table 1. Respondents' definitions of internationalisation (number of responses):
- Internationalisation of business activities: 27
- Exporting and importing goods indirectly (in one's own country with intermediary companies) and directly (with companies from abroad, in one's own country and abroad): 22
- Cooperation between enterprises from different countries (international cooperation): 18
- Expanding into foreign markets (host markets): 4
- Conducting business on foreign markets (direct foreign investment): 4
Source: the authors, Ukraine, 2019.

The essence of internationalisation is, therefore, conducting business in cooperation with enterprises from abroad as well as through direct foreign investments. Enterprises operating in Ukraine apply various forms of internationalisation. The term form here should be understood as the method of organising cooperation (co-production, cooperation), or of commencing operations on foreign markets or with entrepreneurs from outside the home country (Wach 2008, pp. 50-53). Table 2 lists the different forms. The enterprises surveyed most often use exports (indirect and direct: 58), followed by imports (indirect and direct: 32) and production cooperation / co-production (22). Imports and exports are relatively simple forms that do not require large financial outlays, bringing immediate positive economic results. Cooperation and strategic alliances require well-developed contracts, trust and close common effort within the framework of joint projects. They, too, often yield positive economic results, but only after some time, making them less attractive for enterprises with limited equity. In addition, more complex forms require greater company involvement abroad and a higher scope of oversight (Rymarczyk 2004, p. 156), both of which require more time and financial resources. For Ukrainian small and medium-sized businesses, the lack of sufficient capital for FDI or for the development of activities on their own market weighs heavily. Ukrainian companies face fierce, unformalised competition at home, due mainly to the fact that many Ukrainian companies engage in unregistered ("grey zone") activities. 
The low operating costs such companies enjoy are reflected in the low market prices they take for their products and services - prices well below what registered entities operating legally are able to take. This leaves the latter companies less competitive at home but more motivated to seek opportunities outside of their domestic market. Export and import are also sometimes the basis for the development of cooperation or a strategic alliance. Motives for and Barriers to Internationalisation for Ukrainian SMEs Motives and determinants of and barriers to internationalisation should be considered in the light of the following:
- operating and development conditions on the home market (Glinkowska & Kaczmarek 2016b, p. 125),
- the possibilities of functioning and developing on a foreign, or host country, market (Glinkowska & Kaczmarek 2016b, p. 125),
- the strength, quantity and quality of assets under control,
- the number, strength and quality of the company's weaknesses.
Addressing the above issues requires the company to carry out a SWOT analysis, an analysis of strengths, weaknesses, opportunities and threats in the environment (Gierszewska & Romanowska 2017, p. 113), of both the home and host countries. Such analyses are advisable both for the company as a whole and for the products it offers. The SWOT analysis carried out in the process of studying Ukrainian enterprises, and the further processing of the materials on the results of the surveys of experts / analysts, showed the following. Among Ukrainian SMEs, several weaknesses dominate. These include the lack of own capital and the lack of modern advanced technologies (in resources, marketing and management), which owners of small and medium-sized businesses especially emphasised in the course of our survey. The companies surveyed listed strengths including high-quality products and / or services, low production costs and traditional recipes. These strengths were of great importance to the entrepreneurs. The threats on the home market included: minimal external financing, the large "grey zone", unfavourable legal regulations for small and medium-sized businesses, the favouring of large companies at the expense of their smaller counterparts, the low incomes in Ukraine, and hostilities and the attendant lack of favourable conditions for trade in some regions (Donetsk, Luhansk). On foreign markets, on the other hand, threats include complicated regulations, the need to make costly adjustments to meet the requirements of a given market and high financial penalties for legal irregularities. Opportunities on the markets of host countries are primarily seen in liberalised conditions for trade and globalisation processes. In the light of the SWOT analysis, Table 3 contains the most important motivations for internationalisation among Ukrainian SMEs, while Table 4 breaks down the barriers. The analysis of the data in Table 3 shows that the main motivations for the internationalisation of activities for small and medium-sized enterprises in Ukraine are: increased revenues (75 responses), limited opportunities to sell their products in the domestic market (67), opportunities to develop operations outside one's home country (52), and unfavourable legal regulations on one's home market (48). The motives for the internationalisation of Ukrainian small and medium-sized businesses result primarily from the inability of entrepreneurs to develop their own business in the Ukrainian market and the related lack of opportunity to turn a decent profit. 
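One practical note on the tabulation itself: because respondents could tick multiple motives, the counts in Table 3 sum across mentions rather than respondents (75 + 67 + 52 + 48 exceeds the 75 enterprises surveyed). A minimal sketch of such a multi-response tally in Python, with hypothetical column names and example answers:

```python
import pandas as pd

# Hypothetical raw survey data: one row per enterprise, with a multi-select
# field listing every motive the respondent ticked.
responses = pd.DataFrame({
    "enterprise_id": [1, 2, 3],
    "motives": [
        ["increased revenues", "limited domestic sales"],
        ["increased revenues"],
        ["increased revenues", "unfavourable home regulations"],
    ],
})

# Explode the multi-select answers and count mentions per motive; because
# multiple choices were allowed, the counts can sum to more than the number
# of respondents, as in Table 3.
counts = responses.explode("motives")["motives"].value_counts()
print(counts)
```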
Ukrainian small and medium-sized enterprise entrepreneurs are open to international cooperation, but lack information on the legal, economic and cultural opportunities for developing operations in host markets. They also lack adequate equity, and fear that potential partners may be unreliable, which hinders their foreign expansion. The present research leads to the conclusion that the entrepreneurs do not believe that ignorance of a given foreign language or transport costs are crucial barriers. Prospective Strategies and Mechanisms for Activating Small and Medium-sized Business in Ukraine A strategy concerns a well-thought-out, relatively permanent action plan and guidelines for dealing with the implementation of organisational goals and the shaping of organisational identity (Obłój 2007, pp. 325-337). The research we carried out in 2015-2019 allowed us to conclude that small and medium-sized companies in Ukraine usually use four basic types of strategy (multiple choices were possible): competing on price (75); competing on quality (59); competing with the use of technology (39); and specialised activities (31). A single company can deploy all of these strategies, though they most often compete on price and quality. A low-price strategy is possible thanks to low production costs (lower than in other countries, e.g. Western Europe). Competing with technology is understood here as support for activities based on traditional technologies (including hand-made products) and old recipes. For enterprises which produce mainly food supplies, a main strength is that no changes are made to either their technologies or their recipes. Strategies such as competing on costs, knowledge or competences, or business diversification, have marginal significance in the cases analysed for this paper. Such strategic behaviour may result from the desire to place one's own products / services in countries with a higher level of technology. Doing so allows a company to develop and benefit from a higher margin on its product. Thirty of the enterprises surveyed for this research indicated as much. The enterprises rarely make comprehensive strategic analyses of their own operations, lacking the time and money to do so. They cannot afford to reorganise their structures in order to create positions or organisational units dealing with foreign expansion. They are dominated by large companies and often by small and medium-sized companies operating in the black market. Based on the results of the research done on the sample of small and medium-sized Ukrainian enterprises, conclusions can be drawn as to the basic and most frequent ways they internationalise. It seems that the aspects that have the greatest influence on the way of entering the foreign market are: motives and barriers to internationalisation, as well as the forms and strategies of this process. The surveyed enterprises do not internationalise in stages. In general, small businesses in Ukraine do not follow a phased model of internationalisation; among medium-sized companies, phased internationalisation is a somewhat more frequent phenomenon. After the initial stage of production and commercial activities, a significant part of Ukrainian small and medium-sized enterprises begins to look for opportunities to enter the foreign market. Again, this is mainly due to the lack of sufficient opportunities to develop entrepreneurial activity on the home market, combined with the illegal operations of some small and medium-sized businesses and the lower operating costs they enjoy. 
Thus, they exhibit features of the born global model (global from the beginning). This means that the company's first foreign contacts are established during the initial phase (mostly through acquaintances, friends and family members). Fast and early internationalisation is bolstered by the skills and charisma of business owners and managers. Small and medium-sized companies relatively often establish cooperation with companies from countries neighbouring Ukraine, often in the form of commodity turnover (export-import) and production cooperation. These are enterprises that usually do not have enough capital to create FDI, and their activity is often based not on innovative projects, but on products and services that compete on price. However, they do not rule out strategic alliances as a complex form of internationalisation and have no concerns about the quality of their own products. The research carried out by the Center for Research Cooperation Poland-Ukraine was done in part to offer practical recommendations. Using our preliminary results, in 2018 we presented our proposals to the Starobelsk State District Administration of the Luhansk region and to the Committee on State Building, Regional Policy and Local Self-Government of the Verkhovna Rada of Ukraine. The proposals were intended to optimise state regulatory policy in the field of small and medium entrepreneurship and to attract international investment into the country's economy. They were accepted for implementation into the state's regulatory policy. From these institutions, the Center for Research Cooperation Poland-Ukraine received proposals for the continuation and deepening of the research. These concerned the adaptation in Ukraine of EU policy on regulating the activities of small and medium-sized enterprises; improving interaction between the local, regional and state authorities in relation to small and medium-sized businesses; and suggesting recommendations on cross-cultural communication between the business communities of Poland and Ukraine. Research Conclusions The results of the study allow us to draw the following conclusions. The theoretical, methodological and applied practical research suggests that the internationalisation of Ukrainian small and medium-sized enterprises is at an early stage. It does not go much beyond export-import operations. This conclusion, based on the presented theoretical and empirical studies, is confirmed by the general assessment of the level of development of integration processes that is characteristic of modern Ukrainian economic science. At the same time, a certain lack of practical information about the potential host country and the absence of sufficient own equity are the main obstacles faced by small and medium-sized Ukrainian companies seeking to internationalise. A lack of knowledge of foreign markets is yet another significant barrier to entering host markets. The unfavourable conditions for conducting business in Ukraine (especially for small and medium-sized businesses) are the main determinant pushing companies to go international and seek opportunities beyond Ukraine. The entrepreneurship and determination of business owners (and managers) also play a role. When small and medium-sized Ukrainian companies internationalise, they tend to do so early in their existence, or even immediately. This is a response to the lack of opportunities available in Ukraine and their inability to compete with their grey-market counterparts. 
The development of effective state regulatory policy governing SMEs, adapted to the contemporary norms and standards of the European Union, will be the main stimulant of internationalisation and the main boost to its socio-economic impact. The results of this study have shown that Ukrainian small and medium-sized enterprises can gain their greatest competitive advantages in the global market in industries including agriculture (primarily, "eco" and "green"), alternative and renewable energy production and IT technologies. The potential capacity of the world economy (and of a united Europe in particular) suggests that integrating Polish and Ukrainian small and medium-sized enterprises would be expedient. Both the proximity of the national business cultures of Poland and Ukraine, and the profiles of modern Polish and Ukrainian managers, would also be conducive to successful collaboration between the two countries. Ukraine's state regulatory policy governing SMEs would be well advised to adopt a set of measures and mechanisms to stimulate internationalisation. Such measures could include the following four: forming a favourable institutional environment for micro-enterprises and SMEs; considering the needs of SMEs; simplifying access to financial resources; and optimising SME tax policy. If state regulatory policy is improved and favourable institutional prerequisites for the internationalisation of Ukrainian SMEs are created, the prospects for developing the various forms and avenues of SME economic activity may reach a qualitatively new level. The same may be said for the sector as a whole.
Rapid-kinetics degron benchmarking reveals off-target activities and mixed agonism-antagonism of MYB inhibitors Attenuating aberrant transcriptional circuits holds great promise for the treatment of numerous diseases, including cancer. However, development of transcriptional inhibitors is hampered by the lack of a generally accepted functional cellular readout to characterize their target specificity and on-target activity. We benchmarked the direct gene-regulatory signatures of six agents reported as inhibitors of the oncogenic transcription factor MYB against targeted MYB degradation in a nascent transcriptomics assay. The inhibitors demonstrated partial specificity for MYB target genes but displayed significant off-target activity. Unexpectedly, the inhibitors displayed bimodal on-target effects, acting as mixed agonists-antagonists. Our data uncover unforeseen agonist effects of small molecules originally developed as TF inhibitors and argue that rapid-kinetics benchmarking against degron models should be used for functional characterization of transcriptional modulators. Introduction Transcription factors (TFs) establish cell states by directly interpreting the cis-regulatory code of the genome. 1,2 TF dysregulation by mutation or aberrant expression underlies numerous diseases, and is a hallmark of cancer. 3,4 In principle, the highly specific roles of TFs in enforcing developmental and disease phenotypes make them ideal targets for drug development. [5][6][7] However, direct drugging of TFs remains challenging despite extensive efforts. 5,6 A central problem in these efforts is a lack of detailed mechanistic understanding of TF function and, as a result, no clear consensus on how the functional output of a TF should be measured for drug characterization purposes. 5 Recently, direct gene-regulatory functions of TFs have been established in pre-steady-state assays where rapid TF degradation is coupled with measurements of genome-wide transcription rates. [8][9][10][11][12][13] These studies have demonstrated that TFs have narrow direct transcriptional programs and that long-term TF deprivation (e.g. after a CRISPR/Cas9-mediated knockout) leads to significant secondary effects obscuring a direct functional readout. 8,11 We reasoned that the specificity and on-target activity of TF inhibitors would be best evaluated in a rapid-kinetics system where their immediate transcriptional effects are benchmarked against targeted TF degradation (Figure 1A). In particular, since expression of many TFs rapidly changes in response to various external stimuli 3,5,14-16, a rapid-kinetics assay is necessary to distinguish the direct effects of an inhibitor on its target TF from the secondary effects resulting from a potential change in expression of the target TF. Focusing on MYB, an oncogenic TF driver of multiple cancers [17][18][19][20][21][22][23][24] and an emerging therapeutic target 5,25-27, we engineered a chemical degron model and established the direct gene-regulatory functions of MYB in a nascent transcriptomics assay. By benchmarking the nascent transcriptomics signatures of six MYB inhibitors against the degron we uncovered their off-target effects and unexpected mixed agonism-antagonism of their on-target activities. 
A degron model reveals direct gene-regulatory functions of MYB MYB is a critical transcriptional dependency of acute myeloid leukemia (AML) 11,18,28,29, where it has been a long-term focus of therapeutic efforts [30][31][32][33][34][35][36][37], making AML a relevant context for functional characterization of MYB inhibitors. We therefore began by engineering a chemical degron model of MYB in AML cells. We fused the C-terminus of MYB with an FKBP12(F36V) (dTAG) domain and a fluorescent tag by a homozygous knock-in of the FKBP12(F36V)-mScarlet-coding DNA sequence into the endogenous MYB locus in MV411 cells (Figure 1B,C). The resulting fusion protein was nearly completely degraded after a 1-hour treatment with dTAGV-1, a highly specific VHL-engaging PROTAC 38 (Figure 1D). As expected, degradation of MYB resulted in a profound loss of cell viability, consistent with the effects of a genetic MYB knockout in AML cells 11 (Figure 1E). To establish MYB's direct gene-regulatory functions, we measured genome-wide rates of nascent mRNA synthesis by thiol (SH)-linked alkylation metabolic sequencing of RNA (SLAM-seq) 39 after a 1-hour MYB degradation. Defining direct targets as those genes which displayed significant changes in transcription rates (FDR<0.05) 8, we detected 450 genes directly regulated by MYB (Figure 1F). Of these, 319 genes were downregulated and 131 genes were upregulated, indicating that MYB acts as a transcriptional activator and repressor of these genes, respectively. Degron benchmarking establishes target specificity of MYB inhibitors In parallel, we performed SLAM-seq in AML cells treated with six agents reported as MYB inhibitors: MYBMIM 31, celastrol 34, naphthol AS-E phosphate 40, mebendazole 32,41,42, plumbagin 33 and all-trans retinoic acid (ATRA) 43 (Figure 1A, Table 1). Although each is described as an inhibitor of MYB, the agents act through different mechanisms. MYBMIM, celastrol, plumbagin and naphthol AS-E phosphate inhibit MYB function by disrupting its interaction with the critical co-activator p300 and are therefore expected to cause an immediate dysregulation of the MYB target genes. In contrast, ATRA acts indirectly by decreasing MYB expression and thus its immediate transcriptional effects are expected to be distinct from the effects of acute MYB loss. Although the direct target of mebendazole has not been identified, it appears to target MYB for proteolytic degradation with prolonged exposure and may act through additional mechanisms. 42 For comparison, we treated MV411 cells with the BET bromodomain inhibitor JQ1 44, which has been reported to indirectly inhibit MYB by interfering with its expression and function 45. In addition, given that MYB appears to directly activate the expression of MYC 28 (Figure 1F), we included in the comparison two MYC inhibitors, KI-MS2-008 46 and MYCi361 47. MYBMIM, a cell-permeable peptidomimetic 31, was applied for 30 minutes while all other inhibitors were applied for 1 hour prior to SLAM-seq. The MYB inhibitors displayed a dramatic variability in the number of dysregulated genes, varying from 19 (naphthol) to 1123 (MYBMIM; Figure 2A). Consistent with a prior report 8, JQ1 caused widespread and bimodal effects, altering the transcription rates of >2000 genes in both directions. On pairwise overlap, the MYB inhibitors captured a relatively minor portion of the direct MYB program (between 5-155, or 1-34%, of the 450 direct MYB targets), compared with 43% of the MYB program captured by JQ1 (Figure 2B). 
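The direct-target definition and overlap statistics above amount to simple set operations on differential SLAM-seq output. The following is a minimal pandas sketch of that logic, not the authors' code; the file and column names ("log2fc", "fdr") are hypothetical, while the FDR < 0.05 cutoff is the one stated in the text:

```python
import pandas as pd

# Hypothetical SLAM-seq differential table: one row per gene, with the log2
# fold change in nascent transcription rate and FDR (degron vs. DMSO).
degron = pd.read_csv("slamseq_degron_vs_dmso.csv", index_col="gene")

# Direct MYB targets: genes with a significant change in transcription rate.
direct = degron[degron["fdr"] < 0.05]
myb_activated = direct[direct["log2fc"] < 0]  # transcription drops on MYB loss
myb_repressed = direct[direct["log2fc"] > 0]  # transcription rises on MYB loss
print(f"{len(direct)} direct targets: "
      f"{len(myb_activated)} MYB-activated, {len(myb_repressed)} MYB-repressed")

# Pairwise overlap of each inhibitor's responsive genes with the degron program.
inhibitor_files = {"MYBMIM": "slamseq_mybmim.csv", "celastrol": "slamseq_celastrol.csv"}
for name, path in inhibitor_files.items():
    inh = pd.read_csv(path, index_col="gene")
    hits = set(inh.index[inh["fdr"] < 0.05])
    captured = hits & set(direct.index)
    print(f"{name}: {len(hits)} genes affected; "
          f"{len(captured)}/{len(direct)} direct MYB targets captured "
          f"({100 * len(captured) / len(direct):.0f}%)")
```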
Nonetheless, the MYB inhibitors displayed a stronger specificity for MYB target genes compared to JQ1, because they elicited much narrower responses (Figure 2C). Surprisingly, ATRA, which, as expected 43, directly inhibited MYB transcription, also demonstrated a strong enrichment for primary MYB targets (Figure 2C), perhaps due to cooperation between MYB and the retinoid receptors at the target gene level 48. The MYC inhibitors were also enriched for the MYB targets but displayed extremely narrow programs (Figure 2A,C). Overall, approximately half of the direct MYB program (249 of 450, or 55%, of target genes) was captured by any combination of MYB inhibitors, while 86 targets were affected by 2 or more MYB inhibitors (Figure 2D,E). We considered the possibility that, because the definition of a primary MYB target depends on the significance cutoff, some genomic effects of the MYB inhibitors may be falsely classified as off-target if the same genes fell just below the significance level in the degron SLAM-seq. We found 1610 genes whose transcription rates were affected by at least one MYB inhibitor but were unchanged after MYB degradation (Figure 2D). A significant majority of these genes (1171, 72%) displayed less than a 25% net change in the transcription rate after a near-complete MYB degradation (Figure 2F), thus likely representing bona-fide off-target effects of the MYB inhibitors. We conclude that, while MYB inhibitors display strong enrichments for primary MYB targets, they do not attenuate the entire MYB transcriptional program, and significant portions of their activities appear to be off-target. MYB inhibitors act as mixed agonists-antagonists Having established target specificities of the MYB inhibitors, we sought to characterize their on-target functional outputs. Contrary to the effects of MYB degradation, the MYB inhibitors generally activated more genes than they repressed (Figure 2A). In addition, we observed generally weak correlations between the MYB degron and inhibitor responses, both transcriptome-wide and in the space of the confirmed direct MYB targets (Figure 3A,B). Plotting the transcriptional responses to MYB inhibitors against the degron-induced changes at overlapping target genes revealed that the inhibitors displayed bimodal effects on the transcription of MYB-regulated genes, further activating subsets of genes that were repressed by MYB degradation, and vice versa (Figure 3C). We further reasoned that the bimodal activities of the inhibitors may be distributed unevenly across ontologically defined groups of inhibitor-responsive genes. Indeed, utilizing comparative pathway enrichment analysis we detected distinct MYB-regulated pathways where the action of MYB was either agonized or antagonized by the inhibitors (Figure 3D,E). We conclude that MYB inhibitors augment, rather than reverse, the gene-regulatory functions of MYB at a subset of MYB-regulated genes and pathways, thus acting as mixed agonists-antagonists. Discussion TFs control gene expression through complex interactions with chromatin, which, in addition to DNA, includes numerous cooperating TFs, cofactors and structural proteins 1,2. The molecular mechanisms linking TF binding to DNA to a change in gene expression remain poorly understood, but it is clear that a TF's functional output depends exquisitely on the local chromatin context 2,8,9,11. 
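Schematically, the off-target filter described in the Results above (inhibitor-responsive genes that shift by less than 25% after near-complete MYB degradation) and the agonist-antagonist calls reduce to a sign-concordance comparison against the degron benchmark. A sketch under the same hypothetical table layout as above; the paper's exact statistical handling may differ:

```python
import numpy as np
import pandas as pd

def classify_vs_degron(degron: pd.DataFrame, inh: pd.DataFrame) -> pd.Series:
    """Label each inhibitor-responsive gene against the degron benchmark."""
    labels = {}
    for gene in inh.index[inh["fdr"] < 0.05]:
        d_fc = degron.at[gene, "log2fc"]
        if degron.at[gene, "fdr"] >= 0.05:
            # Not a degron-defined target; call it off-target if even a
            # near-complete MYB degradation moves it by <25% (|log2fc| < log2 1.25).
            labels[gene] = ("off-target" if abs(d_fc) < np.log2(1.25)
                            else "near-cutoff MYB target")
        elif np.sign(inh.at[gene, "log2fc"]) == np.sign(d_fc):
            # Inhibitor moves the gene in the same direction as MYB loss:
            # it antagonizes MYB action at this gene.
            labels[gene] = "antagonist"
        else:
            # Opposite direction to MYB loss: the inhibitor augments
            # (agonizes) MYB action at this gene.
            labels[gene] = "agonist"
    return pd.Series(labels, name="call")
```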
Although TFs have been historically characterized as activators or repressors, many function as both, depending on the local presence of cooperating factors 46,49,50. The combinatorial nature of these interactions and absence of simple rules governing TF output significantly complicates evaluation of small-molecule inhibitors and makes elusive such fundamental pharmacodynamic parameters as specificity and efficacy. Indeed, the functional effect of a hypothetical small-molecule TF modulator may vary dramatically across the expressed genome even if it does not engage in any off-target interactions. In addition, chemical TF inhibition may not be functionally equivalent to TF loss, depending on the structural role a TF may play in shaping the local chromatin environment (for example, competing with other TFs for DNA binding) 51. We illustrate these concepts by a comparative analysis of six MYB inhibitors versus chemical MYB degradation, where we use the degron as a proxy model of MYB inhibition with near-absolute specificity and efficacy. Although all of the evaluated MYB modulators demonstrated some effects on primary MYB targets, acting with more specificity than BET bromodomain inhibition, they were unable to capture the entire MYB program and displayed significant off-target activities. We speculate that this apparent lack of specificity may be partly driven by their indirect actions on MYB and the complexity of the MYB and p300/CBP interactomes, rather than simply by engagement of unintended targets or poor on-target efficacy. Indeed, four MYB inhibitors disrupt the interaction between MYB and the co-activators p300/CBP (celastrol, plumbagin, MYBMIM and naphthol AS-E phosphate; Table 1). Of these, plumbagin appears to bind the trans-activator domain of MYB 33, while the other three inhibitors bind to the KIX domain on the surface of p300/CBP and thus modulate MYB activity indirectly 31,34,40. The KIX domain of p300/CBP interacts with a number of other TFs 52, and therefore the "off-target" effects of the KIX-binding inhibitors may in fact represent on-target modulation of p300/CBP in a MYB-independent manner. Conversely, while the interaction with p300/CBP is essential for the oncogenic properties of MYB 53,54, it does not represent the full repertoire of MYB interactions 22,30,55,56, and any p300/CBP-independent activities of MYB will be preserved after p300/CBP inhibition. Functional agonism, antagonism and mixed agonism-antagonism are basic parameters initially developed in receptor pharmacology, where they are typically established by comparing drugs with natural receptor ligands. 57 While TF inhibitors are typically developed as antagonists, in principle they can augment the local output of a TF, whether it be to activate or repress the transcription of a target gene. For example, some inhibitors of MYC act by stabilizing MAX homodimers on DNA and can be thought of as MAX agonists 46. Our data uncover unexpected activities of MYB inhibitors as context-dependent agonists. Although p300 and CBP are typically classified as coactivators, they may also function as corepressors 58,59. We speculate that p300/CBP restrain MYB function at a subset of target genes and inhibiting their interaction with MYB may result in increased MYB activity. 
Surprisingly, ATRA, which regulates gene expression through binding to the nuclear retinoid receptor 60,61, demonstrated a strong enrichment for primary MYB targets and a similar functional ambivalence, suggesting a potential interaction between MYB and the retinoid receptor at the enhancer/promoter level. Importantly, the mixed agonism-antagonism of MYB modulators differentially affects distinct MYB-regulated pathways, indicating that any therapeutic use of these molecules would need to be tailored to the specific parts of the MYB program that drive the disease phenotype. In conclusion, we demonstrate that the rapid kinetic resolution of TF degron models allows for a more precise characterization of the target specificity and efficacy of TF-directed inhibitors. Our observations indicate that benchmarking of TF modulators against degron models in nascent transcriptomics assays should be considered as an important criterion in their functional characterization. Figures (legend excerpts) Figure 2F: Genes whose transcription rates were affected (SLAM-seq FDR <0.05) by at least one MYB inhibitor but unaffected by targeted MYB degradation (n = 1610) are depicted according to the absolute fold change and unadjusted p-value in the SLAM-seq assay performed after MYB degradation. Figure 3C: SLAM-seq responses of MYB target genes to MYB degradation vs. inhibitor treatment, demonstrating mixed agonist-antagonist effects of MYB inhibitors on the transcription of MYB target genes. Figure 3D,E: Heatmaps of predicted activation scores (z-scores) of top enriched upstream regulator (D) and canonical (E) pathways among the genes affected by MYB degradation and inhibitor treatment. Fifty pathways with top z-scores in the MYB degron dataset are visualized. Data points reflect pathways with significant enrichments (BH-adjusted p-value <0.05, calculated by an internal IPA function) and are colored according to the activity z-scores, predicting pathway inhibition (z-score < -1) or activation (z-score > 1). Western blotting Whole-cell lysates were prepared in RIPA buffer (Boston Bio-Products BP-115-500) with protease inhibitor cocktail (ThermoFisher 23225). Lysates were boiled in Laemmli buffer (BioRad 1610737), separated by SDS-PAGE, and transferred and blocked using standard methodology. 
HRP-conjugated anti-mouse and anti-rabbit IgG secondary antibodies were used for imaging (BioRad 1706515 and 1706515) with an enhanced chemiluminescence substrate (PerkinElmer NEL104001EA) according to the manufacturers' instructions. Targeted TF degradation MV411 cells were modified by CRISPR-HDR to express a C-terminal FKBP12(F36V) (dTAG) fusion of MYB. A donor DNA construct, including the knock-in cassette and ca. 400-bp homology arms, was commercially synthesized (Genewiz, Burlington, MA) and cloned into the pAAV-MCS2 plasmid vector obtained from Addgene (Watertown, MA). rAAV packaging was performed at the Boston Children's Hospital Viral Core. MV411 cells were electroporated with Cas9/sgRNA complexes targeting the HDR insertion site using a Lonza SF Cell Line 4D Nucleofector (Lonza V4XC-2032). RNP complexes were formed by mixing 8.5 µg of TrueCut Cas9 Protein v2 (Invitrogen A36499) and 120 pmol sgRNA. 0.3 × 10^6 cells were washed with PBS and resuspended in 20 µL of SF Cell Line solution (Lonza). Ten µL of crude rAAV lysate was added to the cells immediately after electroporation 63. After a 5-7 day incubation period the cells were sorted for mScarlet fluorescence. Single clones were then obtained by single-cell dilution microwell plating and screened for bi-allelic donor insertion by PCR. Clones were validated by Western blotting and Sanger sequencing. TF degradation was induced by adding 500 nM of dTAGV-1 as previously described 38 and followed by FACS measurement of mScarlet fluorescence and Western blotting. SLAM-seq For thiol (SH)-linked alkylation metabolic sequencing of RNA (SLAM-seq) 39, 2.5 × 10^6 MV411 cells per replicate were incubated with 500 nM dTAGV-1, or DMSO, for 1 hour. For inhibitor experiments, MV411 cells were treated with the indicated concentrations of inhibitors (Table 1) for 30 min (MYBMIM) or 1 hour (all other inhibitors). All experiments were done in at least 4 replicates. Metabolic labeling was performed by adding s4U (4-thiouridine) to a final concentration of 100 µM for an additional hour. Cells were flash-frozen and total RNA was extracted using Quick-RNA MiniPrep (Zymo Research) according to the manufacturer's instructions, except that 0.1 mM DTT was included in all buffers. Thiol modification was performed by 10 mM iodoacetamide treatment followed by quenching with 20 mM DTT. RNA was purified by ethanol precipitation and mRNA-seq was performed as described above. A modified version of the slamdunk pipeline was used for SLAM-seq processing (available at https://github.com/jkobject/slamdunk).
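For orientation, SLAM-seq quantification ultimately rests on counting T>C conversions that mark reads derived from s4U-labeled (nascent) transcripts; the study delegated this to the modified slamdunk pipeline linked above. The fragment below is a simplified, illustrative pysam sketch of the core counting idea only; the function and its arguments are hypothetical, and real pipelines additionally filter on base quality, strand and SNP positions:

```python
import pysam

def tc_read_fraction(bam_path: str, chrom: str, start: int, end: int) -> float:
    """Fraction of reads over a region carrying >=1 T>C conversion (plus strand)."""
    converted = total = 0
    with pysam.AlignmentFile(bam_path) as bam:
        for read in bam.fetch(chrom, start, end):
            if read.is_unmapped or read.is_secondary or read.is_duplicate:
                continue
            total += 1
            # With the MD tag present, get_aligned_pairs(with_seq=True) exposes
            # the reference base at each aligned position (lowercase at mismatches).
            for qpos, _, ref_base in read.get_aligned_pairs(with_seq=True):
                if qpos is None or ref_base is None:
                    continue
                if ref_base.upper() == "T" and read.query_sequence[qpos] == "C":
                    converted += 1  # one T>C conversion is enough to flag the read
                    break
    return converted / total if total else 0.0

# Example (hypothetical file and coordinates): approximate nascent fraction
# over a gene body.
# print(tc_read_fraction("mv411_slamseq.bam", "chr6", 135_181_000, 135_219_000))
```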
Emergency resuscitative thoracotomy performed in European civilian trauma patients with blunt or penetrating injuries: a systematic review Purpose Emergency resuscitative thoracotomy (ERT) is a lifesaving procedure in selected patients. Indications are still being debated, but outcome in blunt trauma is believed to be poor. Recent reports from European populations, where blunt trauma predominates, have suggested favorable outcome also in blunt trauma. Our aim was to identify all European studies reported over the last decade and compare reported outcomes to existing knowledge. Methods We performed a systematic literature search according to PRISMA guidelines (January 1st, 2004 to December 31st, 2014). The "grey literature" was included by searching Google Scholar. Qualitative comparison of studies and outcomes was done. Results A total of 8 articles from Europe were included, originating from Croatia, Norway (n = 2), Denmark, Iceland, the Netherlands, Scotland, and Switzerland. Of 376 resuscitative thoracotomies, 193 (51.3 %) were for blunt trauma. Male:female distribution was 3.5:1. The collectively reported overall survival was 42.8 % (n = 161), with 25.4 % (49 of 193) for blunt trauma and 61.2 % (112 of 183) for penetrating injuries. When strictly including only those ERTs designated as done in the emergency department for a blunt mechanism (n = 139), a total of 18 patients survived (12.9 %). Survival after ERT in the ED for penetrating trauma was 41.6 % (37 of 89). Neurological outcome (reported in 5 of 8 studies) was favorable in the long term in the majority of survivors, even after blunt trauma. None referred to the Glasgow Outcome Score. Heterogeneity in the studies prevented outcome analyses by formal quantitative meta-analysis. Conclusion The reported outcome after ERT in European civilian trauma populations is favorable, with one in every four ERTs in the ED surviving. Notably, outcome is at variance with previously reported collective data, in particular for blunt trauma. Multicenter, prospective, observational data are needed to validate the modern role of ERT in blunt trauma. Introduction Emergency resuscitative thoracotomy (ERT) may serve as a lifesaving procedure for selected trauma patients presenting in extremis with pending or already witnessed cardiopulmonary collapse. Since the 1960s, the pendulum for resuscitative emergency thoracotomy has swung from a conservative to a more aggressive approach, but the use, indications and risks are still debated [1][2][3][4]. Collective evidence and consensus have suggested that outcome is best for penetrating trauma patients with pending (or witnessed) cardiac arrest, with most favorable outcomes reported for patients with an isolated stab wound to the heart [2,[5][6][7]. Notably, the majority of studies published stem from large-volume institutions in North America or South Africa, where penetrating trauma represents a predominating injury mechanism and for which surgeons and systems are readily trained to deal with these injuries [2,8]. Contrary to the experience in regions with a high incidence of penetrating trauma, the predominant mechanism seen in the majority of European hospitals is related to blunt injuries, with even busy centers receiving far less than 10 % penetrating trauma. Blunt trauma victims are generally believed to have a poor outcome if cardiac arrest follows or circulatory collapse is pending. An ERT is believed to be rarely indicated in blunt trauma with cardiac arrest, as outcomes are dismal in the vast majority [2]. However, with the maturation of systems across Europe, an increasing number of reported series of resuscitative emergency thoracotomies from European populations has accumulated, even with successful experience from pre-hospital employment in one series [9] and accumulating experience also from the recent wars [10]. Collectively, however, little is known about the outcome of ERT in modern civilian European trauma populations besides anecdotal reports. Thus, we wanted to systematically review the reported experience and outcomes of ERT use in these civilian trauma populations and compare the results from the reported literature. The aim of the study was to give a systematic overview of reported indications, outcomes and reports based on civilian European publications over the past decade. Secondly, we wanted to evaluate the standardized reported variables, factors and outcomes related to these reports. Search methods and inclusion and exclusion criteria We performed a systematic literature search according to the PRISMA guidelines [11] of the worldwide literature in the PubMed/MEDLINE and EMBASE databases using the keywords and/or Medical Subject Headings (MeSH) terms "emergency thoracotomy", "emergency department thoracotomy", "resuscitative thoracotomy", "urgent thoracotomy", "Trauma", "Europe". The study was limited to the time period of January 2004 to December 2014 to present updated experiences and avoid dated reports from the past. Additional searches of other databases (EMBASE, etc.) were performed by a trained hospital librarian. The international database of prospectively registered systematic reviews (PROSPERO) was queried for potential planned or ongoing systematic reviews. Titles and abstracts of studies were scrutinized for relevance. We included any report published in the English, German, or Scandinavian (Norwegian, Danish, Swedish) languages, and considered studies in French, Spanish or other European languages if sufficient information was obtained in English abstract form or if further detailed information could be retrieved by contacting the authors. The references of the identified full-text articles were further hand-searched to retrieve additional studies. 
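For illustration only, the keyword/MeSH search described above can be scripted against PubMed with Biopython's Entrez utilities; the boolean query, field tags and date syntax below are assumptions for demonstration, not the authors' actual search strategy:

```python
from Bio import Entrez  # Biopython

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

# Boolean query assembled from the keywords listed above, restricted to the
# review's publication window; the [PDAT] tags and exact phrasing are illustrative.
query = (
    '("emergency thoracotomy" OR "emergency department thoracotomy" '
    'OR "resuscitative thoracotomy" OR "urgent thoracotomy") '
    'AND trauma AND Europe '
    'AND ("2004/01/01"[PDAT] : "2014/12/31"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records; first PMIDs: {record['IdList'][:5]}")
```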
The 'grey literature' was searched using Google and Google Scholar to identify studies published outside the PubMed index, including the European Journal of Trauma and Emergency Surgery. We excluded publications based on the following criteria: case reports on one or only a couple of cases; reports of few cases (<5 ERTs); reports from military medicine (e.g., the wars in Afghanistan and Iraq); and reports on pre-hospital resuscitative thoracotomy. Based on the small population samples and the high likelihood of heterogeneity and risk of bias in the included studies, we deferred formal meta-analysis and rather focused on the descriptive data and outcomes retrieved in the included reports.

Definitions used and data collected for each study
There is no standard definition of emergency resuscitative thoracotomy (ERT) in the literature, and investigators use interchangeably the terms 'resuscitative thoracotomy', 'emergency thoracotomy', 'emergency department thoracotomy', 'urgent thoracotomy' and other combinations thereof, focusing either on the location of the procedure or on the urgency of its indication. We scrutinized the definitions used in each paper, as located either in the emergency department or the operating room and as emergent or urgent in nature, and included papers that clearly reported on the procedure as part of the resuscitative or emergency management process of the trauma patient, located either in the emergency department (ED) or the operating room (OR). For consistent reading, we chose the term 'ERT' throughout the paper. Demographic data such as geography, number of patients, age and gender were recorded for included studies. Mechanism of injury (MOI) was defined as either blunt or penetrating. Location of major injury (LMOI) was limited to the major anatomic area of injury: cerebral, abdominal, thoracic, pelvic and/or multiple. Injury severity was obtained from each study as reported by the injury severity score (ISS) [12] and probability of survival (Ps) as calculated by TRISS methodology [13], if given. Survival was obtained for each study for the entire cohort or, alternatively, for the subcohorts, as reported. We further searched each paper for whether neurological outcome was reported and, if so, by what means it was reported: either qualitatively (as "good/poor") or by a formal score, such as the Glasgow Outcome Score (GOS) [14].

Data presentation and statistical analyses
Data are presented in a descriptive manner and as reported in the respective studies. We assumed upfront that the studies would be of a very heterogeneous nature and thus did not plan to perform a formal meta-analysis, as this would not be substantiated given the very high risk of bias.

Results
The search result is shown in Fig. 1. For the study period, we found a total of 8 studies [15-22] that originated from Europe and for which the defined inclusion criteria were fulfilled. The study descriptives, patient demographics and main findings are reported in Table 1. A further two publications were found from Norway and Ireland [23,24]. Upon author contact, we were informed that 5 resuscitative emergency thoracotomies had been performed, with no survivors, in Tromsø, Norway. Contact with the group from Dublin failed. Due to a lack of detailed information from these sites, we could not include these studies in the collective presentation.
A register study from Germany reported outcome in a subset of patients having emergency thoracotomy, but with no specific details for this group as such, and was thus excluded from the qualitative assessment [25]. The 8 studies reported a total of 183 penetrating and 193 blunt resuscitative emergency thoracotomies, varying from 9 cases in Reykjavik, Iceland to 121 cases from Zurich, Switzerland. Among the included studies (Table 1), four studies [15-17,20] accounted for 88 % of the patients included. For studies reporting gender, the male:female ratio (271 males, 78 females) was 3.5:1 (Table 1). Mean age varied by more than two decades among studies, from a low of 31 years to a high of 51 years. Five studies reported the selected method of thoracotomy [15-18,20]. Four studies from Denmark [16], Norway [19], the Netherlands [17] and Switzerland [15] included both emergency department and operating room thoracotomies and stated that these were done for resuscitative purposes. In the Edinburgh study [22], 6 patients had ERT in the ED and a further 10 in the OR. For the latter 10 procedures, no further descriptions were given, and these 10 cases were thus excluded. Vital signs (systolic blood pressure, respiratory rate, pulse) were reported in 6 studies, and the revised trauma score (RTS) in three studies, from Iceland [18], Norway [19], and the Netherlands [17].

MOI and LMOI
MOI is reported in Table 1 and Fig. 2. Studies reporting LMOI, or the related location of severe injuries, are depicted in Table 2. Three studies reported specific injuries found during operation, without giving the LMOI [15-17].

Indication for ERT
All studies described an indication for ERT, but with a variable degree of information and with or without reference to a pre-stated institutional protocol. In the Croatian cohort, all were done for thoracic or suspected cardiac injuries, but with no other description available. The indication for each ERT that was performed was described in the other studies. The studies from Scotland and the Netherlands both included only penetrating injuries, the majority to the chest and with some form of physiological compromise. For blunt trauma, motor vehicle collisions and falls predominated, while stab wounds were more frequent than gunshot wounds for penetrating trauma (data not shown). Johannesdottir, Ferris and Soreide describe the indication as either suspected tamponade or cardiac arrest, and witnessed loss of signs of life (SOL) or suspected exsanguination [18,19,22]. Pahle [20] stated the indications as either an "unresponsive patient with penetrating injury who has shown SOL during transport or at the scene"; "exsanguinated patients without immediate response to fluid resuscitation"; or "obviously large abdominal bleeding and decreasing blood pressure with no response to fluid resuscitation before laparotomy", but with no further specification of the distribution in specific patients. Kandler et al. [16] describe 'penetrating trauma with pulseless electric activity (PEA) within the last 5 min, unstable patients with ongoing intrathoracic bleeding, and as a means of clamping the descending thoracic aorta as a step in the initial resuscitation' as indications for ERT [16]. Van Waes et al.
[17] give the most detailed indications for ERT: (1) loss of SOL on arrival in the ED, but present at the scene; (2) failure to respond to resuscitation with SBP <60 mmHg, or pericardial tamponade and SBP <60 mmHg; (3) hemothorax on chest X-ray and initial chest tube output >1500 mL or ongoing output of >200 mL/h for 2-4 h after insertion of the tube; (4) hemothorax on chest X-ray with <1500 mL, but CTA findings prompting surgical intervention; (5) massive air embolism. Lustenberger et al. [15] used the indications 'non-recordable blood pressure on ED admission, loss of SOL in the ED or immediately before hospital arrival, and exsanguination from trauma without immediate response to fluid resuscitation'.

Prehospital factors and transport time
Time variables were reported in different ways and for different time intervals across the studies. Hudorovic [21] noted a difference in median prehospital transport time in survivors (median 150 min, range 15-180 min) compared to non-survivors (median 220 min, range 30-220 min). Pahle et al. [20] did not find a significant difference between survivors and non-survivors when assessing time from injury to arrival in the ED; non-survivors arrived within a median of 40 min (IQR 18-84 min) compared to survivors, who arrived in the ED a median of 45 min (IQR 25-95 min) after injury (P = 0.477). The same non-significant findings occurred for those with penetrating injuries in the same study, with transport time to the ED almost halved, at 20 and 27 min for non-survivors and survivors, respectively [20]. This time interval matched the penetrating cohort from the Dutch group, which had a median prehospital transport time of 24 min (IQR 15-32) overall, but with a significant difference in transport time between patients who went on to have ERT in the emergency department compared to the operating room (median 13 min (IQR 2-23) versus 33 min (IQR 18-35); P = 0.006). Of note, the median time until the actual thoracotomy was performed was 68 min (IQR 42-128) for the whole group, with less time passing to ERT in the ED (median 25 min, IQR 15-107) compared to the operating room ERTs (median 79 min, IQR 52-155; P = 0.037) [17]. In the Swiss study [15], the median time from ED admission to the start of ERT in the ED was 5 (range 2-17) min, and the median time from ED admission to the start of ERT in the OR was 25 min.

Signs of life
Six out of 8 studies reported SOL [15-20] (Table 1). According to one publication from Norway [19], 7 of 10 (70 %) patients had SOL at the scene, but only 4 of 10 patients (40 %) had SOL in the ED. In the second study from Norway, 86 of 109 (79 %) had SOL at the injury site, but SOL in the ED is not given [20]. In the Danish cohort, 19 of 21 (90.5 %) had SOL in the ED [16]. In the Dutch publication, 55 of 56 (98 %) had prehospital SOL, and 50 of 56 (89 %) showed SOL in hospital [17]. In the Iceland study, 6 of 9 (66.7 %) had SOL, but it is not described in detail whether this was in the ED; however, the 3 patients without SOL all died [18]. In the Swiss study [15], SOL was reported "at scene" in 46 of 49 (94 %), "en route" in 40 of 49 (82 %) and "on admission" in 33 of 49 (67 %) of those procedures performed as an EDT, but was not mentioned for procedures performed in the OR. The Croatian report states that "absence of SOL at the hospital was a herald of mortality", but mentions no distribution of SOL in their data [26]. The Scottish study does not comment on SOL [22].
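The Dutch indication list above is explicit enough to be expressed as a small triage function, which makes the ED-versus-OR logic described in the next section easier to follow. The sketch below is our own illustrative encoding of the published thresholds, not a validated clinical tool; the field names are invented for the example.

```python
# Illustrative encoding of the ERT indications reported by van Waes et
# al. [17]; thresholds come from the text above. Field names are ours,
# and this is emphatically not a clinical decision tool.

def ert_indication(sol_at_scene: bool, sol_on_arrival: bool,
                   sbp_mmHg: float, tamponade: bool,
                   chest_tube_initial_mL: float, chest_tube_mL_per_h: float,
                   cta_operative_finding: bool, massive_air_embolism: bool):
    """Return a suggested indication/location label, or None if no criterion is met."""
    # ED criteria: patient in extremis on arrival.
    if sol_at_scene and not sol_on_arrival:
        return "ERT in ED: SOL lost between scene and arrival"
    if sbp_mmHg < 60:  # with or without pericardial tamponade
        return "ERT in ED: no response to resuscitation, SBP <60 mmHg"
    # OR criteria: compromised but transportable (SBP 60-100 mmHg).
    if sbp_mmHg < 100:
        if chest_tube_initial_mL > 1500 or chest_tube_mL_per_h > 200:
            return "ERT in OR: massive initial or ongoing chest tube output"
        if tamponade:
            return "ERT in OR: pericardial tamponade with SBP >60 mmHg"
        if cta_operative_finding:
            return "ERT in OR: CTA finding prompting surgical intervention"
    if massive_air_embolism:
        return "ERT: massive air embolism"
    return None
```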
Location of ERT
Regarding the procedures performed in the OR, the Danish report [16] stated that these were included if considered part of the immediate resuscitative process, but did not give any criteria for performing the ERT in the ED or the OR. Similarly, the Swiss study stated that EDT was performed for patients in extremis, while for those with pending shock or who were deemed able to tolerate transport, the procedure would be performed in the OR as the immediate resuscitative procedure [15]. The same indications for ED and OR were noted in one Norwegian study [19]. The Dutch group [17] performed the procedure in the ED if systolic blood pressure (SBP) was <60 mmHg (50 % of cases), in the presence of cardiac tamponade, or with loss of SOL in the ED (41 %). Procedures taken to the OR had SBP >60 mmHg (but <100 mmHg), initial chest tube output >1500 mL of blood or ongoing chest tube output >200 mL/h (together 50 % of cases in the OR), or pericardial tamponade with SBP >60 mmHg (27 %). Most procedures were performed as an anterolateral thoracotomy, followed by sternotomy (Table 3).

Survival
The collectively reported overall survival for all 376 ERTs is presented in Fig. 2, with a breakdown by MOI and by location of the procedure (ED or OR) with survival rates. Survival was 25.4 % for blunt trauma and 61.2 % for penetrating injuries (Fig. 2). Three of the 5 articles reporting on blunt trauma ERT had survival rates above 10 % [16,18,20], ranging from 12.2 % [20] to 60.0 % [18]. When strictly including only those ERTs designated as done in the ED and for blunt injury (n = 139), survival was 12.9 % (n = 18). As noted in Table 1, only a few studies reported probability of survival, and with considerable spread in the estimates.

Neurological outcome in survivors
Neurological outcome was reported in 5 of 8 studies [15,17,18,20,26], most with favorable long-term neurological outcome in survivors, even in blunt trauma survivors. This was reported in both low-volume and high-volume studies. Among the 8 publications, representing a total of 161 survivors after ERT, none of the investigators reported neurological outcome using the Glasgow Outcome Scale (GOS) [14] or similar objective measures. Rather, a qualitative designation such as "poor" or "good" neurological outcome, or 'neurologically intact' or 'without neurological impairment', was used in most studies. For 34 survivors, no statement on neurological status or outcome was made. Thus, outcome was available in 127 (78.9 %) of survivors, of whom 86.6 % had a satisfactory or good neurological recovery. In 17 survivors, the outcome was designated as persistent neurological impairment or inability to live an independent life. One study [26] reported neurological impairment in all survivors (n = 10), with none living independent lives after survival from ERT.

Discussion
This systematic review found 8 studies on ERT for trauma in European civilian trauma populations. Collectively, these accumulated survival rates are higher than previously reported in the literature; even for blunt trauma ERT performed in the emergency department, survival was 12.9 %. The findings are at variance with the perceived standards and outcomes reported previously by major reports, reviews and guidelines [2,5,27,28]. These studies report a much higher survival rate than previously reported and may indeed point to a less futile outcome for blunt trauma victims with pending cardiac arrest or witnessed loss of SOL, and even for those who undergo cardiopulmonary resuscitation, than previously argued in most studies [2-4, 27, 28].
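Because the review pools raw counts rather than running a meta-analysis, the headline rates can be re-derived directly from the numerators and denominators given above. The short sketch below does exactly that, and adds Wilson confidence intervals as a sanity check; the interval method is our choice, not the authors'.

```python
# Re-deriving the pooled survival rates quoted in this review from the
# raw counts, with Wilson 95% CIs added for context (our addition).
from statsmodels.stats.proportion import proportion_confint

pooled = {
    "overall":            (161, 376),
    "blunt (all ERT)":    (49, 193),
    "penetrating (all)":  (112, 183),
    "blunt (ED only)":    (18, 139),
    "penetrating (ED)":   (37, 89),
}

for label, (survivors, n) in pooled.items():
    lo, hi = proportion_confint(survivors, n, alpha=0.05, method="wilson")
    print(f"{label:20s} {survivors:3d}/{n:3d} = {survivors / n:5.1%} "
          f"(95% CI {lo:.1%}-{hi:.1%})")
```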
The role of ERT continues to stir considerable debate [5-7,27-31]. Proponents have even suggested taking the procedure to the pre-hospital field for selected patients in extremis [9,29]. While all agree that a blunt mechanism is in principle associated with an unfavorable prognosis, there are some reports of favorable outcome in a few selected patients, thus arguing against a completely nihilistic view even in those with blunt trauma [28,31]. However, it is clear that more knowledge is needed in this area. Further, one may envision the indications for resuscitative emergency thoracotomy for trauma changing with the increasing availability of the resuscitative endovascular balloon occlusion of the aorta (REBOA) technique [32]. However, this is currently performed only in some specialized centers. Moore et al. [6] reported the duration of prehospital CPR as a reliable means to establish futility for ERT. Notably, in a large German registry study of over 10,000 trauma patients, a limited number of 757 patients had documented prehospital cardiopulmonary resuscitation, with closed chest compression performed on scene or en route due to traumatic arrest [25]. The rate of emergency thoracotomy in this cohort was 10.2 % (n = 77), with a reported survival rate of 13.0 % (95 % CI 5.5-20.5), including both penetrating and blunt trauma [25]. From a registry perspective, this falls within the cumulatively reported range of 12.9 % survival for blunt injury ERT in this study (for those performed in the ED alone). The overall frequency with which lifesaving or hemostatic emergency surgery procedures are performed among injured patients is hard to determine from the worldwide literature. However, another updated German nationwide study of almost 13,000 trauma patients found immediate lifesaving procedures (performed during resuscitation) in 5.5 % of patients [33]. Among the 713 patients who had resuscitation interrupted for an emergent procedure, this was most frequently a laparotomy (51 %), followed by craniotomy in 20 %; only 10 % had a thoracotomy (and 9.3 % had pelvic surgery) [33]. In the German report, 70 procedures were emergency thoracotomies, and lethality was 50 %. Notably, of the 70 thoracotomies, 16 were for cardiac injury, of which only 5 survived (32 %) [33]. In a study from Northern Norway (n = 142) [23], <3 % of all patients required hemostatic surgery on admission, and only 5 patients had an emergency thoracotomy, with no survivors. In another study, from Dublin [24], 5 % of penetrating trauma victims had a thoracotomy, although no further data were provided on indication or outcome in this report. However, these reports indicate that emergency salvage procedures are not commonly performed in European trauma populations [23,24,33]. The number of relevant articles from European trauma hospitals on ERT is limited, as demonstrated by this systematic review. Our search found only 8 articles that fulfilled the criteria. Consequently, we cannot rule out a publication bias that may skew the results. Four studies accounted for 88 % of the entire cohort and may bias the results accordingly. However, the studies are from diverse geographic regions, with variable volume and differences in trauma systems. Also, as mentioned above, they match the experience reported from nationwide registries, such as the German trauma registry database. The report from Iceland on 60 % survival for blunt trauma included few (n = 5) patients.
In contrast, Pahle and Lustenberger had 82 and 39 blunt cases, respectively, with 12.2 % and 7.7 % blunt trauma survivors after ERT. This could point to a higher rate of survival in mature trauma systems. It may also reflect an aggressive approach to emergency thoracotomy, where one may entertain the possibility that some patients would have survived without being exposed to the ERT procedure. Accordingly, and as the flipside of that coin, hospitals with a more restrictive approach will have fewer survivors after ERT, in particular for blunt trauma. These nuances cannot be discerned from the available data, but they certainly warrant future validation in other series before any firm conclusions can be made. There are some limitations to this study that warrant mentioning. One is the selected reports available from a few centers of variable size and geography. While some reports stem from larger European urban areas such as Oslo, Copenhagen, Zurich and Rotterdam, there is a paucity of similar reports from the larger, metropolitan regions in Europe where a large number of trauma patients are treated. As mentioned, a publication bias may thus exist. For example, little is known about ERT performed in major civilian trauma patients in the UK (London, Birmingham), Germany (Berlin, Munich), the Netherlands (Amsterdam, Utrecht), Spain (Barcelona, Madrid), Italy (Rome, Milan) or France (Paris, Marseilles), to mention but a few. It is thus not known whether the findings from the selected studies are representative of Scandinavia, the British Isles or mainland Europe as such. Further, there is considerable variation in the reports, with inconsistent data reporting among the studies. Of note, all the reported studies were retrospective in design. Thus, the use of different definitions, variable data records, and non-recorded or missing data may introduce bias into the reported studies. Despite these limitations, this is the first study to collectively review the experience and outcomes of ERT in European civilian trauma patients in the 21st century. It calls into question the somewhat nihilistic view expressed toward resuscitative emergency thoracotomy for blunt trauma. Further studies are warranted to explore the generalizability of the findings, in particular for blunt trauma.

Conclusions
In this collective, systematic review of European studies, half of the procedures were for blunt trauma, with survival after ERT in the ED collectively reported at 12.9 % for blunt and 41.6 % for penetrating injuries, respectively. There was no firm indication that neurological outcomes were dismal, but further studies need to confirm these results, as considerable variation from previous collective reports exists. A protocolized, multicenter, prospective observational study should be launched to address these questions and arrive at better answers regarding the correct indications for, and related outcomes of, ERT in severely injured trauma patients.
Drug Target Prediction Based on the Herbs Components: The Study on the Multitargets Pharmacological Mechanism of Qishenkeli Acting on the Coronary Heart Disease

In this paper, we present a case study of Qishenkeli (QSKL) to research TCM's underlying molecular mechanism, based on drug target prediction, analyses of TCM chemical components, and subsequent experimental validation. First, after determining the compositive compounds of QSKL, we use drugCIPHER-CS to predict their potential drug targets. These potential targets are significantly enriched with known cardiovascular disease-related drug targets. We then find that these potential drug targets are significantly enriched in the biological pathways of neuroactive ligand-receptor interaction, aminoacyl-tRNA biosynthesis, calcium signaling, glycine, serine and threonine metabolism, the renin-angiotensin system (RAAS), and so on. An animal model of coronary heart disease (CHD), induced by left anterior descending coronary artery ligation, is then applied to validate the predicted pathways. The RAAS pathway is selected as an example, and the results show that QSKL has an effect on both renin and the angiotensin II type 1 receptor (AT1R), which eventually downregulates angiotensin II (Ang II). Bioinformatics combined with experimental verification can provide a credible and objective method to understand the complicated multitarget mechanisms of a Chinese herbal formula.

Introduction
Coronary heart disease (CHD) remains the single leading cause of death for adults worldwide [1]. Effective prevention and therapy for CHD pose a major challenge to the entire medical community, and there is a strong demand to continue searching for products that are both safe and efficacious against this emerging health epidemic. Traditional Chinese medicine (TCM) has fought against CHD and its related diseases for more than 1000 years and has accumulated thousands of herbal formulae as well as clinical literature; it has therefore been considered to have huge potential as an information source and starting point for the development of CHD products [2]. Meanwhile, more and more patients all over the world take TCM as a complementary and alternative avenue to treat CHD. However, how herbal formulae work and what their drug targets are remain unclear. Many studies have focused on active monomers of herbs to explain their therapeutic mechanisms [3], but there are significantly different characteristics between an active monomer and a herbal formula as a whole. An active monomer may have a clear target, such as a receptor, enzyme, ion channel or transmembrane signal transduction molecule, mostly acting on a single target; a Chinese herbal formula, by contrast, is composed of diverse, complex components, and its comprehensive pharmacological effect is the accumulation of many active monomers acting through multiple channels on multiple targets [4]. How to determine multiple targets within such a complex biological process is a challenge for TCM. Coronary heart disease is now a heavy burden on society and families in both industrialized and developing countries, and some herbal formulae show a definite clinical effect on it, so it presents a good example and context for investigating efficacy and drug targets in TCM. The ancient TCM formula Qishenkeli (QSKL), prepared from a basic formula of six Chinese herbs (Radix Astragali Mongolici, Salvia miltiorrhiza Bunge, Flos Lonicerae, Scrophularia, Radix Aconiti Lateralis Preparata, and Radix Glycyrrhizae),
is widely produced in China in accordance with the China Pharmacopoeia standard of quality control [5] and is commonly used in the routine treatment of CHD in clinical practice in China. Large-scale epidemiological surveys and randomized controlled clinical trials have proved that it has a definite effect on improving heart function [6]. Many studies have investigated its active monomers and have made great progress; for example, Astragalus polysaccharide (APS, a monomer of Radix Astragali Mongolici) has been found to have an effect on cardiac chymase activities [7], and tanshinone IIA (a monomer of Salvia miltiorrhiza Bunge) has been found to have cardioprotective effects and to attenuate myocardial hypertrophy [3]. But, as mentioned before, the pharmacological effects of monomers cannot represent the overall efficacy of the whole formula, and studies involving all the compounds are rarely carried out. In recent years, bioinformatic methods have been developed to infer drug-target interactions [8-13]. These methods provide opportunities to reveal the underlying molecular mechanisms of TCM. Recent advances in databases cataloging the chemical components of herbs and the interactions between drugs and targets enhance the feasibility of predicting the drug targets of herbs. DrugCIPHER-CS is an efficient drug target prediction method recently presented by Zhao and Li [14], and in this paper we use it to predict the potential targets of QSKL's compositive compounds. The method is based on the principles that (i) drugs with similar chemical structure tend to bind functionally related proteins and (ii) the functional relationship between proteins can be measured by their distance in the protein interaction network. For a query drug, each protein in the protein interaction network is assigned a score by drugCIPHER-CS that describes the importance of the protein to the activity of the drug, and proteins with high scores are hypothesized to be the query drug's potential targets. This paper presents the idea that the multiple targets of herbs should be investigated by combining bioinformatics with experimental verification to finally determine the drug targets. First, herbal components are collected by data mining from databases; second, bioinformatics is applied to predict the drug targets of all compounds, based on the principle that similar structures have similar functions; then bioinformatic analyses, including GO function analysis, are used to identify the pathways to which the predicted proteins belong. Finally, experimental verification is undertaken to confirm how and where the herbs act on the body, thus providing a credible method to investigate the complicated multitarget mechanisms of herbs.

Drug Target Prediction
In this paper, we use drugCIPHER-CS to predict the drug targets of QSKL's compositive compounds. DrugCIPHER-CS, recently presented by Zhao and Li [14], achieves good prediction performance and can infer drug targets at the genome-wide scale. This method is based on the hypotheses that (i) drugs with similar chemical structure usually bind functionally related proteins and (ii) the functional relationship between proteins can be measured by their distance in the protein interaction network.
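The scoring idea behind this method, elaborated in the paragraph that follows, reduces to correlating two similarity profiles. The sketch below is our own simplified illustration of that idea with toy data; drugCIPHER-CS itself operates over curated drug and target spaces, so treat the function and variable names here as assumptions.

```python
# Simplified illustration of a drugCIPHER-CS-style score: the likelihood
# that a candidate gene is targeted by a query drug is taken as the
# correlation between (a) the drug's structural similarity to a set of
# reference drugs and (b) the gene's network-based functional similarity
# to those reference drugs' known targets. Toy data; names are ours.
import numpy as np

def drug_target_score(drug_struct_sim, gene_func_sim):
    """Pearson correlation between the two similarity vectors."""
    return float(np.corrcoef(drug_struct_sim, gene_func_sim)[0, 1])

# Structural similarity of the query compound to 5 reference drugs
# (e.g., Tanimoto coefficients on chemical fingerprints):
query_drug = np.array([0.81, 0.12, 0.45, 0.09, 0.66])

# Functional similarity of two candidate genes to the same 5 reference
# drugs' targets (e.g., derived from protein-network distances):
gene_a = np.array([0.75, 0.20, 0.50, 0.15, 0.70])  # tracks the drug profile
gene_b = np.array([0.10, 0.80, 0.15, 0.70, 0.05])  # anti-correlated profile

for name, gene in [("gene_a", gene_a), ("gene_b", gene_b)]:
    print(name, round(drug_target_score(query_drug, gene), 3))
# gene_a scores near +1 and would be ranked as a potential target;
# candidates are prioritized by decreasing score, as described below.
```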
Given a set of known drug (drug-space)-target (target-space) interactions, for a query drug and a candidate target gene, drugCIPHER-CS measures the likelihood of their interaction based on the correlation between the query drug's structural similarity vector with the drug space and the candidate gene's functional similarity vector with the target space. For a query compound, drugCIPHER-CS prioritizes the proteins in the protein interaction network (i.e., the candidate proteins) in order of decreasing drug-target interaction likelihood, and the candidate proteins with high likelihood are hypothesized to be the potential drug targets (please refer to paper [14] for more details of drugCIPHER-CS).

Degree and Betweenness Centrality in the Protein Interaction Network
A protein's degree is defined as the number of its direct interaction partners in the protein interaction network. The betweenness centrality of protein n is computed, in its normalized form, as

$C_B(n) = \frac{2}{(N-1)(N-2)} \sum_{s \neq n \neq t} \frac{\sigma_{st}(n)}{\sigma_{st}}$

where $\sigma_{st}$ denotes the number of shortest paths between protein s and protein t in the protein interaction network, $\sigma_{st}(n)$ denotes the number of those shortest paths that pass through protein n, and N is the total number of proteins in the protein interaction network. Both degree and betweenness centrality measure a protein's topological importance in the network: the larger a protein's degree or betweenness centrality, the more important the protein is in the protein interaction network.

CHD Model Preparation
CHD is induced by direct coronary ligation as described previously [24]. Briefly, Sprague-Dawley (SD) rats are anaesthetized with pentobarbital sodium (1%, 50 mg kg−1 intraperitoneally). The trachea of each rat is intubated perorally with a plastic tube connected to a respirator (Kent Scientific 325, China) set at a stroke volume of 3 mL kg−1, a respiratory ratio of 2:1, and a rate of 80 strokes min−1. After left thoracotomy and exposure of the heart, the left anterior descending coronary artery (LAD) is ligated with a 5-0 polypropylene suture (Surgipro, CT, USA) directly proximal to its main branching point. Control groups undergo an identical procedure but without the actual tying of the polypropylene suture. Thereafter, the thorax is closed and, as soon as spontaneous respiration is sufficient, the rats are extubated and allowed to recover under a heated lamp. They are fed a standard diet and water and are maintained on a 12-hour light-and-dark cycle. After ECG testing, rats with average QT-interval prolongation in three precordial leads are included in the study. The QSKL group is treated for 28 days by daily oral gavage with a total daily dosage of 508 mg/kg of the concentrated QSKL (Beijing University of Chinese Medicine, Beijing, China) dissolved in water. The control and model groups receive the same volume of water via oral gavage as the QSKL vehicle. At the end of the study, all animals are anaesthetized using pentobarbital sodium following an overnight fast. Blood samples are collected via abdominal aorta puncture, placed on ice, and allowed to clot. After centrifugation, serum is collected, aliquoted, and stored at −80 °C until analysis of each indicator within a short period of time.

Echocardiographic Assessment of LV Function
Echocardiography is used to measure left ventricular end-systolic diameter (LVEDs), left ventricular end-diastolic diameter (LVEDd), ejection fraction (EF), fractional shortening (FS), and other indicators.
A PST 65A sector scanner (8-MHz probe) is used, which generates two-dimensional images at a frame rate ranging from 300 to 500 frames/s. LV dimensions are measured in M-mode, and fractional shortening (FS%) is calculated by the standard equation

$\mathrm{FS}(\%) = \frac{\mathrm{LVEDd} - \mathrm{LVEDs}}{\mathrm{LVEDd}} \times 100\%$

Statistical Analyses
Results are presented as mean values with their standard deviation. P < 0.05 was considered statistically significant.

Results
Drug Target Prediction and Analyses
In order to reveal the underlying molecular mechanism of QSKL, we first used a bioinformatic method to infer the targets of its chemical components. By means of literature curation, we determined QSKL's 231 compositive compounds. We then used the drugCIPHER-CS method [14] to infer their potential targets (Supplementary Table 1, available online at doi:10.1155/2012/698531). drugCIPHER-CS, published recently by Zhao and Li, achieves good performance for predicting drug targets and can infer targets at the genome-wide scale [14]. For each compositive compound, drugCIPHER-CS prioritizes its candidate targets in order of decreasing possibility of being targeted by the compound. When we choose the top 1% of candidate targets, we obtain 3725 candidate target genes for the 207 compositive compounds that have clear chemical structures; on average, one target gene is shared by 6.5 compounds. When we choose the top 0.1% of predicted targets, we obtain 639 target genes; on average, one gene is targeted by 3.6 compounds. As shown in Figure 1, there are 510 protein interactions among these 639 top 0.1% candidate targets. By comparison with the known cardiovascular disease-related drug targets (i.e., the known targets of drugs whose ATC code uses "C" as the first level) in DrugBank [15], we find that both the top 0.1% and the top 1% candidate targets are significantly enriched with known cardiovascular disease-related targets (the upper-tailed P value of the hypergeometric cumulative distribution is 2.03E−10 for the top 0.1% and 2.05E−08 for the top 1% candidate targets). The enrichment extent of the top 0.1% candidate targets is higher than that of the top 1% targets. After obtaining the potential targets of QSKL's chemical components, we analyzed the enriched KEGG biological pathways [25] (version: 2009.11) among these potential targets. In total, we find 16 significantly enriched pathways among the top 0.1% candidate targets (Table 1), including the pathways of neuroactive ligand-receptor interaction, aminoacyl-tRNA biosynthesis, calcium signaling, glycine, serine and threonine metabolism, the renin-angiotensin system, and so on. The importance of neuroactive ligand-receptor interaction in the development and progression of cardiovascular disease processes such as CHD is well known; the key proteins in this pathway, such as the adrenergic receptor, angiotensin receptor, calcitonin receptor-like receptor and neurotensin receptor, are closely related to cardiac function. The aminoacyl-tRNA biosynthesis pathway plays an important role in cardiovascular angiogenesis [26]. The relationship between the calcium signaling pathway and CHD is confirmed, and calcium antagonists have been widely used clinically to inhibit extracellular calcium influx, reducing the concentration of intracellular calcium and lowering myocardial contractility [27]. Glycine, serine and threonine metabolism mainly provides the ATP for myocardial contractility [28]. The renin-angiotensin system plays a central role in the deterioration of cardiovascular function [29].
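The enrichment statistic quoted above is a standard one-sided hypergeometric test: given how many of the predicted targets fall into the known cardiovascular target set, how surprising is that overlap? A minimal sketch follows; the specific counts in the example are invented for illustration, since only the resulting P values are given in the text.

```python
# One-sided (upper-tail) hypergeometric enrichment test, as used above to
# ask whether predicted targets are enriched for known cardiovascular
# drug targets. The counts below are placeholders for illustration only.
from scipy.stats import hypergeom

M = 12000   # background: genes in the protein interaction network (assumed)
n = 300     # of which are known cardiovascular drug targets (assumed)
N = 639     # predicted candidate targets drawn (top 0.1%, from the text)
k = 60      # overlap between the two sets (assumed)

# P(X >= k) for X ~ Hypergeometric(M, n, N); sf(k - 1) gives the upper tail.
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"upper-tailed hypergeometric P = {p_value:.2e}")
```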
We also examined the functional distribution of these candidate targets (Table 2). The significantly enriched gene ontology (GO) functional annotations [30] (version: 20111103) of these targets include cellular amino acid metabolic process, biosynthetic process, small molecule metabolic process, cellular nitrogen compound metabolic process and circulatory system process, indicating that QSKL intervenes in these pathological processes. These enriched pathways and GO functional annotations provide important clues for understanding the molecular mechanism of QSKL. In addition, by checking the degree and betweenness centrality of these candidate target genes in the protein interaction network, we find that these candidate targets are significantly depleted of the proteins with the highest degree or betweenness centrality (Table 3), and the depletion extent for the top 0.1% candidate targets is larger than that for the top 1% candidate targets. That is, these candidate target genes of QSKL do not tend to be topologically the most important in the protein interaction network. This result is consistent with Hase et al.'s conclusion that known human drug targets tend to be less-connected nodes in the network [31]. A TCM with multiple chemical components targets multiple less-connected nodes, which may produce greater synergistic efficacy and fewer side effects.

Echocardiographic assessment showed that cardiac function in the model group was significantly impaired compared with the control group, suggesting that a steady CHD model had been established. After treatment with QSKL for 28 days, the EF value recovered by 37.62% compared with the model group (Figure 2).

Predicted Pathway Validation
The importance of neurohormonal activation in the development and progression of cardiovascular disease processes such as CHD is well known, and the renin-angiotensin system plays a central role in this [32]. The chronically activated renin-angiotensin-aldosterone system (RAAS) is believed to contribute significantly to the deterioration of cardiovascular function, and its inhibitors have been routinely used to treat patients with CHD [29]. In this paper, the RAAS is selected as the example and context to validate the predicted pathways. Critical indicators in the RAAS pathway were measured to test the accuracy of the predicted pathway, and a series of experiments was carried out to validate them, including ELISA, immunohistochemistry (IHC), and western blot. The western blot of renin showed that, at the end of the study, serum renin in the model group had increased by 45% (P < 0.05) compared with control; after treatment with QSKL for 28 days, the level of renin showed a 22.76% reduction compared with the model group (P < 0.05) and was no longer statistically different from the control (Figure 3(a)). Both ELISA and IHC results showed that the level of Ang II in the model group was upregulated by 27.88% compared with control (P < 0.05); after treatment with QSKL for 28 days, a 16.59% reduction was detected in the QSKL group compared with the model group (P < 0.05), almost returning to the level of the control (Figures 4 and 5, Table 4). AT1R is thought to be a promising target for treating CHD. AT1R in the model group was upregulated by 59.00% compared with control; in the QSKL group, its level decreased by 42.12% compared with the model group, with no significant difference from control (Figures 3(b), 4, and 5). The level of serum aldosterone (ALD) did not differ significantly among the groups.

Discussion
At present, individual monomers from herbs are usually used to explain the pharmacological efficacy of a whole Chinese herbal formulation. In fact, this does not capture the multitarget characteristic of a multicomponent Chinese herbal formulation.
If the multiple targets can be predicted from the chemical structures of its components through bioinformatics, and experiments then verify the results, confirming the pharmacological mechanisms of herbs becomes easier and more concise. With the development of high-throughput drug screening and structural analysis technology, the chemical compositions of formulations are gradually being revealed, mature databases of the chemical composition of Chinese herbs are gradually being established, and the identification of chemical structures makes it possible to predict drug targets by investigating the relations between drugs and target proteins. With the development of systems biology, bioinformatics techniques are becoming more and more mature, and their advantages are well suited to studying the complex correlations between the compounds in herbs and their drug targets. In this paper, we applied drugCIPHER-CS to predict the targets of QSKL, which has long been used to treat CHD effectively. Five pathways were predicted as the main routes through which QSKL may act, and the RAAS was selected to elaborate the pharmacological mechanism of QSKL. After experimental verification, more than one target was confirmed, including renin, Ang II, and AT1R, which illustrates the multitarget characteristic of Chinese herbal formulations. The chronically activated renin-angiotensin-aldosterone system (RAAS) is believed to contribute significantly to the deterioration of cardiovascular function. In this pathway, angiotensin II has critical roles, including the regulation of blood pressure, vasoconstriction, increasing aldosterone secretion, amplifying sympathetic activity and increasing sodium retention, as well as many other actions. It is considered a factor in virtually every form of CHD, and it is applied as a therapeutic target in hypertension and chronic heart failure. Numerous studies have focused on its inhibitors to provide clinical drugs for CHD. Among them, antagonists of AT1R and angiotensin-converting enzyme inhibitors (ACEI) have been routinely used to treat patients with CHD [33,34]. ACE inhibition also affects bradykinin metabolism [35]; moreover, patients' levels of angiotensin II have a tendency to return to pretreatment levels after long-term ACEI treatment [36]. Since ACEI do not seem to have complete protective effects against the detrimental effects of Ang II, AT1-receptor blockers may offer advantages relative to ACEI by effectively blocking the AT1 receptor, which mediates all known detrimental effects of Ang II. The AT1R mediates the majority of the classical biological functions of Ang II [37] and plays a critical role in the regulation of blood pressure, vasoconstriction, aldosterone secretion, the amplification of sympathetic activity, and so forth. All the AT1-receptor antagonists in routine clinical use are extremely well tolerated. Since AT1R blockers seem very promising for the treatment of cardiovascular disease, the AT1R has been regarded as an important target for cardiovascular treatment. In our research, QSKL significantly downregulated the levels of both Ang II and AT1R, indicating an efficacy similar to that of AT1R antagonists. Besides, QSKL can lower RAAS activation from the very beginning: renin. Renin is an aspartyl-protease enzyme produced and activated within the juxtaglomerular (JG) cells of the afferent arteriole in the kidney. Through angiotensin I, it leads to the generation of Ang II, which, as mentioned before, is the primary biologically active hormone of the renin-angiotensin system.
Renin secretion is the critical rate-limiting step in the activity of the entire system [38]. Because of this, QSKL's regulation of renin secretion is of particular interest and importance for understanding its effect in combination with its effect on Ang II, as well as for understanding therapeutic targets for CHD. ALD did not seem to change, which is consistent with published papers [39]; "ALD breakthrough" is thought to be an important mechanism for this. To sum up, this paper presents the idea that the study of the multiple targets of a Chinese herbal formula can be carried out based on the known chemical composition of the herbs, by combining bioinformatics with experimental verification. We take the research on the effect of QSKL on CHD as an example, and the results show that it acts on CHD through multiple targets, especially renin and AT1R, eventually decreasing the level of Ang II, which can treat CHD efficiently. From this, a credible and objective method is provided to understand and confirm the complicated multitarget mechanisms of Chinese herbal formulations. However, some problems still exist. For example, in predicting drug targets, the distribution and metabolism of the herbal formulation in the body were not taken into consideration in our research; we presumed that all components of the herbal formulation are absorbed and utilized. Improvements should be made in our future work.

Authors' Contributions
Y. Wang, Z. Liu and C. Li contributed equally to this work.
Marine-Sourced Anti-Cancer and Cancer Pain Control Agents in Clinical and Late Preclinical Development †

The marine habitat has produced a significant number of very potent marine-derived agents that have the potential to inhibit the growth of human tumor cells in vitro and, in a number of cases, in both in vivo murine models and in humans. Although many agents have entered clinical trials in cancer, to date only Cytarabine, Yondelis® (ET743), Eribulin (a synthetic derivative based on the structure of halichondrin B), and the dolastatin 10 derivative monomethylauristatin E (MMAE or vedotin), as a warhead, have been approved for use in humans (Adcetris®). In this review, we show the compounds derived from marine sources that are currently in clinical trials against cancer. We have included brief discussions of the approved agents, noting where they are in trials to extend their initially approved activity (a common practice once an agent is approved), and have also included an extensive discussion of the use of auristatin derivatives as warheads, plus an area that has rarely been covered: the use of marine-derived agents to ameliorate the pain from cancers in humans and to act as adjuvants in immunological therapies.

Introduction
Rather than discuss the agents that are currently in use from marine-sourced organisms, which will be covered in another review in this journal, we will discuss agents from marine or marine-derived sources that are either in clinical trials or in advanced preclinical status. Obviously, we will not be covering all such agents, as some are known only by a code number without any other information being available, whilst others are in "preclinical status" according to the authors of a paper or communication; in truth, most of these are simply reports of some in vitro activity against cell lines or some preliminary data on in vivo activity in rodents. We will also avoid using the source organism as the method of classification, as it is now becoming quite evident that the majority of compounds reported from the marine environment are in fact produced by, or in concert with, single-celled organisms ranging from protists (frequently dinoflagellates) to bacteria, including a very significant number of as yet uncultured organisms. We will mention some of the materials that have been approved for use in one or more countries but are in fact in clinical trials in others, or are now being used in conjunction with other drug moieties, as these are very common occurrences with antitumor agents once they are approved. For example, although not a marine-derived agent, Taxol® is still in clinical trials, usually as part of a multi-drug regimen, more than 20 years after it was approved by the US Food and Drug Administration (FDA) for the treatment of refractory ovarian cancer. We have organized this review in a manner that is the reverse of what most authors would do, in that we will commence with agents that have been approved but are still in clinical trials, followed by agents in stages of clinical development (nominally Phase I to III), rather than starting with preclinical agents and working forwards. Since a number of the agents in clinical trials are very close relatives of approved materials, we have elected to group these agents after the "approved parent", so that the similarities and differences can be more easily seen, thus giving the full "chemical lineage" in certain cases below.
In addition, we have elected to commence with compounds from marine sources that could be considered "adjuvant therapies", though, with one exception, not in the immunological sense.

Tetrodotoxin (Tectin®; Phase III; Figure 1, 1)
One of the most unusual agents at this stage is the very well-known "marine toxin", the highly substituted guanidine derivative tetrodotoxin (1) [1-3]. Although this is not a formal anti-tumor agent, it is in fact in Phase III trials as an agent (Tectin®) against inadequately controlled pain related to cancer, under WEX Pharmaceuticals in the USA, together with a Phase II trial under the same company, again in the USA, against the neuropathic pain resulting from chemotherapy-induced peripheral neuropathy. Although there was debate in years gone by over the actual source of this agent, there is now little doubt that it is produced by a commensal microbe, though which one(s) is still open for debate [4]. Syntheses of the compound and other derivatives have been published by a variety of chemists, with an excellent recent review by Nishikawa and Isobe giving the highlights of their methodologies and covering some of the early history of this class of toxins [5].

XEN-2174 (Phase II; Figure 1, 2)
This compound, a very slight modification of the naturally occurring χ-conotoxin MrIA, was originally isolated from Conus marmoreus and then optimized by medicinal chemistry [6]. Unlike the other conotoxins, either approved or at various levels of testing, this particular agent is a modified 13-residue peptide and is a noncompetitive inhibitor of the neuronal norepinephrine transporter (NET) [7].

CNSB004 (Leconotide; Phase I; Figure 1, 3)
This molecule, a 27-residue peptide with three internal Cys-Cys bonds, is similar to the well-known pain treatment ziconotide and is currently in Phase I trials sponsored by Relevare Pharmaceuticals (previously named CNSBio) for the treatment of pain related to cancer. It is a calcium channel blocker and was originally identified by researchers at the University of Queensland. Although initial experiments used the intrathecal route (as with ziconotide) [8], the current protocol uses systemic administration [9].

Immunological Use of Keyhole Limpet Hemocyanin (KLH; Phases I-III)
KLH has been used for many years as a classical immunoadjuvant and has been approved in countries from Austria to South Korea, mainly for the treatment of bladder cancer [10]. Two recent publications gave results from Phase III trials: the first, in metastatic breast cancer, did not demonstrate any increase in median life span [11], but the other, a Phase III trial in bladder cancer using mitomycin as a comparative agent, gave indications that KLH had a positive effect on disease progression [12]. Currently, the ClinicalTrials.gov web site [13] lists Phase III (NCT01480479) and Phase II (NCT01498328) trials using KLH in its adjuvant role against relapsed glioblastoma, and Phase I trials using KLH as part of a vaccine against high-risk neuroblastoma (NCT00911560) and against fallopian tube, epithelial ovarian and peritoneal cancers in patients following a first remission (NCT01248273).

Approved Marine-Derived Antitumor Agents Still in Clinical Trials (and Close Chemical Relatives)
Cytarabine (Phases I to IV; Figure 2, 4)
As mentioned in a news interview in the early 1990s and then formally in a review by the authors in 2000 [14], this agent, though not found in a marine environment as "Ara-C", can trace its chemical lineage back to the discovery of bioactive nucleosides that contained arabinose rather than ribose or deoxyribose. Though we were not the first to recognize the importance of such substitutions, as Suckling [15], in a review in 1991, reported on the chemistry involved in the syntheses of these and other such arabinose-linked nucleosides with common or uncommon bases, we were perhaps the first to formally link the discoveries of the marine-sourced natural arabinose-containing nucleosides by the Bergmann group to the "design" of this agent [16-18]. So Ara-C can legitimately be considered a marine-derived agent, since without the arabinose it would simply have been a normal component of nucleic acids. Even today, there are 840 trials listed in the NIH (National Institutes of Health, Bethesda, MD, USA) clinical trials database (ClinicalTrials.gov), 240 of them open studies that are recruiting, covering a large number of cancers and ranging from Phase IV down to Phase I. In the corresponding European database, 43 clinical trials covering the same phases, but with some overlap, are listed. As with other well-known approved drugs, it is still in use more than 40 years after its initial approval, with an interesting recent paper questioning the use of high-dose cytarabine therapy during remission in adults with acute myeloid leukemia [19].

ET743 (Trabectedin; Yondelis®; Phases I to III; Figure 2, 5)
This compound may well be considered the "poster child" for marine-derived antitumor agents, as it is currently the only molecule in use as an antitumor agent that is identical to one of the compounds originally isolated from Ecteinascidia turbinata. The stories around the discovery and development of this compound, using materials from in-sea and on-land aquaculture, followed by semi-synthesis from a precursor molecule from a marine microorganism, cyanosafracin B, have been told by many authors over the years. These range from the initial reports of bioactivity in this organism in 1970 by Sigel et al. [20] and the initial identification of the series by Holt in his PhD thesis in 1986 [21], to the simultaneous publications in 1990 from the laboratories of Rinehart at the University of Illinois (Urbana-Champaign, IL, USA) [22] and Wright at Harbor Branch Oceanographic Institution (Fort Pierce, FL, USA) [23] of the structure of ET743. This work was followed by the thorough discussion given by the investigators at PharmaMar (Madrid, Spain) in 2009 [24], demonstrating the power of both semi-synthesis and process optimization to obtain active drug principles. The molecule was approved in the EU (European Union) in 2007 for the treatment of advanced soft tissue sarcoma and, in 2009, in some of the EU countries for the treatment of recurrent platinum-sensitive ovarian cancer when coupled to liposomal doxorubicin, but the corresponding U.S. FDA (Food and Drug Administration) application was withdrawn. The commonalities and differences in the pharmacological responses of trabectedin and its close relatives, Zalypsis® and lurbinectedin (vide infra), have recently been discussed with respect to their experimental effect upon the Fanconi anemia pathway. Martinez et al.
[25] demonstrated that these three agents inhibited the Fanconi anemia pathway in the cell lines tested, increasing their sensitivity to mitomycin C, in contrast to mitomycin C itself, which always activated that pathway in the same cell lines. The authors suggested that, as a result of these findings, these three agents might be useful clinically in "Fanconizing" cancer cells in order to gain sensitivity to other anti-tumor drugs. In another paper the same year, Romano et al. [26] reported that, in in vitro and in vivo models, no relationship was found between in vitro cytotoxic potency and in vivo antitumor activity in syngeneic mouse models, suggesting that there might well be a host response in these models. In addition, the pharmacokinetics differ, even between the quite similar trabectedin and lurbinectedin in humans, and, as expected from the differences in structure, Zalypsis® has been shown to differ in pharmacokinetics in humans [27]. As of the end of October 2013, there were 15 studies found for ET743 in the NIH clinical trials database, 12 at Phase II and three at Phase I, all listed as completed, with cancer types covering breast, prostate, soft tissue sarcoma and osteosarcoma, plus general carcinomas. Searching the corresponding EU Clinical Trials Register, 19 trials were listed, ranging from 2005 to 2013, with three being Phase III trials not found on the NIH site. These were two trials against refractory ovarian cancer with liposomal doxorubicin, and a third for patients with translocation-related sarcomas. Again, these listings demonstrate that once a compound has been approved for the treatment of one type of cancer, it will be placed in clinical trials for many others, either individually or as part of a new drug regimen. A discussion of the probability that ET743 and its congeners are produced by as yet uncultured microbes associated with the source tunicate was recently published by Giddings and Newman [28] and should be consulted for further details.

Zalypsis® (Phases I-II; Figure 2, 6)
This compound, another variation on the basic structure of the dimeric isoquinoline alkaloids, was derived from the structures of jorumycin, a compound isolated from the skin and mucin of the nudibranch Jorunna funebris [29], and renieramycin J, from a species of the marine sponge Neopetrosia. Zalypsis® was synthesized by workers at PharmaMar (Madrid, Spain) using methodologies related to the ET743 synthesis from safracin B [30]. The initial report of the molecular pharmacology of this agent was given by Leal et al. in 2009 [31], and even though it has a close resemblance to ET743, it does not activate the DNA damage checkpoint response. Currently, both the NIH and EU clinical trials sites show three clinical trials at the Phase I/II levels, with one in Spain shown as still continuing. There are eleven reports to date in the literature, with results from Phase I studies reported in 2012 from work in the UK [27]. These were followed by further reports in 2013, where objective responses, mainly stable disease, were seen in a small number of patients [32,33]. In contrast, also in 2013, a report was published demonstrating a lack of response and termination of the Phase II trial of this compound in a heavily pretreated population with advanced and/or metastatic endometrial or cervical cancer [34].
Lurbinectedin (PM-01183; Phases I-II; Figure 2, 7)
This compound is another variation on the basic structure of the dimeric isoquinoline alkaloids, but has a tetrahydro-β-carboline moiety instead of the tetrahydroisoquinoline present in ring C, and binds in the DNA minor groove [35]. The compound was shown to have different pharmacokinetics in patients and also, like trabectedin, to attenuate nucleotide excision repair (NER). It also demonstrated synergy with platinum-based agents in vitro, thus suggesting a possible treatment regimen, since it also demonstrated activity against platinum-resistant cell lines [36]. Two Phase II clinical trials with lurbinectedin are shown on the NIH clinical trials site, one recruiting and one approved but not yet recruiting, together with two Phase I trials recruiting and one approved but not yet recruiting. On the European clinical trials site, one Phase II trial, corresponding to the "not yet recruiting" trial listed on the NIH site in the USA, is ongoing in Spain, and the other is a two-year-old trial against metastatic pancreatic cancer in Spain and the UK.

Eribulin (Halaven®; Phases I-IV; Figure 2, 8)
The story of the discovery of this compound (a totally synthetic variation on halichondrin B) has been given in a variety of formats over the years, from the chapter by the Eisai scientists in Woburn, MA, that showed the progression from the synthesis of halichondrin B to the initial synthesis of eribulin [37], to two recent papers on the industrial methodologies that enabled the production of this molecule, certainly the most complex synthetic drug to date [38,39]. As with the other approved compounds mentioned earlier, Halaven® is currently shown as being in 28 trials that are recruiting patients, 27 of them at Phase I, II or I/II. The one Phase III trial is a physician's-choice model, with Halaven® being one of the three drugs to choose from. Of the other 43 trials shown, 21 are active but not recruiting, the majority at Phase I or II, though two are at Phase III and one (Phase IV) is a post-market surveillance. One trial in the list was terminated, with no reason given. In addition, a new liposomal formulation of eribulin mesilate is currently in a Phase I clinical trial (NCT01945710) in the United Kingdom under the auspices of Eisai. The geographic areas of these trials effectively cover the world, but the majority are either in the USA or Europe. Summing the figures on the map on the ClinicalTrials.gov web site always gives a higher total, as a significant number of trials cross geographic boundaries within the one trial.

Brentuximab Vedotin (Adcetris®; Phases 0 to IV; Figure 2, 9)
This immunoconjugate, with a "warhead", monomethylauristatin E (vedotin; Figure 2, 10), derived from dolastatin 10, a secondary metabolite from a Symploca species of cyanophyte, was approved in the USA in 2011 for the treatment of CD30-positive lymphoproliferative disorders such as Hodgkin's lymphoma. This combination was the second immunoglobulin-warhead combination approved for leukemias, following the initial approval of Mylotarg® by the FDA in 2000. Subsequently, Mylotarg® was withdrawn in the USA in 2010 due to concerns about the product's safety raised by a confirmatory study conducted after approval, as patients on the preparation who were also receiving chemotherapy had a higher death rate and no objective increase in life span when compared to a group receiving chemotherapy alone. This combination is still in use in other countries.
A relationship to marine sources for the "warhead enediyne molecule" was established when investigators at the Scripps Institution of Oceanography (La Jolla, CA, USA) showed the presence of cryptic enediyne clusters in marine bacteria of the genus Salinispora [40]. Adcetris® is the product of extensive work by Seattle Genetics (Seattle, WA, USA), first in optimizing the vedotin warhead (10) and then in developing the linkers that couple the antibody to the compound [41]. Some of these, discussed later, are designed to release the warhead (vedotin) by simple hydrolysis of a linker bond, whereas others require the enzymatic digestion of the antibody, releasing the warhead plus appendages. It was approved by the FDA in 2011; approval was subsequently given in the EU late in 2012, and the product was launched in the UK in early 2013, all for CD30-positive malignancies. Full explanations of the methodologies used and the utility of this agent against a variety of lymphomas have been published in the last three years and should be consulted by the interested reader [42-45]. In addition, a recent report from Takeda (Osaka, Japan) shows the strategy that this company is adopting, including the further development of this agent [46]. Currently, this agent is in 37 trials, mainly in the USA, ranging from Phase 0 to Phase IV, the last of which is listed as recruiting on the NIH clinical trials site. Six more are listed in the EU clinical trials site, covering Phase II to Phase IV. Glembatumumab Vedotin (Phase II) This is monomethylauristatin E (MMAE) linked to a fully human monoclonal antibody, CR011 (an anti-CG56972), via a stable valine-citrulline dipeptide linker. It was initially targeted against patients with unresectable stage III or IV melanomas who had failed one cytotoxic chemotherapy regimen, and has since expanded to include metastatic breast cancer as well. The combination had a variety of names during its early days, including CDX-011, CR-011 and CR011-vcMMAE, so searching for data can be a trifle challenging. The initial report of the use of this combination was given by investigators from CuraGen in 2006 [47], followed by a report of xenograft activity in 2007 from the same group [48]. The value of the monoclonal's target in triple negative breast cancer was described in 2010 by Rose et al. [49], with the corresponding details in melanoma described in 2012 by a group from the People's Republic of China [50]. Currently, three completed studies at the Phase I/II levels are reported in the NIH clinical trials database, with one preliminary report of clinical activity in breast cancer patients [51]. ABT-414 (Phase I-II) This is an antibody-drug conjugate (ADC) linking the anti-Epidermal Growth Factor Receptor (EGFR) antibody ABT-806 to another variation on auristatin; in this case, monomethylauristatin F (Figure 2, 11) is used in place of the "E" variant. The ADC was designed to bind to a unique epitope of EGFR that is usually not accessible when EGFR is expressed at physiological levels. However, the ADC binds when tumors express EGFRde2-7 (EGFRvIII) and in other tumors with amplified EGFR or excessive EGFR activation under "normal wild-type conditions" [52]. AbbVie (North Chicago, IL, USA), the renamed Abbott pharmaceutical division, recently instituted two human clinical trials after trials in mice bearing human wild-type EGFR-overexpressing tumors gave complete regressions and "cures" [52].
Phase I studies, in which patients must have a solid tumor type likely to over-express EGFR, are underway evaluating the safety, pharmacokinetics and efficacy of ABT-414, with a Phase IIa expansion (NCT01741727) at the maximum tolerated dose (MTD) in which patients, accepted by invitation only, must have squamous cell non-small cell lung cancer (NSCLC). The other trial at the Phase I level (NCT01800695) is a study evaluating the safety and pharmacokinetics of ABT-414 in subjects with glioblastoma multiforme, in combination with radiation plus temozolomide or temozolomide alone; the study is currently recruiting patients with this particular disease. PSMA-ADC (Phase II) This ADC is a fully human monoclonal antibody against prostate-specific membrane antigen (PSMA) that is coupled via the valine-citrulline dipeptide linker to monomethylauristatin E (MMAE) and was designed to release the warhead via proteolysis by human cathepsin B. The initial report demonstrating activity in prostate cancer cells and in xenografts was published in 2006 [53]. This was followed in 2011 by a report showing expanded activity against androgen-sensitive and -insensitive cell lines in xenografts [54]. Since there are now reports of the PSMA antigen being present in glioblastoma multiforme, this ADC is in a Phase II trial currently recruiting patients with this specific cancer, in addition to the Phase I and II trials against prostate cancer. All three trials are listed as current in the NIH clinical trials database, but no trials are shown in the EU database at this time. DCDT-2980S (Phase II) This ADC from Genentech (Roche, San Francisco, CA, USA) is a humanized IgG1 antibody directed against the CD22 epitope, with the drug linked to sulfhydryl groups on the antibody via a maleimide derivative. This derivative is the same as that used in Adcetris®, releasing monomethylauristatin E on protease cleavage. Since the CD22 epitope is not expressed in rodents, trials for safety were performed in cynomolgus monkeys and demonstrated adequate safety in primates plus efficacy in xenografts [55]. Currently there is one Phase II trial recruiting and one active trial in the NIH clinical database, and no records in the EU equivalent. These trials are in leukemias, not solid tumors. DCDS-4501A (Phase II) This is also from Genentech/Roche, and is an ADC with monomethylauristatin E linked to an anti-CD79b monoclonal. It is currently in the same Phase II trial as DCDT-2980S as an alternative treatment against follicular B-cell lymphoma, and is also in an ongoing Phase I trial against various lymphomas in a dose-escalation study. As with the earlier Roche agent (Section 3.4.4), no trials are listed in the EU database. Enfortumab Vedotin (Phase I) This combination, a fully human IgG1κ antibody linked to monomethylauristatin E via a cleavable valine-citrulline linker, is also known under the code names AGS-22MSE and AGS-22ME and is currently undergoing Phase I evaluation under the aegis of Agensys (Ashburn, VA, USA), Seattle Genetics and Astellas Pharma (Tokyo, Japan). It should be pointed out that Agensys is a subsidiary of Astellas Pharma; the philosophy behind the approach from their standpoint was reported by Yanagita and Takenaka in 2012 [56]. The only record at the moment of the initial development of this agent is in an abstract of the 2011 AACR Meeting in Orlando, Florida [57].
Vorsetuzumab Mafodotin (SGN-75; Phase I) This ADC, from Seattle Genetics, has monomethylauristatin F linked to the humanized anti-CD70 monoclonal antibody 1F6 through a maleimidocaproyl linker that is non-cleavable, so release has to rely upon internalization of the conjugate and then proteolytic digestion [58]. This ADC is currently being evaluated in Phase I trials against relapsed and refractory non-Hodgkin's lymphoma and also metastatic renal cancer, where the cancers express the CD70 epitope. Currently the NIH database shows one completed Phase I trial and one recruiting for renal cell carcinoma, but no trials in the EU database. There are reports in the conference literature on some of the findings but, as yet, no peer-reviewed reports. SGN-19A (SGN-CD19A; Phase I) This is another Seattle Genetics ADC, in which a humanized anti-CD19 antibody is linked to monomethylauristatin F through a maleimidocaproyl-valine-citrulline linker. Currently there are two Phase I trials in the NIH database that are actively recruiting patients with lymphomas, including Burkitt's lymphoma. One presentation at the 2013 AACR meeting is the only published report of progress at the moment [59]. BAY 79-4620 (3ee9/MMAE; Phase I) This ADC is monomethylauristatin E linked to an antibody against human carbonic anhydrase IX, and was made using the Seattle Genetics techniques as described for the anti-CD30 ADC (now known as Adcetris®) by Francisco et al. in 2003 [60]. Two Phase I trials are listed in the NIH database, one shown as completed (determination of the MTD in patients with advanced solid tumors), but the other, again an MTD-based study, was terminated for safety reasons. The difference between the two trials, from the database descriptions, appeared to be the frequency of treatment: every three weeks in the completed trial versus every two weeks in the terminated one. No data are shown in the EU database. AGS-16C3F (AGS-16M8F; Phase I) This ADC is a fully human IgG2κ monoclonal antibody directed against the AGS-16 antigen and conjugated to monomethylauristatin F (MMAF) via a non-cleavable maleimidocaproyl linker. This particular ADC is directed against renal and liver carcinomas, which express the AGS-16 antigen. Details of the initial discovery and results of cell line and xenograft testing were presented at the 2010 Genitourinary Cancers Symposium by Gudas et al. [61]. Currently there is one Phase I trial against renal cancer recruiting according to the NIH database, and one completed Phase I trial looking at the safety of dose escalation. DMUC-5754A (RG-7458; Phase I) This ADC is a monoclonal antibody against the epitope MUC16 linked to monomethylauristatin E, and is targeted against ovarian carcinomas; no further details, other than a report in an abstract at the 2013 AACR meeting [62], are available at the present time. Currently the NIH web site shows that Genentech is recruiting patients for a Phase I trial against both ovarian and pancreatic cancer. DNIB-0600A (RG-7599; Phase I) This ADC is a humanized IgG1 monoclonal antibody directed against the NaPi2b epitope linked to monomethylauristatin E. No information as to the method of linkage is available, and the only report is in an abstract at the 2013 ASCO meeting [63]. Phase I trials against non-small cell lung cancer and platinum-resistant ovarian cancer are actively recruiting patients according to the NIH database. A1-mcMMAF (PF-06263507; Phase I) This ADC is monomethylauristatin F linked via a maleimidocaproyl linker to a humanized monoclonal antibody directed against the 5T4 tumor antigen.
The ADC was prepared using the basic techniques described by Doronina et al. [41], demonstrated potent antitumor activity in in vivo xenograft models and exhibited no overt toxicities when delivered to cynomolgus monkeys [64]. Currently Pfizer is recruiting patients for a Phase I trial against advanced solid tumors according to the NIH database. DMOT-4039A (Phase I) This ADC is a monoclonal antibody, identified as MMOT-0530A, that was raised against an unnamed antigen overexpressed in pancreatic and ovarian cancers, conjugated to monomethylauristatin E. Currently the ADC is in two Phase I clinical trials. One, in the USA and The Netherlands, is recruiting patients with unresectable pancreatic or platinum-resistant ovarian cancers (NCT01469793), whilst the other (NCT01832116), also directed against the same cancers and actively recruiting patients in The Netherlands, is designed to use PET imaging with 89Zr linked to the MMOT antibody as the imaging agent, followed by use of the ADC, thus permitting an assessment of the imaging and the subsequent response to therapy. RG-7600 (Phase I) This ADC, which, from the comments on the Genentech web site [65], is in Phase I studies against ovarian cancers, has no other information in the literature. However, since the web site states that it is in collaboration with Seattle Genetics, the warhead may be an auristatin derivative. No details as to current trials can be found in the NIH database. DEDN-6526A (RG-7636; Phase I) As with RG-7600, the only information is from the Genentech web site, where this ADC is listed as being in Phase I against unresectable melanoma. Since Seattle Genetics is also involved, the warhead may be an auristatin derivative, and the antibody may well be directed against endothelin ETB receptors. One trial that is recruiting patients is shown in the NIH database (NCT01522664). DSTP-3086S (RG-7450; thio-antiSTEAP1-MC-vc-PAB-MMAE; Phase I) This is another of the Genentech (Roche) ADCs, using in this case a humanized anti-STEAP1 IgG1 antibody modified via determination/modification of reactive thiols according to the patent application by Bhakta and Junutula [66], and coupled to monomethylauristatin E. The antibody is directed against the six-transmembrane epithelial antigen of prostate 1, hence the STEAP1 acronym, and was evaluated as both the basic ADC with monomethylauristatin E and the thio-modified antibody, with the decision being to go with the thio-modified version on the basis of the PK determinations [67,68]. Currently there is one Phase I study recruiting patients with metastatic castration-resistant prostate cancer (NCT01283373), with a preliminary report showing some clinical responses given at the 2013 ASCO Meeting [69]. MLN-0264 (Phase I) This is an ADC composed of a fully human monoclonal IgG antibody (5F9) directed against guanylyl cyclase C (GCC), conjugated to monomethylauristatin E via a cleavable linker (licensed from Seattle Genetics). The antibody is directed against gastrointestinal tumors that express GCC. This is a first-in-class drug candidate, with the initial report of the rationale given in November 2012 at the 14th EORTC-NCI-AACR meeting in Dublin, Ireland [70]. A report at the next conference in the series was given in 2013, demonstrating activity both alone and in conjunction with gemcitabine in xenograft models of pancreatic cancer [71].
The compound is in a Phase I trial (NCT01577758) and is currently recruiting patients with GI cancers expressing the required antigen. RG-7598 (Phase I) As with RG-7600 (Section 3.4.15), the only information is from the Genentech web site, where this ADC is listed as being in Phase I against multiple myeloma. Since Seattle Genetics is also involved, the warhead is probably an auristatin derivative. No trials can be found in the NIH database as of early November 2013. SGN-LIV1A (Phase I) This ADC is being developed by Seattle Genetics and is an anti-LIV-1 humanized monoclonal antibody linked to monomethylauristatin E. The LIV-1 epitope, also known as SLC39A6 or ZIP6, is a member of the zinc transporter family. It was first identified as an estrogen-inducible gene in breast cancer-derived cell lines. It is a downstream target of STAT3, and promotes the epithelial-to-mesenchymal transition important in the malignant progression of breast cancer to the metastatic form. It is expressed in subtypes of metastatic breast cancers (ER+/HER2−; HER2+ and triple negative). However, in healthy human tissues, its expression is limited to four hormonally-regulated organs. Both in vitro and in vivo models demonstrated significant delay of tumor growth on treatment with the ADC [72]. This ADC is in a Phase I study that is currently recruiting patients (NCT01969643) with metastatic breast cancer to determine safety and any antitumor activity during the trial. AGS-15E (AGS-15ME; Phase I) This is an ADC with the fully human IgG2 monoclonal antibody (AGS15), whose target is SLITRK6, conjugated to monomethylauristatin E (MMAE) through the maleimidocaproyl-valine-citrulline linker from Seattle Genetics. The target of this antibody, SLITRK6, is a member of the SLITRK family of neuronal transmembrane proteins, and was discovered by Agensys using suppression subtractive hybridization on biopsies from bladder cancer patients [73]. SLITRK6 expression was evaluated by immunohistochemical (IHC) analysis in various human cancers, including bladder, using the SLITRK6-specific mouse monoclonal antibody M15-68(2)22, which demonstrated that 90% of 452 human bladder transitional cell carcinomas from in situ, advanced primary and metastatic tumors express this epitope. In addition, the same expression was seen in some lung and breast cancers and in glioblastomas as well. In normal tissues, expression is significantly restricted [73]. Currently a Phase I trial (NCT01963052) is actively recruiting patients with metastatic urothelial cancer. Preclinical Auristatin-Linked ADCs The following ADCs are currently in advanced preclinical development, but the information currently available is minimal. CDX-014 (CR-014-vcMMAE) From the code name, this is a valine-citrulline-linked monomethylauristatin E linked to the fully human anti-TIM-1 monoclonal antibody CR-014. This antibody was developed to selectively target TIM-1 (Kd = 2.7 nM), a type I transmembrane protein expressed on the surface of ovarian and renal carcinoma cells, with poor expression in normal tissues. HuMax-CD74 This auristatin-linked ADC in preclinical development uses HuMax-CD74, an antibody that targets the HLA class II histocompatibility antigen gamma chain (CD74). This epitope is expressed in a wide range of hematological malignancies and solid tumors, and the ADC is being developed through a collaboration between Genmab (Copenhagen, Denmark) and Seattle Genetics [74]. TF-011-vcMMAE This is an ADC under development for the treatment of solid tumors.
It is composed of a human tissue factor (TF)-specific antibody (TF-011) linked via a protease-cleavable valine-citrulline (vc) linker to monomethylauristatin E (MMAE). TF is aberrantly expressed in many solid tumors, and its expression has been associated with poor prognosis [75]. Aplidine (Plitidepsin, Aplidin®; Phases II-III; Figure 3, 12) This compound is currently the only non-approved marine-derived agent in Phase III clinical trials for cancer. Its history is quite convoluted, as it was originally identified from an extract of the Mediterranean tunicate Aplidium albicans. It was first reported in a patent from the Rinehart laboratory at the University of Illinois at Urbana-Champaign while that laboratory was working on the synthesis of didemnin B (the first direct-from-the-sea compound to go into human clinical trials against cancer) [76]. Subsequently, with the withdrawal of didemnin B due to toxicity and immunosuppressive effects, which may have been exacerbated by the then-current drug regimens (a bolus at the MTD), PharmaMar began developing aplidine. This time the agent was made by total synthesis using data from the Rinehart patents [77,78], and then further developed by Jou et al. [79]. The compound could also be made by modification of didemnin A, with a patent covering this process being published in 1995 [80]. Further information as to the synthetic methodologies for aplidine and other related compounds can be found in the excellent publication from the Joullié group published in 2012 [81]. The mechanism of action of this agent is still not fully identified but involves multiple regulatory pathways, as can be seen in the discussion in the review by the PharmaMar group in 2012 [82]; the compound does, however, demonstrate activity in a variety of cancers. These include a recent report of activity in pediatric medulloblastomas, with demonstrable partial responses and disease stabilization, although the small patient number did not allow an efficacy determination [83]. Currently, this agent is shown as being in one Phase III trial in the NIH clinical trials database and in five Phase II trials covering a variety of leukemias and liposarcomas. What is also of interest is the recent report that didemnin B has been produced by fermentation of a free-living marine-sourced microbe, with the complete genomic cluster identified and production of didemnin B confirmed by "MALDI-TOF interrogation" of the metabolites in real time [84]. To date, no report of aplidine from this source has been published, but it is highly likely either that a microbe is the producer of aplidine or that a simple oxidation of didemnin B produces aplidine, since the only difference is the presence of a carbonyl in the side chain of aplidine instead of a C-OH at the same position in didemnin B. Kahalalide F (PM-92102; Phase II; Figure 3, 13) This compound, a depsipeptide originally isolated from the sacoglossan mollusk Elysia rufescens and subsequently from the alga Bryopsis pennata upon which it feeds, was also found in an Indian Ocean Elysia of a different species [85]. There is a possibility that the material is actually produced by a commensal microbe on the alga, but this has not yet been definitively proven. The compound was licensed to PharmaMar by the University of Hawaii at Manoa (Honolulu, HI, USA), and PharmaMar then developed synthetic methods to produce the compound in bulk using peptide chemistry techniques.
It entered clinical trials against prostate cancer, malignant melanoma and non-small cell lung cancer, with a mechanism of action that involved oncosis (ischemic cell death) [86]. From inspection of the NIH database, it appears that PharmaMar currently has no clinical trials of this compound underway, but in the EU Clinical Trials Register a Phase II trial in non-small cell lung cancer is still ongoing in Spain under the EudraCT number 2004-001253-39. The compound was "out-licensed" to Medimetriks Pharmaceuticals (Fairfield, NJ, USA) in 2009 for indications other than oncology and neurology, in particular the treatment of psoriasis (a proliferative disease). The original discoverer of this group of compounds published an interesting paper on selected kahalalide F analogs with antitumor and antifungal activities in 2011 [87]. More recently, in the middle of 2012, PharmaMar stopped development of the kahalalide derivative 1-[N-[(4S)-4-methyl-1-oxohexyl]-D-valine]-kahalalide F, which it had been developing as an antitumor agent under the generic name elisidepsin and the trade name Irvalec® [82], even though it had activity in gastroesophageal tumors. This appeared to be a business decision due to the very low numbers of potential patients. Plinabulin (NPI-2358; Phase II; Figure 3, 14) This compound, a simple modification of the terrestrial and also marine fungal metabolite halimide [88], entered Phase II clinical trials under Nereus Pharmaceuticals (San Diego, CA, USA). Two completed trials under that sponsor are shown in the NIH database, one at Phase I and the other at Phase I/II. However, no reports have appeared recently, and with indications that Nereus Pharmaceuticals is no longer operative, the fate of this compound is uncertain [89]. Marizomib® (Salinosporamide A; NPI-0052; Phase I; Figure 3, 15) The story of this compound, from its discovery from the marine actinomycete Salinispora tropica to its identification as a proteasome inhibitor, has been covered extensively in the scientific literature. The reports include the work-up to give cGMP product from the first marine-medium-based large-scale fermentation, through syntheses by a variety of chemists in both academia and companies, and identification of the biosynthetic cluster in the producing organism(s) [90-94]. Currently one clinical trial is shown in the NIH database as recruiting patients for a study in multiple myeloma (NCT00461045). What is of note is that the site now lists the sponsor as Triphase Research and Development Corporation, a CRO, rather than Nereus Pharmaceuticals, and the other three completed Phase I trials in the NIH database likewise no longer show Nereus but Triphase. This may well confirm the information about the current status of Nereus Pharmaceuticals [89]. PM-060184 (Phase I; Figure 3, 16) Recently, workers at PharmaMar reported the isolation and then the total synthesis of two novel polyketides from the Madagascan sponge Lithoplocamia lithistoides [95]. These two novel agents, PM-050489 and PM-060184, demonstrated sub-nanomolar in vitro activity against human cancer cell lines as well as potent antimitotic activity and, specifically, demonstrated a new biochemical mechanism of interaction with tubulin [96]. The compounds also demonstrated potent in vivo activity in different animal models, and one of the two, PM-060184, is currently in Phase I clinical trials (NCT01299636). Thus, even today, more than 25 years after the identification of the MOA of Taxol®, novel tubulin-interaction mechanisms are still being discovered.
Conclusions In an entirely different aspect of marine pharmacology, the work described in the first section of this review, with "toxins" controlling cancer-related pain, is a beautiful example of where agents considered to be deadly poisons to humans, tetrodotoxin and the Conus peptides, are now leading to potential drugs for use in humans. In a considerable number of papers covering the topic of natural product-based antitumor drugs (irrespective of the natural source), compounds are quoted as being in clinical trials, even though no new trial has been reported in the previous four-plus years and earlier trials are listed as completed. From this perspective, there are two papers, one in this journal in 2010 [97] and a very recent and truly comprehensive compilation of clinical trials of a large number of marine-related drugs and drug candidates [98], that could be considered to have partially covered the topic from a marine-source perspective. In these two papers, however, a significant number of the agents discussed are no longer in clinical trials. In contrast to this approach, we have checked for current or recent antitumor trials of all of the compounds above (except for the three listed as preclinical) and have demonstrated that, although the number of marine-derived agents in active clinical studies en route to approval is small compared to earlier years, all of those listed have current or recent clinical trials shown in either the NIH or the corresponding EU clinical trials databases. We used the qualifier "small" above even though there are 24 ADCs (5 at Phase II, 16 at Phase I and 3 in advanced preclinical status) that are using either auristatin E or auristatin F as their warheads. If one is being conservative, these are composed of only two different basal structures, with the other differences being in the monoclonal antibody and the method of linkage. We would also be remiss in not pointing out that none of these ADCs would have been made but for the discovery of the dolastatins and their subsequent syntheses in the middle to late 1970s [99]. With the number of marine-sourced compounds in the literature now over 25,000, there are many other agents just "waiting in the wings" for their chance of stardom; it is our task as marine natural product chemists to find them and, ultimately, to develop them as either drugs or leads thereto. It should also be mentioned that it is now recognized that the probable sources of most of the agents that we have described are single-celled organisms, ranging from eubacteria through to eukaryotes such as fungi and protists. As yet, we are not aware of any published compounds from the archaea that have antitumor activity, but such reports may well appear in due course. Conflicts of Interest The authors declare no conflicts of interest.
Precarious Oceans and Vulnerability: Micropolitics of Care in Romesh Gunesekera's Reef The inoperative grammatology of post(g)locality, followed by the incremental desires of neoliberal elites to marketize abundant oceanic resources scattered across the world, renders the oceans extremely vulnerable, an appalling phenomenon which at once lays bare the vulnerability of the oceans conditioned by the strands of 'precariousness' and at times calls for the actualization of a 'micropolitics of care', an ethically sound exercise which seems able to hold the oceans back from being economically subjected to the predatory 'faces' of contemporary neoliberal precarity. In this context, Romesh Gunesekera's Reef is critically taken up to examine the rapid disappearance of coral reefs along with the illegal marketing of endangered marine species like the dolphin, so as to make readers aware of how the ocean stands at risk and, moreover, to put literary emphasis on the enactment of a 'micropolitics of care' which seems able to effectively take on the wicked designs of contemporary neoliberal precarity for the greater sake of planetary consciousness. Introduction The face is a politics. (A Thousand Plateaus, Deleuze and Guattari 181) 'x explains y, signed z'. (Dialogues II, Deleuze 19) In continuation with what Michel Foucault aptly reflects in "Society Must Be Defended", that power politics plays an instrumental role in redefining the rights to live and die in contemporary times: "It is the power to make 'live' and 'let' die . . . And then this new right is established: the right to make live and to let die" (241), it can further be worked out in the context of post(g)locality that, with the advent of the neoliberal economic regime in the post-1990s, the question of precarity once again gets 'resurfaced', as the contemporary form of Neoliberalism steadily switches from 'care-mentality' to newer structures of 'precarity', an important sociopolitical turn that allows the 'logic of precarity' to perform as an 'abstract machine' to produce 'conditions' for the advancement of diverse economic 'territorializations'. Whereas noted economist Guy Standing explains the 'precariat' as a "new social class" (Hogg 1) which stands checkered by various structures of 'precarity' in the form of economic (in)security, Judith Butler ingeniously couples the 'logic of precarity' with an "inescapable vulnerability" (Hogg 1) that finds a home in tangible 'organization', 'distribution' and 'hierarchization', owing to the workings of contemporary Neoliberalism. Following these subtle observations, one may argue that the operative logic of precarity stands in tandem with the functional becomings of Late Capitalism, which rides the scopes of contemporary neoliberalism to let 'precarity' find new 'faces' and 'surfaces' in the rigid structures of contemporary Neoliberalism. Interestingly, the 'becomings' of precarity, in the times of contemporary Neoliberalism, do not just stand confined to the lived experiences of impoverished human beings dwelling in the margins of societies; rather, they have profound bearings on the silent negotiations of nonhuman agents on the Earth, in the sense that greedy contemporary neoliberalists prefer to marketize the abundant nonhuman resources of the Earth, thereby producing newer 'conditions' for 'precarity' to function as an 'abstract machine'. In other words, the 'abstract machinic' functioning of 'precarity' allures the 'human' agents of contemporary Neoliberalism to hold the nuanced 'negotiations' between nonhuman entities by territories and
strata, thereby imposing the structures of governmentality on the geokinetic unfolding of the Earth. In short, the pervasive presencing of precarity lays bare the sheer vulnerability of the oceans, which constantly grapple with the exploitative onslaughts of contemporary Neoliberalism to let the Earth find 'lines of flight' 1 on the one hand and, on the other hand, calls for the urgent actualization of 'care-mentality' to safeguard the existing marine resources from inevitable extinction. This article seeks to elucidate why the territorializing precarity of contemporary Neoliberalism needs to be discarded to restore the 'micropolitics of care' on the 'plane of consistency' 2 so that new forms of human 'negotiations' 3 with nonhuman entities like the oceans can be worked out, both to keep the challenging 'faces' of precarity at bay and to enunciate the vulnerability of the oceans afresh. In order to contextualize contemporary Neoliberal precarities and their relationship with the vulnerability of the ocean, Romesh Gunesekera's masterpiece Reef (1994) is taken into account, which exposes how the mushrooming growth of the global cultural tourism industry, in the post-1990s Sri Lankan context of post(g)locality, impacted pristine marine and adjacent coastal lives equally, thereby resulting in the steady rise of the blue trafficking of endangered marine species across the world and exacerbating the vulnerability of the ocean to the designed precarities of neoliberal elites. This article is split into three interconnected segments: whereas the opening segment makes a modest attempt to expose the operations of precarity in connection with the question of vulnerability, the second segment puts the spotlight on the nuanced interactions between precariousness and vulnerability in the context of the ocean, aiming at foregrounding the need for a 'micropolitics of care' to safeguard marine resources, and the third segment takes up Gunesekera's Reef to contextualize the actualization of the 'micropolitics of care' against the looming 'faces' of neoliberal precarity so as to forge 'human' negotiations with the 'nonhuman' ocean anew.
Offsetting Precarity: Making Sense of Vulnerability I wish to present precarity as a condition of vulnerability relative to contingency and the inability to predict. (Ettlinger 320) In "Performativity, Precarity and Sexual Politics" (2009, 2), Judith Butler explains that the notion of precarity implies a "politically induced condition in which certain populations suffer from failing social and economic networks of support and become differentially exposed to injury, violence, and death", thereby implying that, whereas the responsible government of a nation-state is expected to reduce the 'conditions' of precarity to let people 'live' and 'let' them die, practitioners of contemporary Neoliberalism choose to exploit the 'abstract machinic' 4 potentials of precarity to keep people on tenterhooks so that biopolitical measures can be taken up to restrict the 'free' economic movements of human beings. In other words, the contemporary neoliberal government tries to maximize the 'conditions' of precarity to exploit the economic uncertainties of marginalized human beings who, in turn, are subjected to biopolitical measures for sheer exploitation. In short, precarity turns out to be a neoliberal instrument that exposes marginalized people to various 'surfaces' of uncertainty, which contributes to the conditional co-becomings 5 of vulnerability along with the changing 'faciality' 6 of precarity. Whereas Butler views precarity as a "politically induced condition of maximized vulnerability" (2), Liam Conwell distinguishes the 'language of precariousness' from the 'language of precarity' in that, whereas "the former term can be interpreted as a vernacular description of the conditions of uncertainty that are endemic to contemporary capitalism, the latter is a more immediately political concept that signals both the conditions of uncertainty and a subjective consciousness of these conditions that can be actuated as a site for politics" (30). Taking Conwell's standpoint into account, it can be argued that there exists an onto-epistemological difference between 'precarity' and 'precariousness': whereas the notion of 'precarity' is socio-politically charged up, 'precariousness' could well be understood as a descriptive metaphor for referring to the 'conditions' of uncertainty. In other words, 'precariousness' could also be figured out as an ontological given that cuts across the existential becomings of all human and nonhuman entities residing on the Earth. While it is true that the political configuration of 'precarity' happens to be a biopolitical strategy to maximize the heterogeneous orders of uncertainty, largely for the economic exploitation of 'human' beings, the onto-epistemological unfolding of precariousness needs to be critically investigated to bring out the multiplicities of vulnerability. Whereas the biopolitical 'logic of precarity' stands attuned to a model of dense hierarchization and stratification, chancing upon the extremities of economic uncertainty, a Deleuzean reading of 'precariousness' shows that the deterritorial becomings of precariousness get shaped by the way it makes 'alliances' with the political, social and cultural 'forces' of reterritorialization and gets 'unfolded' in tandem with the aleatory movements of singularities. Therefore, an epistemic shift from the 'facialities' of 'precarity' to the 'surfaces' of precariousness needs to be carried out to elucidate how the wave of 'precariousness' in actuality functions as an 'abstract machine' to hold together the coordinates (what Deleuze and
Guattari call 'concrete elements' in A Thousand Plateaus) of 'precariousness' assisted by human and nonhuman entities (what Deleuze and Guattari call 'personae' in A Thousand Plateaus). In Dialogues II, Deleuze and Parnet hold that "there is an AND between the two, which is neither the one nor the other, nor the one which becomes the other, but which constitutes the multiplicity". Taking the viewpoint of Deleuze and Parnet into account, it can tenably be contended that an understanding of the mediatedness of 'precariousness' has to be inclusive of its 'rhizomatic' 7 liaisons with the orders of stratification. Whereas contemporary neoliberalism attempts to biopolitically govern its 'subjects' by means of the capitalistic production of the 'conditions' of precarity, 'precariousness' can well be understood as a 'zone of singularities' 8, the (event)ual variabilities of which negotiate the Event, and their nuanced 'negotiations' get precipitated in "history" (The Logic of Sense, Deleuze 53). Following the singular 'unfolding' 9 of 'precariousness', it can be put forward that 'precariousness' at once 'connects' human beings with their nonhuman counterparts and at times 'disconnects' the former from the latter, inasmuch as it refuses to be reduced to codes, strata and territories. In short, 'precariousness' is a matter of experience and cannot be readily quantified in terms of 'points of reference', as it is intensively charged up with a sort of chaosophical immanentism. 10 In What Is Philosophy?, Deleuze and Guattari pertinently reflect: Chaos is defined not so much by its disorder as by the infinite speed with which every form taking shape in it vanishes . . . Chaos is an infinite speed of birth and disappearance . . . Science approaches chaos in a completely different, almost opposite way: it relinquishes the infinite, infinite speed, in order to gain a reference able to actualize the virtual . . . Philosophy proceeds with a plane of immanence or consistency; science with a plane of reference. (118) Deleuze and Guattari mean to argue that, unlike science, philosophy resorts to a 'plane of immanence' to hold the chaosophical multiplicities back from being caught up in a "freeze-frame" (What Is Philosophy? 118), and, by working out the notion of the 'actual and its virtual coordinate', it can be argued that human and nonhuman beings ceaselessly 'confront' the actualities of 'precariousness' and its virtual coordinates in order to make thoroughfares amidst the oddities of Life.
In this regard, it can be argued that the aleatory singularities of 'precariousness' provide conditional support in the form of an 'abstract machine' to let the 'intensities' 11 of vulnerability find tangible manifestations in the domain of exteriority. Here, one may be reminded of Gediminas Lesutis, who explains the configurational overlapping between 'precariousness' and vulnerability in the following terms: "precariousness, as an ontologically shared vulnerability of a human body, is mediated, negotiated, and constituted into precarity - a spatially engendered condition of everyday life" (22), thereby suggesting that 'precariousness', as it were, lays down a chaosophical 'plane of immanence' that drives vulnerability to find material forms of manifestation. Lesutis aptly holds that it is 'precariousness' that is politically turned into the rigid segmentarities of 'precarity' supported by "extractive capitalism" (22) so as to govern the economic lives of marginalized human beings. Whereas Lesutis figures out 'precariousness' as a 'shared' condition and precarity as 'a condition of life', it is contended after Deleuze and Guattari that, whereas 'precarity' turns out to be a 'performative' reference to 'striated space' 12, 'precariousness' could be held as a 'metonymic' reference to 'smooth space' 13, which always slips into 'alliances' to move towards new 'orders' of reterritorialization. In A Thousand Plateaus, Deleuze and Guattari elucidate that an act of reterritorialization "must not be confused with a return to a primitive or older territoriality, it necessarily implies a set of artifices by which one element, itself deterritorialized, serves as a new territoriality for another" (174). In 'striated space', Deleuze and Guattari argue, "lines or trajectories tend to be subordinated to points", whereas in 'smooth space' "it is the opposite: the points are subordinated to the trajectory" (A Thousand Plateaus, 478). Whereas Lesutis understands vulnerability as "an externally imposed condition" (28) backed up by the strands of precarity and then proceeds to hold vulnerability as an "ontological constraint" (28), it can be put forward, on the contrary, that 'precariousness' is replete with differential 'repetitions' of singularities, which stop vulnerability from being an 'ontological constraint' and instead allow it to forge new 'alliances' with possibilities on the plane of chaosophical immanentism. Thus, the event of vulnerability is grossly irreducible to codes and strata, blocks and territories.
At this point, one may stop and think: does the experience of vulnerability stand limited to marginalized human beings who cannot grapple with the onslaughts of neoliberal precarity? Is vulnerability the actual shared reality of virtual 'precariousness'? Whereas, in "From 'Social Exclusion' to 'Precarity'. The Becoming-migrant of Labour: An Introduction", Carl-Ulrik Schierup and Martin Bak Jørgensen situate the notion of "precarity" in the context of the "precarisation of city life" (13) and show how marginal dwellers suffer the neoliberal restructuring of city life in terms of the actualization of precarity, it can be argued that the material experience of vulnerability cannot be restricted to 'human' lives; rather, it needs to be extended to that of 'nonhuman' beings, which silently take on the adverse impact of the 'precarisation' played out by contemporary neoliberalists. In this regard, one may be reminded of "Vulnerability, Precarity, and the Ambivalent Interventions of Empathic Care", where Vrinda Dalmiya explains vulnerability as "frailties associated with human embodiment" (68) and precarity as "exclusionary political orders" that render "some more vulnerable than others" (68), thereby implying that 'precarity' is a political instrument that is employed in tune with neoliberal capitalism to rule sections of economically marginalized people by rendering them 'more vulnerable than others'. Whereas Dalmiya restricts her observation to the miserable lives of migrant 'care' workers, it is contended that the vulner(ability) of nonhuman entities is politically exploited by contemporary neoliberalists to turn them into veritable vulnerables so as to make them subject to the structures and strictures of precarity. In continuation, one may add that vulnerables are differenciated embodiments of vulnerability, whereas 'precariousness' stands tied to the process of differentiation in the domain of the virtual. Making sense of vulnerability thus stands incomplete if one does not take into account the following excerpt from Dialogues II by Deleuze and Parnet: Every actual surrounds itself with a cloud of virtual images. This cloud is composed of a series of more or less extensive coexisting circuits, along which the virtual images are distributed, and around which they run . . . The actual is the complement or the product, the object of actualization, which has nothing but the virtual as its subject. Actualization belongs to the virtual. The actualization of the virtual is singularity whereas the actual itself is individuality constituted. (148-149) It means that an actual experience of vulnerability is constituted by the virtual presencing of 'precariousness', which accounts for the co-extensive becomings of vulnerability on the plane of chaosophical immanentism. Therefore, the 'logic of precarity' may not be helpful in making sense of the differential 'repetitions' of vulnerability experienced by nonhuman entities, which find in 'precariousness' the kinetic impetus of the gradual unfolding of vulnerability.
Precariousness, Vulnerability and Ocean: Negotiating Micropolitics of Care The blue humanities name an ocean-infused way to reframe our shared cultural history. In "Toward a Blue Humanity", Ian Buchanan and Celina Jeffery contend that the blue humanities as a disciplinary framework need to be worked out to refashion the human-ocean interface, as "[blue humanities aims at] historicizing the ocean and making it part of contemporary consciousness in a way-we hope-that will enable environmental activism's bid to 'save' the ocean" (12). Buchanan and Jeffery seem to suggest that the blue humanities are at once a modest interdisciplinary exercise to restore our "sense" of "connectedness" (12) and at times offer a critical lens to reexamine the dire impact of ecological catastrophes in the context of the ocean, thereby pointing at how global capitalism renders the ocean a vulnerable and reducible totality instead of viewing it as a sum of our shared 'precariousness'. This contention can be backed up by referring to The Logic of Sense, where Deleuze clearly reflects that "Nature is not collective, but rather distributive, to the extent that the laws of Nature . . . distribute parts which cannot be totalized. Nature is not attributive, rather conjunctive: it expresses itself through "and," and not through "is" . . . Nature is indeed a sum, but not a whole" (267). It means that, in the context of the ocean, 'precariousness' actually provides the 'kinetic' stimulus to the deterritorial becomings of the ocean, which cannot easily be subjected to the structures and strictures of precarity for long, inasmuch as it is governed by the aleatory movements of singularities. It is blue humanities thinking that actually helps scholars explore how the oceans across the world stand at risk and urgently need human 'care' to take on the challenges of contemporary neoliberal precarity. In fact, in "Introduction: Science Studies and the Blue Humanities", Stacy Alaimo upholds the enormous role of the blue humanities in critically taking up ". . . epistemological problems of scale, onto-epistemologies of rapidly altering and utterly entangled lifeworlds, and the urgency of extinction" (431). This brilliant exposure at once points at the vulnerability of the oceans conditioned by the strands of 'precariousness' and at times underlines how the vicious exploitations of neoliberal human beings in different forms render the oceans vulnerables, a reducible totality that unfortunately cannot lead one to take the measure of the dense mediation of vulnerability embedded in its unfolding 'precariousness'. Here, one may be reminded of Alaimo's insightful reflection in Exposed: "Modes of thinking, being, and acting may arise from a political recognition of being immersed in the material world, as they contend with the conceptual challenges of shifting timescales and traversing geo-capitalist expanses where one's own small domain of activity is inextricably bound up with networks of harm, risk, survival, injustice, and exploitation" (157). This critical reflection reveals how the inability of human beings in 'shifting timescales' results in putting the oceans at risk and, more importantly, how the vulnerability of the oceans consequently gets checkered, as it stands in tune with the becomings of marine and terrestrial lives.
The vulnerability of the oceans thus stands interspersed with the heterogeneous unfoldings of various marine species, which are politically reduced to mere vulnerables to make the movements of the oceans subject to 'strata' and 'territories'. This reductive approach to the vulnerability of the oceans can well be dealt with through the actualization of the 'micropolitics of care', inasmuch as the vulnerability of the oceans stands mediated through our own. In The World Is Blue: How Our Fate and the Ocean's Are One, Sylvia Earle accounts for the need of 'care' to stop the fates of humans and the oceans from being easy prey to the exploitative designs of neoliberal precarity: "If the ocean dried up tomorrow, why should I care?" The question, posed by a cheeky Australian reporter in 1976, made me face up to that remote but painful possibility . . . Life can exist in the absence of a lot of things, but as astrophysicist Christopher McKay puts it: "The single non-negotiable thing life requires is water" . . . Earth's life-support system-the ocean-is failing. But who is paying attention? . . . The big question is, what can we do to take care of the blue world that takes care of us? (13-16) Earle rightly asked the question of what we do to 'take care' of the oceans which 'take care' of us, so as to remind the greedy practitioners of neoliberal precarity that the ocean happens to be the Earth's 'life-support system', one which is 'failing' nowadays more rapidly than ever before; this calls for the invention of "a new way of thinking" (Earle 14) by the scholars working in the domain of Blue Humanities so as to contend with the awful challenges of neoliberal precarity. In asking this question, Earle has ingeniously suggested that the actualization of the 'micropolitics of care' could be brought in to work out new modes and mechanisms of 'care' to render the challenges of neoliberal precarity null and void. In A Thousand Plateaus, Deleuze and Guattari elucidate how micropolitics offers 'minor' openings characterized by 'flows', which pull micropolitics out of the structured workings of macropolitics: "the molecular, or microeconomics, micropolitics, is defined not by the smallness of its elements but by the nature of its "mass"-the quantum flow as opposed to the molar segmented line" (217). Taking resort to this powerful contention, one may argue that an act of care tends to take up 'micropolitical' becomings characterized by 'quantum flow' to evade the hierarchized and marauding operations of neoliberal precarity, and 'unfolds' itself when it stands in differential relationships with combinatory factors including culture, ethics, politics and ecology, among others. It means that an act of care takes up 'lines of flight' to unsettle the 'arboreal' 15 workings of neoliberal precarity and negotiates political, cultural, social, ethical and ecological factors to re-lay the singularities of the oceans in the deterritorial movements of 'smooth space'. Whereas neoliberal precarity employs a 'striated' politics to regulate the economic-ecological dynamics of the oceans, the 'micropolitics of care' could well be engaged in a battle against the former as a veritable 'war machine', on the one hand to turn the crevices of neoliberal precarity wide open and on the other to align the 'micropolitical becomings' of care with the deterritorial flows of 'smooth space': "we define "war machines" as linear arrangements constructed along 'lines of flight'.
Thus understood, the aim of war machines isn't war at all but a very special kind of space, smooth space, which they establish, occupy, and extend" (Negotiations, Deleuze 33). This critical reflection of Deleuze's in Negotiations attests to the 'revolutionary' potentials of 'smooth space', which may be worked out as a fluid launchpad to let the 'micropolitics of care' take up 'lines of flight' against the regimented maneuvers of neoliberal precarity. Contextualizing Micropolitics of Care in Romesh Gunesekera's Reef The meeting between these two notions, difference and repetition, can no longer be assumed: it must come about as a result of interferences and intersections between these two lines: one concerning the essence of repetition, the other the idea of difference. (Difference and Repetition, Deleuze 27) Romesh Gunesekera's tour de force Reef happens to be an elaborate critique of how the practitioners of neoliberal precarity aim at exploiting abundant marine resources, including endangered species like dolphins and reefs, to expand the worldwide marauding operations of precarity into the inner recesses of the ocean; at the same time, it calls for differential practices of 'care' to unsettle the free rein of neoliberal precarity across coastal and oceanic spaces. Garbed in the form of a fiction, Gunesekera exposes the 'vulnerability' of the ocean in the Sri Lankan coastal regions, conditioned by the strands of 'precariousness', through the tale of Mr Ranjan Salgado, who, in spite of being a marine biologist, loses his academic integrity while coming to terms with the adverse changes happening to the ocean and 'surprisingly' engages himself in thinking of building a 'marine park', giving in to the strong pull of neoliberal precarity and thereby leaving the ocean in jeopardy. When the fiction begins, Salgado is introduced as a connoisseur of Sri Lankan cuisine: "At night, when alone, he usually liked to eat bread and western food: courses. Small discs of fried meat and creamy mashed potatoes that disappeared without a trace into his body. Corned beef was a favorite. He ate it with a seeni-sambol that burned the roof of your mouth" (8). Along with his considerable interest in Sri Lankan cuisine, the novel uncovers that Salgado prefers to study "mosquitoes, swamps, sea corals and the whole bloated universe" (24) and writes on the "transformation of water into rock-the cycle of light, plankton, coral and limestone-the yield of beach to ocean" (24). While working on the mushrooming growth of the 'coral business', Salgado exclaims in sheer wonder and awe: "Coral grows about as fast as your fingernails, but how fast it is disappearing? Nobody knows!" (47-8), thereby directly pointing at the 'vulnerability' of the ocean to the g(l)ocal operations of neoliberal precarity. In fact, he stands shocked at finding the free rein given to 'bombing', 'mining' and 'netting' in the middle of the ocean, which is replete with a number of 'delicate' and endangered species: You see, this polyp is really very delicate. It has survived aeons, but even a small change in the immediate environment-even if you pee on the reef-could kill it. Then the whole thing will go. And if the structure is destroyed, the sea will rush in. The sand will go. The beach will disappear . . . You see, it is only the skin of the reef that is alive. It is real flesh: immortal . . . Mister Salgado threw up his hands, 'But who cares?'
(48) This deep concern of Salgado's reveals how the delicate 'beings' of the ocean struggle for their existence while taking on the irresponsible and irrational activities of human beings, and how there exists a shortfall of 'care', the micropolitics of which could be employed as a 'war machine' against the workings of neoliberal precarity. One may also be reminded of another textual example which underlines the explicit manifestation of neoliberal precarity in the form of 'global tourism' and how it really puts the ocean at risk, thereby underscoring the need of a 'micropolitics of care' to take on the onslaughts of neoliberal precarity: All they see is pockets full of foreign money. Coming by the plane-load. Don't they realize what will happen? They will ruin us. They will turn us all into servants. Sell our children . . . our country really needs to be cleansed, radically. There is no alternative. We have to destroy in order to create. Understand? Like the sea . . . He let go of me and stared at the ocean turning itself inside out . . . (111) This textual excerpt exposes how neoliberal precarity rides the wings of 'global tourism' to turn coastal people and oceanic species into easy vulnerables so as to reduce the 'precariousness' embodied in 'vulnerability' to discernible vulnerables for its marauding marches. Pitted against this 'striated' politics of neoliberal precarity, the 'micropolitics of care' could be employed as a 'war machine' to radically take on these challenges. In other words, the production of new modes of care-mentality needs to be carried out to unsettle and resist the free rein of neoliberal precarity in the garb of 'global tourism' in the context of the Sri Lankan coastal regions. This is quite possible to execute, inasmuch as the deterritorial movements of the ocean can well be put in tandem with the 'establishment', 'occupation' and 'extension' of 'smooth spaces' by care in the form of a 'war machine'. In other words, the 'differential repetitions' of the ocean can tenably be aligned with the 'micropolitical' unfolding of care, which refuses to settle into a reducible totality and ceaselessly produces itself while being paired up with the combinatory interplay among cultural, political, ethical and ecological heterogeneities. These productive potentials of the 'micropolitics of care', as the text indicates, need to be engrafted onto the 'differential repetitions' of the ocean so as to empower the latter to effectively take on the reductive workings of neoliberal precarity. Gunesekera's Reef divulges how the local government is quite complicit in letting impoverished locals get engaged in the illicit exportation of endangered fishes to make a living out of it: 'Someone has caught a dolphin,' the crab-seller said. 'They got a dolphin?' 'Yes, they will kill it quickly. Very good money. Someone's lucky day' . . . 'Killing . . .' she shook her head to herself. 'Why dolphins? What next?'
Outside a man was filling an unmarked van with baskets of dead fish. Small pieces of bleached white coral marked the municipal parking lot. (118)

This poignant case at once divulges the lackadaisical attitude of the local government in taking apt punitive steps against fish-smugglers and at the same time exposes a terrible truth: the local government seems to have been overridden by the strands of neoliberal precarity, which do not allow it to adequately 'take care' of coastal as well as marine lives, so that the practitioners of neoliberal precarity keep on treating coastal and marine lives as potential vulnerables. Gunesekera seems to have subtly suggested that unless a new mode of care-mentality is worked out, the ongoing deterioration of the ocean cannot be kept in check. Interestingly, the adverse influences of neoliberal precarity have not only incapacitated the local government in taking care of coastal and marine lives alike but have also impacted the otherwise strong academic and ethical integrity of an individual like Mr Salgado, who 'procrastinates' in drawing the obvious conclusion about the palpable impact of neoliberal precarity on the ocean and, at one point in the fictional narrative, surprisingly gives vent to the following thought:

I used to think that in a month or two, the next year, I would have a chance to turn the whole bay into a sanctuary. A marine park. I used to plan it in my head: how I'd build a jetty, a safe marina for little blue glass-bottomed boats, some outriggers with red sails, and then a sort of floating restaurant at one end . . . It would have been a temple to your gastronomic god . . . I thought of it like a ring, a circular platform with the sea in the middle. (177)

This textual excerpt vouches for the insidious inroads of neoliberal precarity into the thoughts of an individual like Mr Salgado and lays the ground for practices of a 'micropolitics of care' as a smooth politics against the structured operations of neoliberal precarity. In other words, the setting up of a marine park in the middle of the ocean happens to be an extended reflection of neoliberal precarity in the sense that Mr Salgado, in spite of being a marine biologist, indulges himself in thinking of making money by putting oceanic lives in jeopardy; it is as if the ocean were understood to be a reducible totality instead of a deterritorial assemblage of heterogeneous ecologies. Practices of a 'micropolitics of care' in tandem with the 'differential repetitions' of the ocean could offer the new modes of care-mentality that are urgently required to prevail over the exploitative reign of neoliberal precarity.
Gunesekera's fictional exposure of the increasing vulnerability of the ocean calls for the actualization of a 'micropolitics of care' which is at once equipped with the subversive power of a 'war machine' to take on the 'striated' politics of neoliberal precarity and at the same time can help 'sensible' human beings adequately take care of the ocean, which, in praxis, takes care of us. Against the practices of "conspicuous consumption" (Gunesekera 135), a 'micropolitics of care' may help 'sensible' human beings strike up a departure from the strong pull of neoliberal precarity and, instead, help them inculcate the habit of an ethical engagement with oceanic lives. In this regard, one may be reminded of Eating the Ocean, where Elspeth Probyn argues: "I wonder how we can care a bit more, or a bit better, for the entire entangled marine elements that we devour when we eat the ocean . . . can we eat with the ocean?" (7). Here, Probyn argues for the 'sustainable' use of fish so as to 'care' a bit more for the oceans. This contention could be questioned by referring to Ian Buchanan's rejoinder titled "Must We Eat Fish?": ". . . it is an argument in favour of the death of the ocean" (81). At this point, one may stop and think: how should a 'micropolitics of care' figure in addressing the problematic of 'eating the ocean'? This intriguing question could be critically answered by arguing that human beings can neither pragmatically put a stop to the consumption of fish per se, nor be indifferent to the possibilities of oceanic life, nor stay aloof from 'eating (up) the ocean', but can productively negotiate 'micropolitical becomings' of care to forge an ethical engagement with the 'differential repetitions' of the ocean and its adjacent coastal ecologies. This contention can be further elucidated thus: coastal people can neither be disengaged from the necessary sustenance they draw from the oceans, nor be employed as pliable tools for the territorial expansion of neoliberal precarity. In short, ethical practices of a 'micropolitics of care' may prove useful in putting up resistance to the appalling 'faces' of neoliberal precarity in general, and particularly in reinventing 'lines of flight' to help oceanic ecologies survive the oddities of the contemporary planetary crisis.

Conclusion

It is quite undeniable that it is the unmappability of care that actually allows one to put it in combinatorial interplay with a number of factors (cultural, political, social and ecological, among others), thereby making a modest attempt to unsettle the free rein of neoliberal precarity in the context of the ocean. Being the 'life-support system' of the Earth, the ocean, it is argued, chooses to posit itself in the 'middle', thereby giving material form to the ongoing geophilosophical tensions of the Earth. Thus, an ethical engagement of the 'micropolitics of care' with the multiplicity of an ocean may help us figure out the differentiating materiality of an ocean expressed through the extensive transformation of its 'outside'. In Critical Environments: Postmodern Theory and the Pragmatics of the "Outside", Cary Wolfe holds that an "ethics of thought" ". . . [produces] new concepts by means of the continual confrontation of thought with its own outside" (xix).
Following this contention, it can well be understood that, in association with 'differential repetitions', the 'micropolitics of care' constantly confronts its own 'outside' to slip through the pitfalls of neoliberal precarity, and thus stands capable of working as a veritable 'war machine' against the practices of economic territorialization. It is through the subtle 'trans(in)fusion' of the molecules of discontinuity, or what Karen Barad in Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning calls "the intra-active ongoing articulation of the world in its differential mattering" (381), that the ocean seeks to function as a 'middle' to the 'intensive' dynamism of matter, an important dimension of planetary consciousness which is still left to be explored. In short, opposed to the arboreal structurality of neoliberal precarity, the ethical unleashing of a 'micropolitics of care' in alignment with the logic of 'continuity-contiguity' may be useful not only in mapping the polytonality of the vulnerability of the ocean but also in tracking down the predatory strides of neoliberal precarity across nonhuman spaces.

Notes

11. Intensities, in this context, could be mapped as 'free singularities' that travel in alignment with the movements of deterritorialization.

12. 'Striated' space refers to a 'metric' spatiality that embraces territorialization. It means that 'striated' space could be understood as space "[that is] counted in order to be occupied" (A Thousand Plateaus, Deleuze and Guattari 362). 'Striated' space is understood to be capable of explaining the "laminar movement of flows" (A Thousand Plateaus, Deleuze and Guattari 370) and turns out to be an embodiment of a homologous space.

13. 'Smooth' space is onto-epistemologically "vectorial, projective or topological" (361) in nature. It is found both as "the space of the smallest deviation" and as "a space of contact, of small tactile or manual actions of contact . . ." (A Thousand Plateaus, Deleuze and Guattari 371). In other words, it is ". . . a field without conduits or channels, . . . [that] is wedded to a very particular type of multiplicity: nonmetric, acentered, rhizomatic multiplicities that occupy space without 'counting' it . . ." (A Thousand Plateaus, Deleuze and Guattari 371).

14. In Remapping Energopolitics: Blue Humanities, Geophilosophy and Sri Lankan Minor Writings, Abhisek Ghosal underlines the profound importance of taking the blue humanities into account while trying to figure out the nuanced 'interconnectedness' between humans and the ocean: "Blue Humanities thus functions as a fluid epistemic portal to facilitate one to step into the worlds of intersectional ecology" (2). One may also find a Deleuzean reworking of 'blue humanities thinking' in "Blue (Infra)structuralism: Blue Postcoloniality, New Earth and the Ethics of Desiring-production", where Abhisek Ghosal and Bhaskarjyoti Ghosal contend: "Blue (infra)structuralism seeks to account for the 'smooth space' that an ocean embodies and helps one understand how an ocean works by the principles of 'trans(in)fusion' (Ghosh 2021, 2) and 'transcorporeality' (Alaimo 2010, 2). An ocean is made up of 'flows' that seek to 'fold' in the process of becoming, thereby producing 'fields' or what Deleuze and Guattari call 'a plane of consistency' (1987, 190)" (207).
Steve Mentz describes blue humanities thinking as that which ". . . explores the diverse physical shapes and phases of water on our planet" (An Introduction to Blue Humanities, Mentz xviii). Such thinking may be drawn upon so as to figure out how contemporary neoliberal structures of precarity work together to turn the oceans into easy vulnerables. In Exposed: Environmental Politics and Pleasures in Posthuman Times, Alaimo particularly points at the sheer "exposed" state of the ocean as an easy vulnerable for the practitioners of contemporary neoliberal precarity: "Atomic testing. Dead zones. Oil 'spills.' Industrial fishing, overfishing, trawling, long lines, shark finning, whaling. Bycatch, bykill, ghost nets. Deep sea mining and drilling. Cruise ship sewage. BP. Fukushima. Radioactive, plastic, and microplastic pollution. Sonic pollution. Climate change. Ocean acidification. Ecosystem collapse. Extinction. The destruction of marine environments is painful to contemplate." (111)
De Novo Transcriptome Assembly and Differential Gene Expression Profiling of Three Capra hircus Skin Types during Anagen of the Hair Growth Cycle

Although the goat is one of the best nonmodel systems for villus growth studies and hair biology, limited gene resources associated with skin or hair follicles are available. In the present study, using Illumina/Solexa sequencing technology, we de novo assembled 130 million mRNA-Seq reads into a total of 49,115 contigs. Searching public databases revealed that about 45% of the total contigs can be annotated as known proteins, indicating that some of the assembled contigs may have previously uncharacterized functions. Functional classification by KOG and GO showed that activities associated with metabolism are predominant in goat skin during the anagen phase. Many signaling pathways were also recovered based on the mapping of assembled contigs to the KEGG pathway database, some of which have been previously demonstrated to have diverse roles in hair follicle and hair shaft formation. Furthermore, gene expression profiling of the three skin types identified ~6,300 transcript-derived contigs that are differentially expressed. These genes were mainly enriched in the functional cluster associated with the cell cycle and cell division. The large contig catalogue, as well as the genes that were differentially expressed among skin types, provides valuable candidates for further characterization of gene functions.

Introduction

The Inner Mongolia Cashmere Goat (Capra hircus, IMCG) is a diploid (2n = 60) mammal that belongs to the family Bovidae. It plays an important role in the world animal fiber industry because it can produce high-quality underhair (cashmere is the commercial name) and is one of the world's largest breeding groups. Cashmere produced by the IMCG, which is of a small diameter (14-18 μm) and soft to the touch, grows from the secondary hair follicles (HFs) of the body skin [1,2]. Fiber diameter and length determine both the quality and the amount of cashmere produced by an animal: the longer the length and the smaller the diameter of the cashmere fibers, the higher the price. IMCGs exhibit a seasonal rhythm and an annual cycle of cashmere growth that are controlled by day length. During the period from the summer solstice to midwinter, when the length of day decreases, cashmere fiber grows quickly; in contrast, growth slows during the period from midwinter to the next summer solstice [3,4]. This photoperiodic characteristic of cashmere fiber growth is convenient for cashmere harvest and for formulating a management strategy for cashmere production. During the past decades, many mammalian genomic and transcript sequences have become available, including those of Homo sapiens, Mus musculus, and Bos taurus, which have played important roles in understanding HF formation and hair growth. However, only a total of 561 ESTs pertain to goat skin or hair follicles, indicating that few studies have focused on the gene expression patterns of goat skin or hair follicles. On the other hand, almost all the deposited goat skin-associated ESTs were sequenced by the traditional approach from randomly picked cDNA clones, which did not guarantee that less-abundant transcripts could be efficiently detected. In addition, we also observed that cashmere fibers of the IMCG are mainly produced by the back (BK) and side of the body (BS) skin of the trunk coat, but few are produced by the belly (BL) skin. This indicated that the gene expression patterns of BK and BS skin are different from those of
BL. Therefore, the gene expression patterns and differential gene expression profiles of the three skin sites during the active hair growth phase (anagen) are important for understanding the molecular mechanisms underlying cashmere fiber growth.

The advent of ultra-high-throughput, cost-effective next-generation sequencing (NGS) technologies has made whole-transcriptome sequencing and analysis feasible, even in the absence of genomic data [5]. In the past few years, NGS has been widely used in RNA sequencing (RNA-Seq), which has provided researchers with more information about gene expression, regulation, and networks under specific physiological conditions or developmental stages in both model and nonmodel organisms [6-8]. It offers accurate quantitative and digital gene expression profiles of sequenced transcripts [5]; moreover, it has very low background noise and a large dynamic range of gene expression levels compared with DNA microarrays [5]. In the present study, we utilized the Illumina/Solexa paired-end mRNA-Seq approach to sequence and de novo assemble the goat skin transcriptome during the anagen phase, and further investigated, based on the assembled contigs, the gene expression profiles of three different skin types (skin from the BL, BK, and BS of the trunk coat) that differ markedly in cashmere production.

Goat Skin Tissue Collection and RNA Extraction. Generally, according to the different skin types of the trunk coat, skin from the back and body side of the IMCG produces either wool or cashmere, whereas belly skin produces mainly wool fiber and little cashmere [9]. A breeding two-year-old female goat was sampled from the Yi Wei White Cashmere Goat Breeding Farm at Ulan Town of Erdos in the Inner Mongolia Autonomous Region, China. During the anagen phase (November 2010), the hair (wool and cashmere) of the belly, back, and body sides of the goat was sheared and further shaved. After sterilization with 70% alcohol, full-thickness skin sections of each body part were excised and immediately frozen in liquid nitrogen for storage and transport until RNA isolation. Goat skin tissue collections were carried out in accordance with the guidelines of the Inner Mongolian Animal Society Ethics Committee. This study was checked and approved by the Inner Mongolian Animal Society, which is responsible for Animal Care and Use in the Inner Mongolia Autonomous Region, China. Skin excision was performed after Xylazine hydrochloride anesthesia, and all efforts were made to minimize suffering. Before RNA extraction, each sample was washed with 10 mL PBS (pH 7.2) and 0.5 mM EDTA. Total RNA of each sample was isolated by using a TRIzol Plus RNA purification kit according to the manufacturer's protocol (Invitrogen). Total RNA quality and concentration were determined by a 2100 Bioanalyzer nanochip (Agilent). Enrichment of mRNA from total RNA was performed with a RiboMinus RNA-Seq kit according to the manufacturer's instructions (Invitrogen).

Paired-End Library Construction and Illumina Sequencing.
It has been reported that mRNA fragmentation results in more even coverage along the entire gene body, whereas cDNA fragmentation is more biased towards the 3′ end of the transcript [8]. Therefore, each poly(A)-enriched RNA sample was chemically fragmented into small pieces by using divalent cations at 94 °C for 5 min. The fragmented RNA was reverse-transcribed into cDNA by using random hexamer primers containing a tagging sequence at the 3′ end, with the use of a SuperScript III double-stranded cDNA synthesis kit according to the manufacturer's protocol (Invitrogen). The double-stranded cDNA was subjected to end-repair and further 3′ terminal tagging by the addition of 5′ DNA adaptors and T4 DNA ligase, with overnight incubation at 16 °C for 16 hours. The targeted di-tagged cDNA was purified by polyacrylamide gel electrophoresis (PAGE) and gel excision (200 ± 25 bp). The clean di-tagged cDNA was enriched by limited-cycle PCR amplification (18 cycles) with primer pairs that annealed to the tagging sequences of the di-tagged cDNA. Library purification by PAGE removed any residual nucleotides, PCR primers, and small amplicons. Three independent paired-end libraries were sequenced on a HiSeq2000 system. The initial Illumina short reads from this study have been submitted to the NCBI Sequence Read Archive (SRA, http://www.ncbi.nlm.nih.gov/Traces/sra/sra.cgi/) under accession number SRA055764.

De Novo Assembly of mRNA-Seq Reads, CDS Prediction, and Validation. Because the base quality requirement for de novo assembly is stricter than that of a resequencing project, customized Perl scripts were used to remove reads containing adaptor contamination, low-quality bases (>2% of bases below Q20 per read), or undetermined bases (>2% "N"s per read) from each dataset generated from the different skin types. The three datasets were concatenated in a left-to-left and right-to-right manner. Next, the clean high-quality read dataset was de novo assembled with default parameters by using the Inchworm assembler, a component of the Trinity software [10]. All of the sequence reads were initially trimmed to 50 bp (nucleotides 21 to 70 of each read) in length and then used in mapping experiments and statistical analysis. Mapping short reads uniquely back to the contigs was performed by SOAPaligner 2.20, with two mismatched bases per read permitted [11]. Coding sequence prediction was performed by GENSCAN [12]. The contigs that contain two or three predicted CDSs may be attributable to a small proportion of false positives when predicting coding sequences from mature transcript sequences. Ten randomly selected putative full-length transcripts that were not assigned known protein functions, and another ten transcripts associated with hair cycling, were subjected to reverse transcription polymerase chain reaction (RT-PCR) and Sanger sequencing. Primer sequences used for RT-PCR are available upon request. The assembled transcriptome sequences (49,115 contigs greater than 300 bp in length) from this study have been deposited in the NCBI Transcriptome Shotgun Assembly (TSA) database under the consecutive accession numbers KA304519 to KA353633. For functional annotation, Bos taurus RefSeq protein sequences were downloaded from the NCBI RefSeq database (ftp://ftp.ncbi.nih.gov/refseq/). Homology searches against the Swiss-Prot and NR databases were performed by using the BLASTX algorithm with an E-value cutoff of 10⁻¹⁰. Mouse and cow RefSeq RNA data were also downloaded as reference sequences for short-read mapping, to calculate hit numbers compared with the de novo assembled contigs.
BLASTX was used to perform KOG and KEGG annotation [13,14], followed by retrieval of the functional proteins and assignment to each of the classification entries (E-value ≤ 10⁻¹⁰). Gene Ontology (GO) annotation against the NR database was conducted by Blast2GO (E-value ≤ 10⁻¹⁰) [15]. WEGO [16] and the GO terms classifications counter (http://www.animalgenome.org/tools/catego/) were used for assignment of each GO ID to the related ontology terms.

Digital Profiling of Differentially Expressed Transcripts and qRT-PCR Validation. According to the AC statistical framework [17], the P value for the significance of the differential expression of each transcript-derived contig between two samples was calculated using the following equation:

$$P(y \mid x) = \left(\frac{N_2}{N_1}\right)^{y} \frac{(x+y)!}{x!\, y!\left(1+\frac{N_2}{N_1}\right)^{x+y+1}}$$

where $N_1$ and $N_2$ represent the total uniquely mapped reads in sample one and sample two, respectively, $x$ is the number of reads mapped to a certain gene in sample one, and $y$ represents the number of reads mapped to the same gene in sample two. After the calculation of the P value, multiple hypothesis testing was performed to correct the P values by using the phyper function of the R tool (http://www.r-project.org/). RPKM values of each contig were estimated by aligning the trimmed reads back to the contigs [18]. Both a q-value of less than 10⁻³ and an RPKM value with at least a 2-fold difference between the two samples were used as criteria to determine significant DEGs. KOG enrichment analysis was conducted by a hypergeometric distribution test using the phyper function in the R software package. Bonferroni correction was further used to adjust the P values. Significantly enriched functional clusters were selected when the corrected P value (q-value) was less than 10⁻³. Quantitative real-time PCR (qRT-PCR) was performed on skin from the corresponding body parts of the same individual. Total RNA was first treated with DNase I before reverse transcription with the SuperScript III double-stranded cDNA synthesis kit (Invitrogen). Each cDNA sample was used as a template for qRT-PCR using the SYBR Premix Ex Taq II kit (TaKaRa) on a 7300 real-time PCR system (Applied Biosystems), and at least three technical repeats were performed for all genes within each template. Acetyl-CoA carboxylase 1 (a7431; 102) was used to normalize gene expression quantities between samples. Gene expression fold differences between two samples were calculated by the 2^−ΔΔCt method [19]. PCR primer sequences are available upon request.
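For concreteness, the per-contig statistics described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's original R/Perl code: the function names are ours, scipy is assumed to be available, and the Bonferroni/FDR corrections applied in the paper are omitted for brevity.

```python
import math

from scipy.stats import hypergeom


def rpkm(c, n, length):
    """Reads per kilobase of contig per million mapped reads [18]:
    c = reads mapped to the contig, n = total mapped reads in the
    library, length = contig length in bp."""
    return c * 1e9 / (n * length)


def audic_claverie_p(x, y, n1, n2):
    """Audic-Claverie probability of seeing y reads for a contig in
    sample two given x reads in sample one, with n1 and n2 total
    uniquely mapped reads [17]. Evaluated in log space so that large
    read counts do not overflow the factorials."""
    r = n2 / n1
    log_p = (y * math.log(r)
             + math.lgamma(x + y + 1)
             - math.lgamma(x + 1)
             - math.lgamma(y + 1)
             - (x + y + 1) * math.log(1 + r))
    return math.exp(log_p)


def kog_enrichment_p(k, n_deg, k_bg, n_bg):
    """Hypergeometric enrichment P value, mirroring R's
    phyper(k - 1, k_bg, n_bg - k_bg, n_deg, lower.tail = FALSE):
    k cluster members among n_deg annotated DEGs, versus k_bg
    cluster members among n_bg annotated background sequences."""
    return hypergeom.sf(k - 1, n_bg, k_bg, n_deg)


def ddct_fold(ct_tgt_a, ct_ref_a, ct_tgt_b, ct_ref_b):
    """Fold change of a target gene in sample B relative to sample A
    by the 2^-ddCt method [19], normalized to a reference gene."""
    return 2.0 ** (-((ct_tgt_b - ct_ref_b) - (ct_tgt_a - ct_ref_a)))
```

As a sanity check, kog_enrichment_p(71, 1442, 538, 17594), using the counts reported in the Results below, gives a P value on the order of 10⁻⁵, of the same order as the reported enrichment (P < 8.47 × 10⁻⁶).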
Illumina Paired-End Sequencing, De Novo Transcriptome Assembly, and Ab Initio CDS Prediction. To obtain comprehensive transcripts of skin tissue providing an overview of the gene expression profile during anagen in the cashmere goat, skin tissues from the belly (BL), back (BK), and side of the body (BS) were sampled during anagen. Skin sections were prepared to show the primary and secondary hair follicles (Figure 1). Total RNA from each sample was then isolated, and three RNA-Seq libraries were constructed and sequenced using Illumina/Solexa technology. As a result, a total of approximately 130 million raw reads (65 million paired-end reads, 2 × 100 bp), representing roughly 13 GB of sequence data, were generated from three independent 200 bp insert libraries. The initial read quantities of the three libraries are listed in Table 1. The Inchworm assembler was then used to de novo assemble the high-quality reads generated from the three different skin types. De novo assembly of the mRNA-Seq reads yielded 49,115 contigs over 300 bp, comprising 45.4 MB of total sequence length, with an average length of 924 bp and an N50 length of 1,380 bp. Of the 49,115 contigs, 12,892 (26.2%) were greater than 1 kb, 13,768 (28.1%) varied from 501 bp to 1 kb, and the remaining 22,455 (45.7%) ranged from 301 bp to 500 bp in length (Table 2). To identify protein-encoding regions, we used GENSCAN to perform ab initio prediction of the coding sequences (CDSs) of the 49,115 contigs. We found that 23,039 putative CDSs were identified from 22,734 (46.3%) assembled contigs. Of the 22,734 CDS-containing contigs, 22,440 have one putative CDS, 283 have two, and 11 contain three CDSs. Further analysis indicated that 8,184 out of 23,039 contained a putative full-length CDS (i.e., containing start and stop codons). Furthermore, 6,889 CDSs contained a start but no stop codon, 3,171 were predicted to have a stop but no start codon, and 4,795 have neither. The average length of the 8,184 putative full-length CDSs reached 1,326 bp, while the partial CDSs have an average length of 605 bp. Among the 8,184 predicted full-length CDSs, 127 cannot be annotated by known proteins.

To validate the sequence assemblies, we randomly selected ten predicted full-length CDSs that did not possess BLASTX hits in the NR database, and ten genes that have been demonstrated to be specific to hair cycling and hair growth, to perform RT-PCR and Sanger sequencing. These genes include signaling molecules such as Wnt, insulin-like growth factor 1 (IGF-1), and members of the fibroblast growth factor (FGF) family, as well as genes encoding their receptors, such as Frizzled and IGF-1R. The results showed that 19 PCRs were positive, and the Sanger sequencing results all showed greater than 97% identity with the de novo assembled transcripts, indicating the relatively high credibility of the sequence assemblies (Table S1 and Table S2 in Supplementary Material available online at http://dx.doi.org/10.1155/2013/269191). To further validate the quality and sequencing depth of the assembled contigs, we mapped the total short reads back to the assembled contigs. The sequencing depth ranged from 2- to 164,789-fold, with an average sequencing depth of 249-fold. Specifically, 87.4%, 87.5%, and 85.9% of the high-quality reads corresponding to the BL, BK, and BS skin, respectively, could be uniquely realigned to the 49,115 contigs (Table 1). The relatively small proportion of unmapped reads may correspond to contigs shorter than 300 bp. This suggests that the majority of short reads in our RNA-Seq data were efficiently assembled into relatively large contigs.

Codon Usage and SSR Marker Identification.
Examining the codon usage of the 23,039 predicted CDSs showed that the most abundant amino acids encoded by triplet codons are nonpolar (hydrophobic) amino acids (44.1%), followed by the uncharged polar amino acids (29.5%), while the acidic and basic amino acids accounted for 12.3% and 14.1%, respectively (Table S3). Among the 23,039 predicted CDSs, the average GC content reaches 54.9%, with a maximum of 86.7% and a minimum of 31.3%. Scanning the stop codons of 11,355 CDSs (the 8,184 predicted full-length CDSs plus the 3,171 stop-codon-containing CDSs) indicated that the stop codon most frequently used in goat is TGA, accounting for 53.8%, whereas TAG (23.2%) and TAA (23.0%) are used at approximately equal frequencies.

We used MISA (http://pgrc.ipk-gatersleben.de/misa/misa.html) to search for potential simple sequence repeats (SSRs) in our assembled transcripts. In this study, a repeat sequence consisting of dinucleotide, trinucleotide, tetranucleotide, pentanucleotide, or hexanucleotide tandem repeats of at least 18 bp in total length was considered an SSR (a minimal sketch of such a repeat search is given below). We found a total of 2,011 transcript-derived SSRs representing 158 unique repeat motifs, scattered in 1,850 contigs, of which 141 contigs contain at least 2 SSRs. The frequency of SSR occurrence is 4.09%, and the average distance between SSRs is 22.6 kb in our assembled 49,115 large contigs. Trinucleotide repeats account for 43.6% of the total SSRs, followed by dinucleotide (26.3%) and hexanucleotide repeats (20.4%); only a small proportion are pentanucleotide and tetranucleotide repeats (5.5% and 4.2%, respectively). The AC/GT motif (24.1%) has the highest frequency among all the identified SSRs, followed by CCG/CGG (18.4%), AGC/CTG (11.5%), and AGG/CCT (7.9%). The types of SSRs and their occurrence frequencies that we found in the goat are similar to the findings in other mammals but different from the results in plants. For example, the AC/GT repeat type, most abundant in the goat transcriptome, is also very abundant in the human genome and in other vertebrates, whereas this repeat type is rarely observed in rice and sweet potato [20,21]. The microsatellites identified in this work will be a valuable resource for goat genetic mapping.

Functional Annotation of Transcript-Derived Contigs. To functionally annotate the assembled contigs, a sequence similarity search was performed against Bos taurus RefSeq protein sequences (32,242 sequences), the Swiss-Prot protein database, and the nonredundant (NR) protein database by BLASTX (Figure 2). Among the 22,146 contigs possessing BLASTX hits, it is worth noting that only 103 (0.47%) contigs had NR database top hits matching the goat itself, which could be explained by the limited number of goat gene and protein sequences currently available in public databases. Examining the 22,146 (45.1%) contigs with high similarity to NR entries, we found that 19,040 contigs also harbored a predicted CDS, demonstrating that many putative CDSs cannot be annotated with known functions and thus may represent new genes among our assembled contigs. The remaining 26,969 contigs that had no significant hits in the NR database are more likely 5′- and 3′-untranslated regions (UTRs) or previously uncharacterized ESTs (or genes specifically expressed in Capra hircus). These transcripts are shorter in average length and relatively less abundant in sequencing depth compared with the contigs that significantly hit the NR database (data not shown). However, we also noted that many contigs with high sequencing depth showed no hits in the NR database. Among the top 1,000 most read-abundant contigs, 69 contigs with an average length of 893 bp showed no hits with known proteins.
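As flagged above, the MISA-style repeat screen can be illustrated with a short, self-contained Python sketch that uses only the standard library. This is not MISA itself: motif canonicalization, compound SSRs, and the tool's per-motif minimum repeat counts are simplified, so its output would differ from the numbers reported above.

```python
import re

MIN_TRACT_LEN = 18  # a repeat tract must span at least 18 bp


def _primitive(motif):
    """True if the motif is not itself a repeat of a shorter unit
    (rejects 'ACAC', which reduces to 'AC', and homopolymer
    motifs such as 'AA')."""
    n = len(motif)
    return not any(n % k == 0 and motif == motif[:k] * (n // k)
                   for k in range(1, n))


def find_ssrs(seq):
    """Return (start, motif, copies) for di- to hexanucleotide tandem
    repeats whose total length is at least MIN_TRACT_LEN, roughly
    mimicking the MISA criteria described above."""
    hits = []
    seq = seq.upper()
    for unit in range(2, 7):
        pattern = re.compile(r"([ACGT]{%d})\1+" % unit)
        for m in pattern.finditer(seq):
            tract = m.group(0)
            if len(tract) >= MIN_TRACT_LEN and _primitive(m.group(1)):
                hits.append((m.start(), m.group(1), len(tract) // unit))
    return hits


# Toy usage: a 20 bp (AC)10 repeat embedded in flanking sequence.
print(find_ssrs("TTT" + "AC" * 10 + "GGG"))  # -> [(3, 'AC', 10)]
```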
In fact, when we aligned the sequenced reads to the 36,442 Mus musculus and 34,573 Bos taurus RefSeq mRNAs, only 8,717 (25.2%) and 14,836 (43.0%) sequences, respectively, were mapped. This suggests that Capra hircus is not phylogenetically close to the other two mammals at the sequence level, even though Capra hircus belongs to the same family as Bos taurus.

Since hair (wool and cashmere) mainly consists of highly compressed dead keratinocytes, we elucidated relative transcript abundance by calculating reads per kilobase per million reads (RPKM) values for the 49,115 contigs, which enabled us to examine the expression levels of the keratin-encoding genes relative to other genes. Through BLAST searching, we found a total of 126 keratin or keratin-associated protein (KAP) encoding sequences in our contig database. As expected, of the total 49,115 contigs, the top ten most abundant exclusively encode keratins or KAPs, including K5, K14, KAP3.1, K33B, and KAP1.1. The remaining 116 keratin- or KAP-associated contigs also showed a greater average abundance than other genes. We also noted that some of the keratin-related contigs exhibited relatively high amino acid diversity (6 out of 126 with <80% identity and 11 with <90% identity over the BLAST-matched region) when compared with their NR database top hits, suggesting that a series of novel keratin variants may be synthesized in skin tissue undergoing rapid hair growth (anagen phase). The expression levels of keratins and KAPs may also give insight into promoter efficiency and selection when performing exogenous gene expression in skin tissue.

Functional Classification of Assembled Contigs by KOG, KEGG, and GO. We used BLASTX to search against functional proteins from the KOG (euKaryotic Orthologous Groups) database, a component of the Clusters of Orthologous Groups (COG) database [13]. In total, 16,036 contigs had significant hits (E-value ≤ 10⁻¹⁰), and these were classified into 4 groups and 25 functional clusters. Apart from 4,750 poorly characterized genes, cellular processes and signaling appeared as the largest group, which consisted of three highly abundant clusters: signal transduction mechanisms (3,106 genes); intracellular trafficking, secretion, and vesicular transport (906 genes); and cytoskeleton (864 genes). Furthermore, to analyze the pathway-based biological activities of genes expressed in goat skin, we annotated the 49,115 contigs against the KEGG (http://www.genome.jp/kegg/) protein database. From our contig database, 15,020 (30.6%) contigs were assigned to 291 KEGG pathways. Among them, 3,948 genes could be further assigned to 23 signaling pathways, including the MAPK, Wnt, insulin, Hedgehog, TGF-beta, VEGF, and Notch pathways, which have been previously demonstrated to play various important roles in HF development and hair shaft differentiation.
In addition, Gene Ontology (GO) (http://www.geneontology.org/) analysis was performed using the Blast2GO program to further classify the transcript-derived contigs [15]. A total of 18,069 contigs were cataloged into the three main GO domains with a total of 129,669 GO IDs, and further subdivided into 47 subcategories (Figure 3). Of these assigned GO terms, biological process was the predominant domain, followed by molecular function and cellular component. Under the biological process category, we found that cellular process and metabolic process were prominently represented, as they were in the KOG and KEGG classifications, suggesting that complicated metabolic activities occur in anagen-phase goat skin. The high correlation among the KOG, KEGG, and GO classifications may reflect that goat hair growth is mediated by complicated metabolic processes.

Differential Gene Expression Profiling, Functional Enrichment Analysis, and qRT-PCR Validation. A popular method of global measurement of differentially expressed genes is to take the quantity of NGS reads as an indicator for calculating transcript abundance [5-7]. Quantitative measurement of gene expression by NGS technologies has been suggested to be accurate and highly correlated with other methods of detecting gene expression levels, such as qRT-PCR and DNA microarrays [5]. To identify differentially expressed genes (DEGs) among the three different skin types reflected in our short-read dataset, we mapped the short-read datasets from the three libraries to the 49,115 contigs by SOAPaligner with a seed length of 50 bp. After mapping the skin type-specific short reads to the reference, we calculated RPKM values for all contigs, which can be used to quantify the expression of contigs both within and between samples [18]. The AC statistical framework was applied to calculate the P value of each transcript-derived contig's expression difference in each two-sample comparison [17]. Then, we performed multiple hypothesis testing by controlling the false discovery rate (FDR, q-value) to correct the P values. In this study, contigs with an FDR of less than 10⁻³ and RPKM values differing by more than 2-fold (or less than 1/2-fold) between two samples were considered statistically significant DEGs (Table S4). In the BL-BK comparison, we observed that 3,532 transcript-derived contigs were upregulated in BK compared with BL skin, and 9,927 were downregulated (Figure S1). Similarly, 3,128 upregulated and 6,811 downregulated contigs were detected in the comparison of BS skin with BL skin. Further analysis revealed that 1,360 transcript-derived contigs were consistently upregulated and 4,973 were consistently downregulated in both BK and BS compared with BL skin (Figure 4). However, the number of DEGs was sharply reduced to 5,367, of which 3,338 were upregulated and 2,029 were downregulated, in the BS-BK comparison, indicating that the gene expression patterns of BK and BS are more similar to each other than in the other two comparisons.
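The consistent DEG sets summarized in Figure 4 amount to simple intersections of the per-comparison DEG lists; a minimal sketch, with made-up contig IDs, is:

```python
# Hypothetical per-comparison DEG lists (contig IDs are illustrative).
up_bk_vs_bl = {"c0001", "c0002", "c0003"}
up_bs_vs_bl = {"c0002", "c0003", "c0004"}

# Contigs consistently upregulated in both cashmere-producing skin
# types relative to belly skin (cf. the 1,360 contigs reported above).
consistent_up = up_bk_vs_bl & up_bs_vs_bl
print(sorted(consistent_up))  # ['c0002', 'c0003']
```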
The BK and BS skin of the cashmere goat mainly produce cashmere fiber, whereas BL skin mostly grows wool fiber and few cashmere fibers. Therefore, to investigate the differences between these two kinds of skin types, we annotated the 6,333 consistent DEGs and further performed KOG enrichment analysis against the transcriptome background. We found that the cluster involved in cell cycle control, cell division, and chromosome partitioning in the KOG classification was overrepresented among these DEGs (Table S5): 71 out of 1,442 (4.92%) annotated DEGs (KOG database annotation) fell into this cluster, compared with 538 counterparts out of the total 17,594 (2.97%) annotated transcriptome sequences (P < 8.47 × 10⁻⁶, q-value < 2 × 10⁻⁴). Furthermore, ten significant DEGs enriched in this cluster were selected for qRT-PCR to investigate gene expression differences among the three skin types. Although the exact fold differences for each DEG measured by qRT-PCR differed from those obtained by the RNA-Seq method, all comparison pairs showed trends similar to the RNA-Seq approach, suggesting relatively high consistency between RNA-Seq and qRT-PCR (Table S6).

Discussion

To obtain comprehensive transcripts of the goat and the gene expression profiles reflected in different skin types during the anagen phase, we sequenced and assembled mRNA from the BL, BK, and BS skin of the body coat. The assembler used in this work is Inchworm, a major component of the Trinity software [10]. Initially, the assembler generated 265,169 contigs over 100 bp (average contig length 299 bp and N50 length 417 bp), corresponding to ~79.3 MB of sequence; 87,962 of these contigs are longer than 200 bp (average contig length 622 bp and N50 length 1,001 bp), representing ~54.8 MB of nonredundant sequence. When we used these smaller contigs for functional annotation, more redundant hit accessions were obtained, and the functional classification of transcript-derived contigs was more redundantly represented in each functional cluster, which gave a biased interpretation and overview of transcript functions. In this study, we therefore used the contigs over 300 bp for annotation, which yielded 22,146 hits representing 17,472 nonredundant accessions, indicating that a large proportion of the contigs belong to UniGene clusters. On the other hand, approximately 87% of the total short reads could be uniquely mapped to the 49,115 contigs, also suggesting that the majority of reads contributed to these larger contigs.
Among the different skin regions of the cashmere goat body coat, cashmere fibers are mainly produced by the skin of the BK and BS, with few growing from the BL part. The molecules that are differentially expressed among these skin types may therefore have underlying or potential roles in cashmere growth. By counting the short reads mapped onto each contig from the different libraries, we identified 6,333 consistently differentially expressed transcripts in BK and BS relative to BL skin (Figure 4). These DEGs were mainly enriched in the cell cycle control, cell division, and chromosome partitioning cluster of the KOG functional classification, indicating that the gene expression patterns associated with the cell cycle and cell division differ significantly between the two types of skin. For instance, a kinetochore-bound protein kinase named budding uninhibited by benzimidazoles 1 (Bub1) showed 4- and 3-fold downregulation in BK and BS, respectively, compared with BL skin (contig ID a91887; 15). This kinase functions in part by phosphorylating a member of the mitotic checkpoint complex and activating the spindle checkpoint [22,23]. Similarly, another important kinetochore protein, NDC80 (a132791; 11), which is responsible for chromosome segregation during the M phase of cell division, was also identified as significantly downregulated in BK and BS compared with BL skin [24]. In addition to the genes involved in the spindle checkpoint during cell division, we also noted that some important regulators that directly participate in cell cycle progression are differentially expressed. For example, cyclin-dependent kinase 7 (CDK7, a53471; 29) as well as its partner, cyclin H (a65131; 18), which form a complex that directly regulates cell cycle progression, were both identified as downregulated DEGs in the BK and BS libraries compared with the BL part [25]. Furthermore, profiling of cell cycle-associated DEGs using the qRT-PCR method also showed results highly consistent with RNA-Seq. This may suggest that the hair synthesis rates of cashmere- and wool-producing skins are significantly different.
A large body of literature has focused on revealing the molecular mechanisms of HF initiation, patterning, and hair cycling in mammalian model organisms such as Mus musculus. Many significant studies have focused mainly on the function of upstream molecules of signal transduction pathways such as Wnt/β-catenin, TGF-β, Eda, Hedgehog, and IGF, and their receptors [26-42]. Generally, conditional knockout or skin tissue-specific overexpression of ligand, receptor, or adaptor molecules from these pathways during embryogenesis or postnatal stages has diverse effects on HF and hair shaft formation. One example is β-catenin, which serves as an important adaptor molecule in embryogenesis: sustained epithelial β-catenin activation by a transgenic approach caused excessive induction and fusion of HFs with severely impaired fiber shaft formation [43]. However, the factors that directly promote hair growth are rarely characterized. The most compelling molecule discovered to promote hair growth is FGF-5, a secreted protein whose ablation in mice leads to abnormally long hair (~1.5-fold longer than wild type) through either elongation of the anagen phase or retardation of catagen initiation [44]. IGF-1, another important mitogen associated with HF development, has been reported as an elongation factor for mouse whiskers [31] and was recently demonstrated to promote body hair growth upon overexpression of IGF-1 in mouse skin through a transgenic approach [30]; however, this growth was accompanied by the absence of two types of body coat hair and the disorientation of a small proportion of HFs [30]. Nevertheless, we did not find that the genes encoding these molecules were significantly differentially expressed. This may be because the three samples were all derived from the anagen phase of the hair cycle: many previous studies have demonstrated that these signaling molecules and their receptors, such as Wnt, Sonic hedgehog, and TGF-β family members, function as important regulators during the transition from telogen (resting phase) to anagen [45-47], suggesting that the expression levels of these signals differ between the two periods. In our DEG catalogue, only the genes associated with cell division and the cell cycle are significantly enriched, which might indicate that the efficiency of hair synthesis differs between the two skin types. The functions of these enriched DEGs in hair cycling should be further characterized.
Conclusions

Taking into consideration the read abundance, average contig length, N50 length, and total contig size, our assembled contig catalogue provides a relatively complete and comprehensive dataset reflecting the goat skin transcriptome during the anagen phase. The identification of numerous genes, including those showing differential expression among the three skin types, and especially the DEGs enriched in the cell cycle-related functional cluster, will provide good launching points and resources for further characterizing gene functions associated with hair growth. Our dataset was generated solely with an Illumina HiSeq2000 platform, demonstrating that this ultra-high-throughput sequencing technology is a suitable tool for investigating the transcriptome of a large eukaryotic organism and for global measurement of gene expression profiles. Finally, the extremely abundant paired-end reads generated from the anagen phase will be very useful for subsequent studies, such as comparisons with gene expression patterns from the catagen or telogen phases, which will be helpful for further identifying genes associated with hair follicle development and fiber growth.

Figure 1: Frozen sections of cashmere goat skin stained with hematoxylin. White arrows indicate the primary hair follicles (PHFs) and secondary hair follicles (SHFs) in the sample.

Figure 3: Gene Ontology classification of the transcriptome sequences. 18,069 transcriptome sequences could be annotated by the GO database. The classification results are displayed in the three main ontologies: cellular component, molecular function, and biological process.

Figure 4: Venn diagram of shared DEGs between the BL versus BK and BL versus BS comparisons.

Table 1: Read number and mapping results from the three independent libraries.
The intestinal microbiota influences the microenvironment of metastatic colon cancer by targeting miRNAs

Abstract

This study aimed to investigate the molecular mechanisms through which the intestinal microbiota and microRNAs (miRNAs) participate in colon cancer metastasis. Intestinal flora data, and the GSE29621 (messenger RNA/long non-coding RNA [mRNA/lncRNA]) and GSE29622 (miRNA) datasets, were downloaded from The Cancer Genome Atlas and Gene Expression Omnibus databases, respectively. Immune-related cells in M1 vs. M0 samples were analyzed using the Wilcoxon test. Furthermore, an lncRNA-miRNA-mRNA (competing endogenous RNA [ceRNA]) network was constructed, and survival analysis of the RNAs in the network was performed. A total of 16 miRNA-genus co-expression pairs containing eight microbial genera and 15 miRNAs were screened; notably, Porphyromonas and Bifidobacterium spp. were found to be associated with most miRNAs, and hsa-miR-3943 was targeted by most microbial genera. Furthermore, five immune cell types, including activated natural killer cells, M1 macrophages, resting mast cells, activated mast cells and neutrophils, were differentially accumulated between the M1 and M0 groups. Enrichment analysis suggested that mRNAs related to colon cancer metastasis were mainly involved in pathways related to bacterial and immune responses. Survival analysis revealed that TMEM176A and PALM3 in the ceRNA network were significantly associated with the prognosis of patients with colon cancer. In conclusion, this study revealed a potential mechanism by which the intestinal microbiota influences the colon cancer microenvironment by targeting miRNAs.

Introduction

Colon cancer, a common malignancy of the digestive tract, is currently the fourth most commonly diagnosed cancer type and the second leading cause of cancer-related death among 36 cancer types worldwide (Bray et al. 2018). Colon cancer is a highly heterogeneous disease and a relevant public health issue in a growing number of countries (Siegel et al. 2021). Currently, the standards for clinical treatment and prognostic prediction of survival and recurrence of colon cancer rely mainly on the tumor-node-metastasis system and on the histopathological criteria established by the American Joint Committee on Cancer (Brierley). In general, early colonoscopy screening and treatment can improve patient outcomes; however, there are no obvious symptoms in the early stages of colorectal cancer, and approximately 15% to 25% of patients present with synchronous metastasis at diagnosis (Bonnot and Passot 2019). Moreover, the prognosis of patients with metastatic colon cancer appears to be much worse than that of patients in the early and intermediate stages (Tjandra and Chan 2007). Therefore, it is important to explore the potential molecular mechanisms underlying colon cancer development.

MicroRNAs (miRNAs), small single-stranded non-coding RNA molecules of 18-24 nucleotides in length, play key roles in the modulation of gene expression at the post-transcriptional level; moreover, miRNAs exert important biological effects in animals, participating in processes such as immune system development, the immune response (Xiao and Rajewsky 2009) and metabolism (Vienberg et al. 2017). Accumulating evidence has shown that intestinal miRNAs are key factors for the maintenance of a healthy gastrointestinal environment. For example, it has been suggested that intestinal microorganisms and microRNAs may interact to regulate the expression of host genes (Dalmasso et al. 2011, Moein et al. 2019).
Moreover, miRNAs exert a wide range of effects on the intestinal immune system and play an important role in the pathogenesis of intestinal diseases (Kalla et al. 2015). Additionally, an increasing number of studies have confirmed the functional role of miRNAs in mediating the communication between intestinal microorganisms and host intestinal epithelial cells (Takeda et al. 2011, Aguilar et al. 2019). However, although these reports have suggested a role of the intestinal microbiota and miRNAs in the development of colon cancer, it is not yet known whether the intestinal microbiota alters the tumor microenvironment by influencing miRNA expression or stability. Therefore, in order to explore the molecular mechanisms through which the intestinal microbiota and miRNAs influence colon cancer progression, this study examined intestinal microflora data from a large cohort derived from The Cancer Genome Atlas, as well as miRNA-seq and long non-coding RNA/messenger RNA (lncRNA/mRNA)-seq datasets. Subsequent correlation analysis, as well as the construction and examination of an lncRNA-miRNA-mRNA regulatory axis (Supplementary Fig. 1), revealed alterations in the microflora and miRNA expression during the formation of colon cancer metastasis.

Data retrieval and processing

Intestinal flora data derived from colon cancer tissue samples at different M stages, including 89 M0-stage tissue samples and 13 M1-stage tissue samples, were downloaded from The Cancer Microbiome Atlas (Dohlman et al. 2021) (https://tcma.pratt.duke.edu/) database in The Cancer Genome Atlas. In addition, RNA-seq data (log2(FPKM + 1)), miRNA-seq data (log2(RPM + 1)), clinical information (including age, sex, tumor-node-metastasis stage, tumor stage and site of tumor occurrence) and survival information (overall survival [OS] and OS time) of the corresponding colon cancer samples were obtained from the UCSC-Xena platform (Goldman et al. 2018) (https://toil.xenahubs.net). After being intersected with the intestinal flora samples, RNA-seq data from a total of 100 samples, including 87 M0-stage tissue samples and 13 M1-stage tissue samples, were eventually included in subsequent analyses; among these, survival information (OS and OS time) was available for 94 tissue samples. Furthermore, 94 miRNA-seq samples were included in the follow-up study, consisting of 81 M0-stage tissues and 13 M1-stage tissues. Additionally, the GSE29621 (mRNA/lncRNA) and GSE29622 (miRNA) datasets were obtained from the NCBI Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo/) (Barrett et al. 2005) for external validation. The two datasets were obtained from the same samples, namely, 65 colon cancer samples with survival information (OS and OS time), of which 46 samples were in the M0 stage and 18 samples were in the M1 stage.

For The Cancer Genome Atlas intestinal flora data, preprocessed data with deviation removal were directly downloaded from the database, and colon cancer tissue samples with M stage information were extracted for subsequent analysis. For The Cancer Genome Atlas RNA-seq data, log2(FPKM + 1) values were downloaded directly, and Ensembl gene IDs were converted to gene symbols according to the annotation information (hg38, gencode.v22.annotation.gene.probemap of the GENCODE database) (Harrow et al. 2012) (https://www.gencodegenes.org/).
For the GSE29621 (mRNA/lncRNA) and GSE29622 (miRNA) data, the processed and standardized probe expression matrices were downloaded directly, and the corresponding platform annotation file was used to perform gene symbol transformation of the probes. For different probes corresponding to the same gene symbol, the average value was taken as the gene expression value for subsequent analysis.

Analysis of differentially accumulated microflora and miRNAs associated with colon cancer metastasis

First, an inter-group t-test in R 3.6.1 was used to compare the relative abundance of each microbial taxon between the M1 and M0 groups. Next, the ggplot package (version 3.2.1) in R 3.6.1 was used to draw a bar chart showing the relative abundance of each microbial taxon in the M1 and M0 groups. Moreover, differential miRNA expression analysis was performed for M1 vs. M0 samples using the empirical Bayes method provided in the limma package (version 3.34.7) in R 3.6.1 (Smyth 2013) (https://bioconductor.org/packages/release/bioc/html/limma.html). As reported in a previous study (Bi et al. 2020), intestinal miRNAs, originating from intestinal epithelial cells or externally derived through the diet, interact with intestinal microorganisms and regulate the composition and distribution of intestinal microbial communities. Hence, to screen for closely related miRNAs and intestinal microorganisms present during the formation of colon cancer metastasis, the relative abundance and expression values of metastasis-related microbial taxa and miRNAs, respectively, were extracted from each sample. The Pearson correlation coefficient R was then calculated to obtain genus-miRNA co-expression pairs, with thresholds of |R| > 0.2 and P < 0.05.

Identification of metastasis-related immune cells and mRNAs/lncRNAs in the cancer microenvironment

The CIBERSORT deconvolution algorithm (Chen et al. 2018) was used to estimate the abundance of infiltrating immune cells in all samples, applying the LM22 dataset provided on the CIBERSORT website as the characteristic gene expression template. After calculating the abundance of 22 infiltrating cell types in each sample, differences in the abundance of immune-related cells in M1 vs. M0 samples were analyzed using the Wilcoxon test (P < 0.05). Violin plots were drawn using the R package vioplot (version 0.3.2). To further identify the mRNAs and lncRNAs closely related to colon cancer metastasis, weighted gene co-expression network analysis (WGCNA) was conducted. Briefly, based on the top 3,000 genes showing the greatest variation among samples, the R package WGCNA (Langfelder and Horvath 2008) (version 1.61, https://cran.r-project.org/web/packages/WGCNA/) was used to identify gene set modules with highly synergistic variation. Subsequently, an inter-group t-test was performed on the genes in the modules, and lncRNAs and mRNAs whose expression differed significantly between M1 and M0 samples were further selected as metastasis-related RNAs for subsequent studies (P < 0.05). Furthermore, Gene Ontology analysis of biological processes, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis and enrichment analysis of Reactome gene sets were conducted on the metastasis-related mRNAs using Metascape (Zhou et al. 2019) (http://metascape.org) (parameters: minimum overlap = 3; P value cutoff = 0.01; minimum enrichment = 1.5).
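The genus-miRNA screening step above reduces to computing pairwise Pearson correlations across matched samples and keeping pairs that pass both thresholds. A minimal Python sketch follows; the analysis in the paper was performed in R, and the dictionary-based inputs here are a hypothetical data layout for illustration.

```python
import numpy as np
from scipy.stats import pearsonr


def coexpression_pairs(genus_abund, mirna_expr, r_cut=0.2, p_cut=0.05):
    """Screen genus-miRNA co-expression pairs across matched samples.

    genus_abund: dict mapping genus name -> per-sample abundances
    mirna_expr:  dict mapping miRNA name -> per-sample expression
    Both value arrays must be ordered by the same samples. Pairs are
    kept when |R| > 0.2 and P < 0.05, as in the text."""
    pairs = []
    for genus, g_vals in genus_abund.items():
        for mirna, m_vals in mirna_expr.items():
            r, p = pearsonr(np.asarray(g_vals, dtype=float),
                            np.asarray(m_vals, dtype=float))
            if abs(r) > r_cut and p < p_cut:
                pairs.append((genus, mirna, r, p))
    return pairs
```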
Validation and Kaplan-Meier survival analysis of the selected RNAs

The expression of the molecules in the ceRNA network was validated in the Gene Expression Omnibus database and visualized using ggplot (version 3.2.1). To further assess whether the molecules in the ceRNA network were significantly associated with colon cancer prognosis, the optimal cutoff point was determined using the R package survminer (version 0.4.3) according to the expression value, survival time and survival state of each RNA molecule. Finally, the R package survival (version 2.42-6) was used for survival analysis and to carry out a log-rank survival test, with a significance threshold of P < 0.05.

Differentially accumulated intestinal microorganisms and miRNAs associated with colon cancer metastasis

Differences in the relative abundance of each microbial taxon between M1 and M0 tissue samples were analyzed using the inter-group t-test in R 3.6.1. Collinsella, Dialister and Peptostreptococcus at the genus level differed significantly between the groups (P < 0.05). Additionally, 58 miRNAs differentially expressed between M1 and M0 samples were identified. A volcano plot confirmed that the expression levels of these miRNAs were significantly different between the two groups (Fig. 1A). Subsequently, correlation analysis of the 10 distinct genera and 58 unique miRNAs was conducted. As illustrated in Fig. 1B, 16 miRNA-genus co-expression pairs containing eight microbial genera and 15 miRNAs were obtained. Within these co-expression pairs, Porphyromonas and Bifidobacterium spp. were associated with most miRNAs. Moreover, hsa-miR-3943 was targeted by both Porphyromonas and Dialister spp.

Figure 4: (B) The larger the nodes, the greater the ratio of the number of enriched genes to the total number of miRNA target genes; colors ranging from blue to red indicate increasing P values. (C) lncRNA-mRNA co-expression network: red circles represent upregulated mRNAs, green circles represent downregulated mRNAs, light purple diamonds represent upregulated lncRNAs, dark purple diamonds represent downregulated lncRNAs and gray dotted lines represent co-expression relationships between mRNAs and lncRNAs. (D) ceRNA network: red circles represent upregulated mRNAs, light purple diamonds represent upregulated lncRNAs, blue hexagons represent downregulated miRNAs, green dotted lines represent co-expression relationships between lncRNAs and mRNAs, gray arrows represent miRNA regulation, and blue T-shaped lines represent lncRNAs competing with miRNAs. (E) Sankey diagram.

Immunoinfiltrating cells associated with colon cancer metastasis

Differences in the relative abundance of the 22 infiltrating immune cell types between the M1 and M0 groups were analyzed using the Wilcoxon test. The findings revealed that five immune cell types, namely, activated natural killer cells, M1 macrophages, resting mast cells, activated mast cells and neutrophils, showed differential accumulation between the two groups (Fig. 2). Among these cell types, the relative abundance of infiltrating activated natural killer cells, M1 macrophages and resting mast cells in the M0 group was significantly higher than that in the M1 group, while the relative abundance of infiltrating activated mast cells and neutrophils was lower.

Identification of mRNAs and lncRNAs associated with colon cancer metastasis

WGCNA of the top 3,000 genes yielded 10 distinct gene modules (Fig. 3A).
Next, the correlation between the module eigengenes and clinical phenotypes (including OS, OS time, tumor-node-metastasis stage, tumor stage and the abundance of the five immune cell populations showing differential accumulation in the above screening) was estimated (Fig. 3B). The blue module (544 genes), which was significantly correlated with M stage, was selected as the key module; it was also significantly correlated with OS and with the abundance of activated natural killer cells and M1 macrophages. To further identify lncRNAs and mRNAs significantly correlated with M stage in this module, inter-group differential expression analysis of the genes in the module was conducted. The results showed that the expression of 122 mRNAs and 11 lncRNAs differed significantly between the M1 and M0 groups. Furthermore, enrichment analysis suggested that these mRNAs were mainly involved in 355 Gene Ontology biological processes, 67 KEGG pathways and 24 Reactome gene sets. More importantly, 30 functional clusters were identified based on their genetic similarity. As shown in Fig. 3C, pathways connected to bacterial and immune responses were enriched, suggesting that the bacterial community and immunity might play a key role in metastasis formation.

ceRNA network construction
Target gene prediction for the 15 miRNAs found to be closely related to the intestinal flora during the formation of colon cancer metastasis yielded a total of 63 miRNA-mRNA interacting pairs, including 12 miRNAs and 33 mRNAs (Fig. 4A). KEGG pathway enrichment analysis showed that only eight miRNA target genes were involved in KEGG pathways (Fig. 4B). Furthermore, co-expression analysis of the lncRNAs and mRNAs related to colon cancer metastasis revealed 445 lncRNA-mRNA co-expression pairs, including 11 lncRNAs and 122 mRNAs (Fig. 4C). Finally, based on the above lncRNA-miRNA, miRNA-mRNA and lncRNA-mRNA interacting pairs, a ceRNA network and a Sankey diagram were constructed, containing three upregulated lncRNAs, six downregulated miRNAs and 17 upregulated mRNAs (Fig. 4D and E).

Validation and Kaplan-Meier survival analysis of the selected RNAs
Survival analysis of the RNAs in the ceRNA network was performed. Analysis of the Kaplan-Meier curves revealed that TMEM176A and PALM3 were significantly associated with prognosis. Specifically, patients exhibiting high TMEM176A or PALM3 expression displayed shorter survival times than patients showing low expression of these genes (Fig. 5A). In addition, the expression profiles of the RNA molecules in the M0 and M1 groups were verified using Gene Expression Omnibus datasets. The results revealed that most of the mRNAs tended to be upregulated in M1 samples, consistent with the findings in the analyzed dataset. In particular, the levels of GGT7, PRDX5, PTP4A3 and TMEM176B were higher in the M1 group than in the M0 group (P < 0.05; Fig. 5B). Moreover, the lncRNAs SATB2-AS1 and OSER1-AS1 were upregulated in the M1 group compared with the M0 group (P < 0.05).

Discussion
The survival of patients with tumor metastasis significantly affects the organization of individualized treatments. In colon cancer, patients with metastasis have worse survival outcomes than those without metastasis, with a 5-year survival rate of only 14.0% (Provenzale et al. 2018).
Interactions between miRNAs from intestinal epithelial cells and intestinal microbes are critical for maintaining intestinal health and ameliorating gastrointestinal diseases such as colon cancer (Bi et al. 2020). Hence, the aim of this study was to investigate whether the gut microbiome alters the tumor microenvironment by influencing the expression of miRNAs that may promote cancer metastasis. Our findings revealed a number of differentially accumulated genera and 58 differentially expressed miRNAs between M1 and M0 tissues, and a total of 16 miRNA-genus co-expression pairs containing eight microbial genera and 15 miRNAs were screened. Within these co-expression pairs, Porphyromonas and Bifidobacterium spp. were found to be associated with most miRNAs, and hsa-miR-3943 was targeted by both Porphyromonas and Dialister spp. A previous study suggested that oral administration of Porphyromonas gingivalis, which alters the gut microbiome and the serum metabolome, is associated with impaired gut barrier function, resulting in endotoxemia and subsequent inflammation of the liver and adipose tissue (Kato et al. 2018). Moreover, Wang and colleagues (2021) have reported that P. gingivalis can promote colorectal carcinoma by activating the hematopoietic inflammasome. miR-3943 is involved in the development of resected gastric cancer and can be used as an independent prognostic biomarker (Woo et al. 2021); however, the role of miR-3943 in colon cancer has not yet been reported. Our findings revealed that hsa-miR-3943 is associated with colon cancer metastasis and targeted by Porphyromonas spp., suggesting that Porphyromonas spp. in the intestinal tract participate in colon cancer metastasis through the regulation of hsa-miR-3943 expression. Antitumor immune memory is essential for long-term prevention of tumor recurrence and metastasis. Accumulating evidence suggests that the tumor immune microenvironment plays a crucial role in modulating antitumor immunity and is associated with tumor progression (Xia et al. 2021). Moreover, immune cells can respond rapidly to changes in the tumor microenvironment in different diseases (Wu et al. 2020, Liu and Li 2021). The data of this study indicated that five immune cell types, namely, activated natural killer cells, M1 macrophages, resting mast cells, activated mast cells and neutrophils, were differentially accumulated between the M1 and M0 groups. In addition, this study identified 122 mRNAs and 11 lncRNAs related to colon cancer metastasis, and enrichment analysis suggested that these mRNAs were mainly involved in pathways linked to bacterial and immune responses, suggesting that the bacterial community and immunity might play a key role in the formation of metastasis. In addition, based on the RNAs related to colon cancer metastasis, a ceRNA network containing three upregulated lncRNAs, six downregulated miRNAs and 17 upregulated mRNAs was constructed. Moreover, survival analysis of the RNAs in the ceRNA network showed that TMEM176A and PALM3 were significantly associated with the prognosis of patients with colon cancer. Transmembrane protein 176A (TMEM176A) is located on the human chromosome region 7q36.1, which often displays loss of heterozygosity (Kimmel et al. 2006). Notably, abnormal expression of TMEM176A has been reported to be related to cancer pathology; this also points to TMEM176A as a promising potential therapeutic target for the treatment of some cancer types (Cuajungco et al. 2012).
In particular, Gao and colleagues (2017) have reported that TMEM176A methylation is involved in the progression of human colon cancer and can be used as an independent prognostic marker. Paralemmin 3 (PALM3), belonging to the PALM protein family, was first described in Xenopus laevis as Xlgv7/Xlcaax-1 (Cornish et al. 1992), and can act as an adaptor connecting intrinsic membrane proteins to each other, to the cytoskeleton, or to motor proteins (Hu et al. 2005, Chen et al. 2011). Previous studies have demonstrated that downregulation of PALM3 can improve the survival rate, ameliorate the severity of lung injury and inhibit the production of pro-inflammatory cytokines in rats (Chen et al. 2017); however, its role in colon cancer had not been investigated until now. In conclusion, through the analysis of M1 vs. M0 samples, this study identified several intestinal microbial genera, such as Porphyromonas and Bifidobacterium, that are associated with miRNAs such as hsa-miR-3943 in metastatic colon cancer. Moreover, the mRNAs associated with colon cancer metastasis are mainly involved in pathways connected to bacterial and immune responses. Furthermore, TMEM176A and PALM3 in the ceRNA network were found to be significantly associated with the prognosis of patients with colon cancer. Taken together, the findings of this study reveal a potential mechanism by which the intestinal microbiota influences the microenvironment of colon cancers by targeting miRNAs.

Supplementary data
Supplementary data are available at FEMSLE online.

Data availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflict of interest statement. The authors declare that they have no competing interests.
Ubiquitin binding site of the ubiquitin E2 variant (UEV) protein Mms2 is required for DNA damage tolerance in the yeast RAD6 pathway.

Different ubiquitin modifications to proliferating cell nuclear antigen (PCNA) signal distinct modes of lesion bypass in the RAD6 pathway of DNA damage tolerance. The modification of PCNA with monoubiquitin signals an error-prone bypass, whereas the extension of this modification into a Lys-63-linked polyubiquitin chain promotes error-free bypass. Chain formation is catalyzed by the Mms2/Ubc13 conjugating enzyme variant/conjugating enzyme (UEV·E2) complex together with the Rad5 ubiquitin ligase. In vitro studies of this UEV·E2 complex have identified a ubiquitin binding site that is mainly localized on Mms2. However, the role of this site in DNA damage tolerance and the molecular features of the ubiquitin/Mms2 interaction are poorly understood. Here we identify two molecular determinants, the side chains of Mms2-Ile-57 and ubiquitin-Ile-44, that are required for chain assembly in vitro and error-free lesion bypass in vivo. Mutating either of these side chains to alanine elicits a severe 10-20-fold inhibition of chain synthesis that is caused by compromised binding of the acceptor ubiquitin to Mms2. These results suggest that the ubiquitin binding site of Mms2 is necessary for error-free lesion bypass in the RAD6 pathway and provide new insights into ubiquitin recognition by UEV proteins.

Ub is a highly conserved, 76-amino acid protein that regulates multiple intracellular pathways. The functions of Ub in cell cycle progression, induction of the inflammatory response, antigen presentation, protein trafficking, and other processes all rely on Ub conjugation to lysine residues of target proteins through the carboxyl group of Ub-Gly-76 (1-3). This modification frequently results in degradation of the target protein by 26 S proteasomes (4), but ubiquitination can also lead to other functional outcomes (1-3). Ub conjugation involves the sequential action of activating (E1), conjugating (E2), and ligase (E3) enzymes (2). The E1 and E2 enzymes both form thiol ester adducts with Ub, and both intermediates are strictly required for substrate ubiquitination. Most organisms possess a single E1, multiple E2s, and even more E3s. Individual E3s, many of which harbor a zinc-binding RING domain, act in conjunction with specific E2s to recognize individual substrate proteins, thus imparting specificity in ubiquitination (2,5). Substrates can be modified with one Ub, several single Ubs, or multiple Ubs in the form of an isopeptide-linked polyUb chain. These various modifications, especially mono- versus polyubiquitination, often lead to qualitatively different outcomes (2,3). Further variation in Ub signaling can result from the use of different lysine residues in polyUb chain assembly (6). For example, Lys-48-linked chains signal targeting to proteasomes (7,8), whereas Lys-63-linked chains constitute a nondegradative signal in several pathways, including ribosomal protein synthesis (9), kinase activation (1), and membrane protein trafficking (3,10). Mechanisms governing selectivity in the synthesis and recognition of these and other non-canonical polyUb signals are only beginning to be understood (6). Besides the roles mentioned above, Lys-63-linked polyUb chains are required for a DNA damage tolerance pathway that has been most thoroughly characterized in the budding yeast Saccharomyces cerevisiae.
The components of the RAD6 pathway are highly conserved from yeast to humans (11-14); their actions promote the bypass of DNA photo- or alkylation adducts that would otherwise block the progression of the replicative DNA polymerase. All events in this pathway depend on the Ub-conjugating activity of the Rad6·Rad18 (E2·E3) complex (14-16), but the pathway subsequently divides into two branches. One branch, defined by MMS2, UBC13, and RAD5, mediates error-free bypass of DNA lesions (17-21), possibly through the recruitment of an undamaged template strand (12,13). The other subpathway, defined by REV3 and REV7, facilitates mutagenic bypass through use of the damaged strand as a template for a translesion polymerase (12,13). Lys-63-linked polyUb chains signal selectively in the error-free branch of the RAD6 pathway (11,18,22). Recent studies have identified the essential DNA polymerase processivity factor PCNA as a major target of Ub modifications during DNA lesion bypass (22-24). Monoubiquitination of PCNA Lys-164, catalyzed by the Rad6·Rad18 complex, signals error-prone repair (22,23), probably by promoting the recruitment of a translesion polymerase (25,26). This same Ub modification also serves as the substrate for extension of a Lys-63-linked polyUb chain, which promotes error-free lesion bypass (22). Chain formation requires a ternary complex composed of a RING E3, Rad5, and the heterodimeric Mms2·Ubc13 complex (18,19,22). Ubc13 is a canonical E2 enzyme, whereas Mms2 is one of a small family of UEV proteins, which resemble E2s but lack the defining E2 active site cysteine residue (27). The Mms2·Ubc13 complex functions as an E2 that is specialized for the assembly of Lys-63-linked polyUb chains (18,28). Interestingly, the same lysine residue of PCNA that receives Ub modifications (Lys-164) is also a target for small ubiquitin-like modifier (SUMO) modification (22,23). The functional consequences of sumoylation remain uncertain. It is not yet known how the individual biochemical reactions necessary for chain extension from monoubiquitinated PCNA (or other potential targets of Lys-63 polyubiquitination) are apportioned among the three proteins in the Rad5·Mms2·Ubc13 complex. The binary Mms2·Ubc13 complex synthesizes unanchored Lys-63-linked polyUb chains from free Ub in vitro (18,29). Structure-activity studies and molecular modeling identified a likely binding site for the "acceptor" Ub on the Mms2·Ubc13 complex (30), where the acceptor Ub is defined as the molecule that donates Lys-63 to the nascent isopeptide bond. Recent solution structural studies carried out by Ellison and co-workers (31,32) have confirmed that Ub indeed binds to this site in a manner similar to that originally predicted. However, the finding that Rad5 is strictly required for PCNA polyubiquitination in vivo (22) challenges the relevance of this site in PCNA polyubiquitination. Here we show that the side chains of Mms2-Ile-57 and Ub-Ile-44 are required for Ub binding to the acceptor site of the Mms2·Ubc13 complex, for chain assembly in vitro, and for DNA damage tolerance in vivo. These results strongly suggest that the acceptor Ub binding site is necessary for chain assembly in vivo.

Plasmids and Proteins
Existing centromeric URA3-marked yeast plasmids (18,30) encoding H10-Mms2 or Ubc13-HA3 (the latter including the intron of UBC13) under the control of the respective endogenous promoters were mutated by standard PCR methods.
All mutated open reading frames (see also below) were sequenced in their entireties. Sequences of primers used in mutagenesis and cloning are available upon request. URA3-marked 2μ plasmids encoding wild type Ub or Ub-I44A were provided by L. Hicke (Northwestern University) (34).

Yeast Extracts and Western Blotting
Cells (2 optical density units) from log-phase cultures of SUB413 (with or without the Ub-I44A-expressing plasmid) were pelleted, washed, and then sonicated in 90 μl of alkaline extraction buffer as previously described (40). Aliquots were analyzed by Western blotting with affinity-purified polyclonal antibodies directed against Ub (produced by these authors). Blots were developed colorimetrically by means of an alkaline phosphatase-conjugated secondary antibody (Bio-Rad).

Assays
UV sensitivity was assayed in triplicate as previously described (18). Chain assembly was assayed at pH 7.6 and 37 °C and monitored by SDS-PAGE (29,30). The standard steady-state assay (detection by Coomassie staining) contained 0.1 μM E1, 2 μM each Ubc13 and Mms2, and 117 μM Ub. To determine initial rates of Ub2 synthesis, Ub74 or Ub-Asp-77 (117 μM) was used as the acceptor in conjunction with full-length 125I-Ub (2-6 μM). Reactions were terminated during the linear phase of 125I-Ub2 synthesis, and this product was quantitated (following autoradiography) by band excision/counting or phosphorimage analysis (18). In some cases enzyme concentrations were adjusted, and/or modified forms of Ub were used, as described in the figure legends. Published procedures were used to produce Lys-63-linked Ub4 carrying the K63R mutation in its distal Ub and an extra residue (Asp-77) in its proximal Ub (29); this chain is chemically inert in the Mms2/Ubc13-catalyzed reaction. The integrity of the Mms2/Ubc13 interaction was assayed by monitoring the binding of untagged Ubc13 to H10-Mms2 immobilized on nickel beads as in our previous work (30). Expression levels of H10-Mms2 and Ubc13-HA3 in yeast cell extracts were monitored by Western blotting with antibodies (Santa Cruz) directed against the His10 tag (Sc-803) and the hemagglutinin tag (Sc-805), respectively.

A Model for Ub Interaction with the Mms2·Ubc13 Complex
Structural and modeling studies have identified two Ub binding sites on the Mms2/Ubc13 heterodimer (30,32). One of these sites (Fig. 1) interacts with the Ub molecule bound to the active site cysteine residue of Ubc13. We define this site, which is entirely located on Ubc13, as the "donor" site. The other site mainly encompasses a concave lower surface of Mms2 (as oriented in Fig. 1). This site binds the Ub that provides Lys-63 to the isopeptide bond. In the discussion below we refer to this as the "acceptor" site. We proposed the existence of donor and acceptor sites based on the results of modeling and site-specific mutagenesis (30). The donor site is a universal feature of E2 enzymes (32,41), but so far, the acceptor site is unique to the Mms2-containing heterodimer. Solution structural studies of Ellison and co-workers (32) have shown that Ub indeed binds non-covalently to the acceptor site, which is primarily located on Mms2.

An Mms2-interacting Residue of the Acceptor Ub (Ile-44) Is Important for Chain Synthesis in Vitro and DNA Damage Tolerance in Vivo
Our goal in the present work was to test the involvement of the acceptor Ub site in the biochemical and biological activities of the Mms2·Ubc13 complex.
To identify residues of Ub that are important for its binding to this site, we began with three residues that comprise a hydrophobic patch on the surface of Ub. The side chains of Leu-8, Ile-44, and Val-70 are important for the recognition of Lys-48-linked polyUb chains by proteasomes and for several other Ub-dependent processes (34,42,43). In addition, NMR results indicate that the surface encompassing these residues contacts the acceptor Ub binding site of the human Mms2·Ubc13 complex (31). To test the functional significance of such contacts, each of the three residues was individually mutated to alanine, and the purified mutant proteins were screened in steady-state chain assembly assays with the yeast Mms2·Ubc13 complex. As shown in Fig. 2A, chain synthesis with the L8A and V70A mutants proceeded at a similar rate to the reaction with wild type Ub (lanes 7-14 versus 1-3; the anomalous migration of Ub-L8A seen in lanes 7-10 is frequently observed). In contrast, the Ub-I44A mutation caused a profound defect in chain synthesis (Fig. 2A, lanes 4-6). Because the Ub-I44A mutation is lethal (34), we used an indirect approach to ask whether the in vitro defect conferred by this mutation correlated with a deficiency in error-free DNA lesion bypass. A plasmid specifying wild type Ub or Ub-I44A was transformed into a yeast strain engineered to express Ub-K63R as the sole form of Ub (11). The parent strain, called SUB413, is UV light sensitive because of a specific defect in error-free lesion bypass that is due to the inability of Ub-K63R to be assembled into Lys-63-linked chains (Fig. 2B, open circles) (11,18). As expected, expression of wild type Ub from a control plasmid complemented this phenotype (filled circles). Note that Ub-K63R does not act as a dominant chain terminator, presumably due to rapid disassembly and re-synthesis of chains within cells (8). In contrast, expression of Ub-I44A in SUB413 failed to complement the UV light-sensitive phenotype of this strain (Fig. 2B, filled squares), despite robust expression as evidenced by increased levels of free and conjugated Ub (Fig. 2C). We postulate that the inability of Ub-I44A to promote DNA damage tolerance reflects the inability of this mutant to be used as a substrate for Mms2/Ubc13-catalyzed synthesis of Lys-63-linked chains (Fig. 2A).

Polar Effect of Ub-I44A Mutation
We next sought to determine whether Ub-I44A was defective in binding to one or to both of the Ub binding sites in the Mms2·Ubc13 complex. To address this question we created two versions of Ub-I44A, each of which could perform only one role (i.e. donor or acceptor). Blocking the C terminus of Ub via biosynthetic introduction of an extra residue (Asp-77) prevents activation of Gly-76; this blocks the donor function of Ub (35). To ablate the acceptor function of Ub we blocked Lys-63 (and other lysines) through reductive methylation (39,44). Prior studies with several conjugation systems have shown that the presence of Asp-77 does not inhibit the acceptor activity of Ub and that reductively methylated Ub76 is functional as a donor (18,35,39,44). As expected, the Ub-Asp-77 and reductively methylated Ub76 proteins carrying the (wild type) isoleucine side chain at position 44 were efficiently conjugated to each other by the Mms2·Ubc13 complex to generate Ub2 (Fig. 3A, lanes 1-4). Introducing the I44A mutation into the Ub-Asp-77 acceptor virtually eliminated Ub2 synthesis (Fig. 3A, lanes 5-8),
whereas introducing this mutation into the reductively methylated Ub76 donor did not detectably inhibit chain formation (lanes 9-12). Thus, the Ub-I44A mutation has a polar effect on the Mms2/Ubc13-catalyzed reaction; it profoundly inhibits acceptor Ub functionality but has no effect when Ub is bound at the donor site. (Although we cited this result in an earlier publication (30), these data have not been presented previously.) Quantitative reactions with a radiolabeled donor Ub confirmed these conclusions. Introducing the I44A mutation into the acceptor Ub inhibited by 90-95%, almost equivalent to the effect of removing the reaction site altogether through introduction of the Ub-K63R mutation (Fig. 3B, open circles and squares versus filled circles). In marked contrast, introducing the I44A mutation into the labeled donor Ub had a negligible effect (≤20% inhibition, data not shown). Competition experiments were used to confirm that the I44A mutation inhibited acceptor Ub binding and not a downstream catalytic step. As shown by the filled circles in Fig. 3C, mono-Ub carrying the K63R,Asp-77 double mutation inhibits Ub2 synthesis in a concentration-dependent manner as a result of the binding of this (chemically inert) mutant to the acceptor site. The value of Ki(app) of ~100-200 μM is consistent with previously estimated Kd values of 150-400 μM for acceptor Ub binding to this (yeast) complex (29). A stronger interaction, Kd of ~30 μM, has been reported for the acceptor site of human Mms2 (45). When Ub-I44A,D77 was used as a competitor, inhibition was greatly reduced (Fig. 3C, open circles). The data indicate an increase of ≥4-fold in Ki(app), although the inhibition is too weak to allow a reliable determination of the new value. The Ub-I44A competitor carries a lysine side chain at position 63. If labeled Ub were transferred to this site, inhibition would be underestimated. However, data shown in Fig. 3B indicate that there is negligible transfer to Ub-I44A bound at the acceptor site. Taken together, the data presented in Fig. 3 indicate that the Ub-I44A mutation selectively compromises the interaction of Ub with the acceptor site of the Mms2·Ubc13 complex. These results reinforce the notion that the inability of Ub-I44A to support error-free bypass in vivo (Fig. 2) reflects compromised binding to this site. Our standard catalytic assays were done at a Ub concentration (117 μM) that is below saturation for the acceptor site. Thus, the ability of Ub-L8A and Ub-V70A to support chain synthesis (Fig. 2A) suggests that, unlike Ile-44, the Leu-8 and Val-70 side chains do not make important contributions to the interaction of Ub with the acceptor site of yeast Mms2. (Note that the donor site is saturated in this assay, because its occupancy is mainly determined by the upstream interaction of Ub with E1, which follows Km of ~1-2 μM for wild type Ub (46,47).)

[Displaced legend (Fig. 2): error bars that cannot be seen are smaller than the diameter of the symbol. C, Ub-I44A is strongly expressed (anti-Ub Western blot). Extracts from log-phase SUB413 cells transformed with empty vector (lanes 1 and 3) or the Ub-I44A-expressing plasmid (lanes 2 and 4) were analyzed (lanes 1 and 2, 2 μl of extract; lanes 3 and 4, 4 μl of extract). Migration of molecular mass standards is shown at the left. Coomassie staining of duplicate lanes confirmed equal loading of the sample pairs (not shown). Ub1 and Ubn refer to mono-Ub and Ub-conjugated proteins (the latter including free polyUb chains), respectively.]
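For readers unfamiliar with how a Ki(app) is read off such competition curves, the standard relations for a simple competitive ligand are sketched below; this is the generic scheme implicitly assumed when treating the inert Ub derivatives as acceptor-site competitors, not a fit performed in the original work:

```latex
v \;=\; \frac{V_{\max}\,[S]}{K_m\!\left(1+\dfrac{[I]}{K_i}\right)+[S]},
\qquad
\frac{v_i}{v_0}\;\approx\;\frac{1}{1+[I]/K_{i(\mathrm{app})}}
\quad\text{when } [S]\ll K_m .
```

Under the subsaturating acceptor concentrations noted above, half-maximal inhibition thus occurs near [I] = Ki(app), consistent with reading the ~100-200 μM value directly from the competition curves.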
Role of Mms2-Ile-57 in Acceptor Ub Binding
Based on our model for acceptor Ub binding to the Mms2·Ubc13 complex, it appeared possible that Ub-Ile-44 could contact Mms2 near the side chain of Mms2-Ile-57 (Fig. 1). As a first test of this model, we introduced the I57A mutation into Mms2 and compared the purified mutant protein to wild type Mms2 in a steady-state chain synthesis assay. The I57A mutant displayed strongly reduced activity (Fig. 4A). Quantitative assays with 125I-labeled donor Ub showed that the initial rate of Ub2 synthesis was reduced by 15-fold (data not shown). Control experiments showed that wild type Mms2 and the I57A mutant bound with similar efficiencies to wild type Ubc13 (data not shown). Therefore the defect in chain synthesis seen with Mms2-I57A cannot be attributed to a weakened interaction with Ubc13, consistent with the large distance between Mms2-Ile-57 and the heterodimer interface (Fig. 1 (30,48)). To test whether the heterodimer containing Mms2-I57A was impaired in its ability to bind the acceptor Ub, we again turned to competition assays. Here we used catalytically inert Lys-63-Ub4 as the competitor (see "Materials and Methods"). The weak competition seen in assays involving Mms2-I57A (Fig. 4B, open circles), in comparison to the result with wild type Mms2 (filled circles), indicates that Mms2-Ile-57 plays an important role in acceptor Ub binding. The I57A mutation in Mms2 led to a ≥6-fold increase in the value of Ki(app).

Mms2-Ile-57 Is Important for DNA Damage Tolerance
We next tested whether the in vitro defect conferred by the Mms2-I57A mutation (Fig. 4, A and B) translates into compromised DNA damage tolerance in vivo. A low copy yeast plasmid specifying Mms2-I57A under the control of the endogenous MMS2 promoter was transformed into an mms2Δ strain of S. cerevisiae. As shown in Fig. 4C, expression of Mms2-I57A afforded minimal rescue of the UV light-sensitive phenotype of the mms2Δ strain (filled squares versus open circles) in comparison to results obtained for wild type Mms2 in the same vector backbone (filled circles). Blotting experiments confirmed that the mutant protein was expressed at a level comparable to wild type Mms2 (data not shown). Thus, Mms2-Ile-57 is important for chain synthesis in vivo.

Mutation of Ubc13-Asp-81 Inhibits DNA Damage Tolerance
Ubc13-Asp-81 is predicted to lie near the boundary of the acceptor site in the Mms2·Ubc13 complex (Fig. 1) (30). We found in earlier work that mutating Ubc13-Asp-81 to arginine or alanine inhibited in vitro chain synthesis completely or severely, respectively (30). To address the importance of Ubc13-Asp-81 in vivo, we expressed each of these mutant Ubc13 proteins in a ubc13Δ strain. As shown in Fig. 4D, the phenotype of the ubc13-D81R strain (filled squares) was indistinguishable from that of the null strain (open circles), whereas expression of the D81A mutant (filled triangles) provided weak rescue. Both proteins were expressed comparably to wild type Ubc13 expressed in the same vector backbone (data not shown). These phenotypes correlate well with the biochemical effects of the corresponding mutations.

DISCUSSION
The ability of chemically distinct polyUb chains to act as functionally distinct signals can be appreciated by comparing the properties of Lys-48-linked chains with Lys-63-linked chains. Lys-48-linked chains are the principal signal for targeting to proteasomes (7,8), whereas Lys-63-linked chains are nonproteolytic signals in several different pathways.
For example, the modification of TRAF6 with Lys-63-linked chains activates a specific protein kinase upstream of IκBα kinase, ultimately leading to phosphorylation and degradation of IκBα, translocation of NF-κB into the nucleus, and the induction of inflammatory responses (1). In the Ub-dependent DNA damage tolerance pathway studied here, the modification of PCNA with a Lys-63-linked chain promotes an error-free mode of DNA lesion bypass (11,22). Because TRAF6 and PCNA are metabolically stable proteins, Lys-63-linked chains do not elicit target protein degradation in these signaling pathways. Instead of being recognized by proteasomes, the non-canonical TRAF6-linked chains bind to the kinase adaptor proteins TAB2 and TAB3 (49); the receptor of the PCNA-bound chain signal is not yet known. Although the target protein that is modified by the non-canonical chain differs between the two signaling pathways, there are notable similarities in the biochemistry of signal generation. In both cases, a heterodimeric UEV·Ubc13 complex collaborates with a RING domain E3 to generate the chain. Here we have identified specific molecular determinants of the UEV·Ub interaction in the DNA damage tolerance pathway. The Ub-based determinant, the Ile-44 side chain, is essential for viability in budding yeast (34). This side chain is important for the recognition of mono-Ub signals in the non-essential process of endocytosis (34) and for the recognition of Lys-48-linked chains by proteasomes, which is an essential process (8,43). Ub-Ile-44 is known to make an important contribution to the affinity of mono-Ub for representative members of two families of Ub-interacting elements, the ubiquitin-interacting motif (50,51) and CUE (52-54) domains. The present results add the UEV protein Mms2 to this list. This conclusion is based on the profound inhibition of Mms2/Ubc13-catalyzed chain synthesis caused by the Ub-I44A mutation (Fig. 2A), an effect that is specifically due to compromised interaction of this mutant Ub with the acceptor site of the heterodimeric UEV·E2 complex (Fig. 3, A and B). Although the weak affinity of wild type mono-Ub for this site of the yeast Mms2·Ubc13 complex precluded a direct determination of the affinity of Ub-I44A, the results of competition studies (Fig. 3C) indicate that reduced binding to this site is the principal cause of the catalytic defect. The other major constituents of the hydrophobic patch of Ub do not seem to contribute importantly to the Mms2/Ub interaction, because the individual Ub-L8A and Ub-V70A mutations have no detectable effect on the kinetics of chain synthesis at a subsaturating Ub concentration (Fig. 2A). NMR chemical shift perturbation studies indicate that the hydrophobic patch of Ub contacts Mms2 in the human Ub·Mms2 complex (31). Our finding that the Ile-44 side chain plays a unique role in mediating the interaction is consistent with these results but represents a significant refinement. Further studies will be necessary to determine whether Ub-Ile-44 also plays a predominant role in the interaction of Ub with human Mms2. The higher affinity of the latter interaction may reflect an important energetic contribution of additional side chains to binding, as seen in recent studies of ubiquitin-associated domains (55). Indeed, we found that Ub-I44A is fully functional when covalently bound at the donor site in Ubc13 (Fig. 4A), even though NMR data suggest that the Ub-Ile-44 side chain contacts human Ubc13 and yeast Ubc1 in their respective Ub thiol ester complexes (32,41).
Ub-Ile-44 is also located at the interface in complexes of mono-Ub with representative NZF and ubiquitin-associated domains (55-58). To our knowledge, an important energetic contribution of the Ile-44 side chain to binding has not been established in these cases. Although we have not systematically addressed how affinity depends on chain length, data obtained in the present study suggest that Lys-63-Ub4 binds only ~2-fold more tightly than mono-Ub to the acceptor site (Fig. 3C versus 4B). This is not surprising given the limited area of this site (Fig. 1). Because Ub-Ile-44 remains exposed in the extended, open conformation of Lys-63-linked Ub2 (59), each Ub in Lys-63-Ub4 is likely to be accessible for interaction; this will enhance affinity through a simple concentration effect. These considerations, as well as the positions of the donor/acceptor sites, argue for a distributive (versus processive) mechanism of chain extension by the isolated Mms2/E2 complex (32). Earlier kinetic data are largely in agreement with this model (29). It remains possible that additional relevant conjugation factors (Rad5 and the Rad6·Rad18 complex) increase processivity in vivo. Based on our model for acceptor Ub binding (30), we postulated that Mms2-Ile-57 might be part of the acceptor Ub binding site. The current results confirm this hypothesis. The I57A mutation has no effect on the Mms2/Ubc13 interaction, but it caused a severe inhibition of chain assembly (Fig. 4A) that could be assigned, through competition studies, to reduced binding of Ub at the acceptor site (Fig. 4B). Ile-57 is positioned at the beginning of the third β-strand of yeast Mms2, and there is an isoleucine residue at the orthologous position in human Mms2 (30). Although the chemical shift of the amide nitrogen of this isoleucine is not perturbed upon Ub binding to human Mms2, the amide nitrogen of the next residue is strongly perturbed (32). Thus, this region of Mms2 is likely to constitute a conserved Ub interaction surface. Mms2-Ile-57 is also conserved in mammalian Uev1A, which plays an Mms2-like role in the cytosolic IκBα kinase activation pathway (28). We also mutated a residue of Mms2 that is predicted to be more centrally located in the acceptor site. The Mms2-S33A mutation (see Fig. 1) had no detectable effect on chain synthesis in vitro or UV sensitivity in vivo (data not shown). These results exclude an important contribution of the Ser-33 hydroxyl to acceptor Ub affinity. It remains possible that this side chain helps to generate shape complementarity. We proposed previously that Ubc13-Asp-81 resides at the border of the acceptor site (Fig. 1), because mutating this residue inhibited catalytic activity severely without impeding Ub occupancy of the donor site (30). Although competition studies revealed detectably weakened binding of the acceptor Ub to the Mms2·Ubc13-D81A complex (see Fig. 5C of Ref. 30), this effect was modest relative to the effects of the Mms2-I57A and Ub-I44A mutations in the present study (Figs. 3C and 4B). The new results presented here suggest that Ubc13-Asp-81 does not play a major role in determining the affinity of the acceptor Ub. Further work is needed to explain why mutating this residue inhibits catalysis so strongly. Perhaps Ubc13-Asp-81 helps to position Ub-Lys-63 correctly in the Ubc13 active site. We observed a perfect correlation between in vitro and in vivo effects of the mutations studied here (Figs. 2-4),
suggesting that the Mms2/Ubc13-catalyzed reaction contributes to rate limitation in vivo in our experimental system. (This may not be true in all yeast strains (60).) This outcome is most significant in the case of the Mms2-I57A mutation, which has no known effect besides reducing the affinity of the acceptor Ub. Thus, the UV light sensitivity conferred by this mutation (Fig. 4C) strongly argues that the Ub binding site of Mms2 remains relevant when Rad5 participates in chain synthesis. The location of the Rad5 binding site at the top of Ubc13 (as oriented in Fig. 1 (30,48,60)), distant from the acceptor site of Mms2, is consistent with this model. The precise role of Rad5 is uncertain, but it is likely that its interaction with the Rad18·Rad6 complex (19) helps to guide the Mms2/Ubc13 heterodimer to monoubiquitinated PCNA. Rad18 possesses autonomous affinity for PCNA, and the two proteins associate in mammalian cells (25,26). Such a bivalent interaction with PCNA could help to overcome the weak intrinsic affinity of Ub for the acceptor site of Mms2. Understanding the principles that enforce linkage specificity in polyUb chain synthesis, recognition, and disassembly is an important challenge, because these principles underlie signaling specificity and, in some cases, may guide pharmacologic intervention (61). Here we have identified molecular determinants that are important for binding of the acceptor Ub to the UEV protein Mms2. Our finding that Ub-Ile-44 is an important determinant reinforces the notion that linkage specificity in this particular reaction arises through an indirect mechanism (30). The Ub-Ile-44 determinant is distant from the site of chemical reaction, but contact with this side chain acts to exclude lysine residues other than Lys-63 from the active site of Ubc13. In marked contrast, the SUMO-specific conjugating enzyme Ubc9 directly binds side chains immediately adjacent to the target lysine, generating specificity for lysines within a particular consensus sequence (62). Therefore, target lysine specificity in the ligation of Ub family proteins can arise through at least two different mechanisms (63). Interestingly, the other well characterized UEV protein, Tsg101/Vps23, interacts with mono-Ub through a different UEV surface that is mainly contributed by a unique insertion in the primary structure of Tsg101 (64,65). (Although Ub-Ile-44 is centrally located in the Tsg101/Ub interface, its quantitative contribution to affinity has not been determined.) The outcome of the Ub/UEV interaction also differs between the two proteins. Tsg101 promotes the entry of monoubiquitinated cargo proteins into multivesicular bodies, whereas Mms2 acts in conjunction with Ubc13 to catalyze a specific chemical reaction (63). Diverse modes of Ub recognition by these and other Ub/polyUb receptors will contribute to the selectivity of Ub-dependent signaling.
On the Connection of Matroids and Greedy Algorithms

ABSTRACT - Matroids are combinatorial structures, and greedy algorithmic methods always produce optimal solutions for these mathematical models. A greedy method selects the option that looks best at each step of the process of finding an optimal solution. In other words, it makes a locally optimal choice in the hope that this locally chosen option will lead to a globally optimal solution. While selecting a locally optimal choice at each stage, greedy algorithms may not always yield optimal solutions; but if we can translate a problem into a matroid structure, then there must be a greedy algorithm that always yields an optimal solution for that problem. The range of problems solved optimally by greedy methods is large compared with the applicability of the matroid structure; in other words, the set of problems that can be translated into a matroid structure is a proper subset of the set of all problems for which a greedy algorithm produces an optimal solution. The matroid structure thus guarantees the globally optimal solution one can obtain with the help of the greedy approach. We study various logarithmic and linear hierarchy-based mathematical models from divergent sources to maximize information for research purposes. We analyze the time complexity and provide constraints on the upper/lower bounds in correspondence with the optimal (maximum/minimum) solution. We try to establish the relationship between the maximization of information divergences, optimal-likelihood theory, and classified sharing. We propose an integration of rough sets into matroids in this paper. In particular, we methodically devise upper and lower tightening bounds on rough matroids, which may expand up to the generic combinatorial matroid structure. The relationships are established by the upper and lower tightening bound approximations of generalized combinatorial rough sets based on different interdependent relation sets, respectively. As we define the generalized lower/upper bounds for a rough matroid, we define a new structure for the lower/upper greedoid, leading to a generalization of the greedoid. Additionally, based on the newly established relation, the generalized rough set also provides a theory of poset matroids.

1. INTRODUCTION
Greedy algorithms are effective and widely used algorithms that apply to a wide range of real-world problems to produce optimal (maximum/minimum) solutions. Each step of a greedy algorithm makes the locally best available choice at that particular moment of computation. In other words, this method selects a locally best (optimal) choice in the hope that the selected choice may lead to a globally optimal solution. Although greedy methods do not always produce optimal solutions, for a range of problems greedy methods do yield optimal results, and much more efficiently than the solution provided by dynamic programming for the same problem. A greedy algorithm produces an optimal solution to a given problem by selecting the best choice among a range of choices; decisions are made by selecting the best available choice at each stage. In this section we discuss the theory that underlies the combinatorial structures known as matroids. A greedy algorithm will always yield an optimal solution for these combinatorial structures (matroids).
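As a concrete anchor for the discussion that follows, the generic greedy schema over a weighted matroid can be sketched in a few lines of Python (function and argument names are ours; the independence oracle is supplied per matroid):

```python
def greedy_matroid(S, weight, is_independent):
    """Return a maximum-weight independent set of the matroid (S, I)."""
    A = set()
    for x in sorted(S, key=weight, reverse=True):  # best-looking choice first
        if is_independent(A | {x}):                # extend only if A stays independent
            A.add(x)
    return A
```

With the graphical matroid's acyclicity oracle (sketched in the next section), this schema specializes to a Kruskal-style construction of a maximum-weight spanning forest.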
A matroid is defined as an ordered pair P = (S, ξ) that satisfies the following conditions:
1. The set S is finite.
2. The set ξ is a nonempty family of subsets of S, called the independent subsets, satisfying the property that if Y ∈ ξ and X ⊆ Y, then X ∈ ξ. This property is called the hereditary property.
3. If X ∈ ξ, Y ∈ ξ, and |X| < |Y|, then there must be some element a ∈ Y − X such that X ∪ {a} ∈ ξ. We then say that P satisfies the exchange property.

The sets in ξ are generally known as independent sets. We can state that if A is a subset of any independent set of P, then A will itself be an independent set. The union of all sets in ξ is known as the ground set. If an independent set X is not a proper subset of another independent set, then X is called a basis. The exchange property implies that every basis of a matroid has the same cardinality [1][2].

Subgraph
A graph G′ = (Φ′, Ε′) is said to be a subgraph of a given graph G = (Φ, Ε) if all the vertices and edges in G′ are also in G. G′ is obtained by deleting some edges, some vertices, or both from G. In other words, (Φ′, Ε′) ⊆ (Φ, Ε) [3].

Tree
A tree is defined as a connected graph without any cycles; that is, a tree is a connected acyclic graph. The edges of a tree are known as branches. Note that a tree must be a simple graph, without self-loops and without parallel branches forming loops [3]. Trees are one of the most important classes of graphs. Their importance is evident from their applications in various areas, especially theoretical computer science and molecular evolution. The various kinds of data structures referred to as trees in computer science are similar to trees in graph theory, except that computer science trees have directed edges [3].

Properties
- A tree with n vertices has n − 1 edges.
- A minimally connected graph is a graph that becomes disconnected if we remove any one of its edges. Clearly, there are no cycles in a minimally connected graph.
- A graph is a tree if and only if it is minimally connected.
- A simple graph with n vertices, n − 1 edges, and no cycles must be connected, and hence a tree.
- Any tree with at least two vertices has at least two pendant vertices.

3. GRAPHICAL MATROID: A DEMONSTRATION
Let us consider a graph G = (Φ, Ε), where Φ denotes the set of vertices and Ε denotes the set of edges, as shown in Figure 1. It contains several circuits, so it is not a tree. The acyclicity and independence properties of the matroid require the chosen subgraph to be a forest: the graphical matroid's sets are independent only when the corresponding edge set contains no cycle, and if any cycle exists, the independence property is no longer preserved. A graph can be a tree only if it is connected, that is, if each node is connected by a link to at least one other node; if a node is not connected to some other node, then the assembly is not a tree. Hence every operation involving the graphical matroid structure must stay on acyclic (and, for trees, connected) edge sets. For the graphical matroid MG = (SG, IG):
1. SG is the set of edges E of the given graph G.
2. If X is a subset of E, then X ∈ IG iff X is acyclic; that is, a set of edges X is independent iff the subgraph GX = (Φ, X) forms a forest.
3. IG is the family of all independent subsets of SG.

Verification of the Matroid Structure
To prove that the given structure satisfies the matroid structure, we have to verify the following properties.

Independent Structure
This is the fundamental and essential property that any matroid structure must satisfy.
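For the graphical matroid, this independence test is exactly an acyclicity check on the chosen edge set. It can be implemented with a disjoint-set (union-find) structure; a minimal Python sketch (names are ours) is:

```python
def is_forest(edges):
    """Graphic-matroid independence oracle: True iff `edges` form a forest."""
    parent = {}
    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # this edge would close a cycle
        parent[ru] = rv                    # merge two acyclic components
    return True

# Example: a path on three vertices is a forest; closing the triangle is not.
assert is_forest([(1, 2), (2, 3)])
assert not is_forest([(1, 2), (2, 3), (3, 1)])
```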
In the case of the graphical matroid, the independence property is acyclicity of the edge set. The reference structure MG = (SG, IG) is built on an acyclic graph: whenever the acyclicity of the graph is preserved, the matroid structure also remains preserved.
- The number of edges is five (finite); thus, it is a finite graph.
- We have defined SG = E, so SG is finite.
- The family of subsets of edges, IG, is therefore also finite, and hence the independent sets are well defined.

Exchange Property
Let us consider the reference graph again in Figure 3. The resultant graph G′X is nothing but the original graph GY = (Φ, Y) itself. A forest is an undirected graph all of whose connected components are trees. It is clear from the figure that if we connect two disjoint acyclic components, the resulting graph remains acyclic:
1. The resultant graph is still acyclic and thus preserves the independence property.
2. Adding an edge between two different components is safe and does not create a cycle as long as the components are themselves acyclic.
3. Thus, two components are merged into a single new component while preserving the original property.
4. Hence the exchange property is proved.

Hereditary Property
- Let A ∈ IG be a subset of a forest.
- Let B ⊂ A, which means the number of trees in B is no greater than that of A.
- Intuitively, removing edges from a forest leaves it a forest again.
- Therefore, it satisfies the independence property.
Consider the reference graph MG = (SG, IG) as in Figure 9. Removal of an edge ek does not create a cycle in the graph [3]; rather, it disconnects the graph, hence proving the hereditary property.

A Task-Scheduling Problem
In the task-scheduling problem, we optimally schedule unit-time tasks (activities) executing on a single-threaded processor [1]. Here every unit-time task is given a deadline and a penalty incurred if the task fails to meet that deadline. A unit-time task is a process that requires a single time unit to complete when allocated to a single-processor machine. More formally, we are provided with the following information:
- A set of n unit-time activities S = {a1, a2, ..., an}.
- n deadlines d1, d2, ..., dn, one for each activity ai.
- The inequality 1 ≤ di ≤ n is satisfied for each activity.
- A set of non-negative penalties w1, w2, ..., wn, incurred for each activity ai if it does not end by its deadline.

We can execute the activities of S in any of |S|! orders; the sequence thus obtained is called a schedule. An activity ai is late in a schedule if it finishes after its deadline, and early if it finishes before its deadline. We can always transform an arbitrary schedule into early-first form, in which the early tasks precede the late tasks; a schedule in which all early tasks precede all late tasks is said to be in canonical form. A canonical form can be translated into a matroid structure, and by this virtue a canonical schedule always yields an optimal solution [1]. The activities need to be sorted in increasing order of their deadlines or in decreasing order of their penalties; in either case, the theme of the presented algorithm does not change. Our goal is to minimize the total penalty for the late tasks, via the greedy procedure whose steps are listed next (a code sketch follows the numbered steps).

Algorithm
1. Scheduling the activities depends heavily on verifying the independence of a given set A of activities.
2. For n activities, any schedule needs at most n time units.
3. The activities are considered in monotonically decreasing (non-increasing) order of penalty.
4. Activities can also be sorted in increasing order of their deadlines without affecting the algorithm or its running time.
5. Nt(A) denotes the number of activities in A whose deadline is t or earlier. A set A is independent iff Nt(A) ≤ t for t = 0, 1, 2, ..., n (otherwise some task would have to finish after its deadline); this is the independence check.
6. By definition, N0(A) = 0.
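A hedged Python sketch of the greedy scheduler implied by these steps (the classic unit-time scheduling algorithm; the seven-task example instance is ours, chosen for illustration):

```python
def schedule(tasks):
    """tasks: list of (deadline, penalty). Greedily keep a maximum-penalty
    independent (on-time) set, placing each kept task in the latest free
    slot at or before its deadline; every other task is late."""
    n = len(tasks)
    order = sorted(range(n), key=lambda i: tasks[i][1], reverse=True)
    slot = [None] * n          # slot[t] = task executed during time unit t+1
    late = []
    for i in order:            # largest penalties considered first
        for t in range(min(tasks[i][0], n) - 1, -1, -1):
            if slot[t] is None:
                slot[t] = i    # latest free slot before the deadline
                break
        else:
            late.append(i)     # no free slot: the task will be late
    # Slots still None are filled by the late tasks in any order (early-first form).
    return slot, sum(tasks[i][1] for i in late)

# Deadlines and penalties for an illustrative seven-task instance:
tasks = [(4, 70), (2, 60), (4, 50), (3, 40), (1, 30), (4, 20), (6, 10)]
print(schedule(tasks))  # total penalty 50: the tasks with penalties 30 and 20 are late
```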
Peptide candidates for the development of therapeutics and vaccines against β-coronavirus infection

ABSTRACT
Betacoronaviruses (β-CoVs) have caused major viral outbreaks around the world in the last two decades. The mutation and recombination abilities of β-CoVs have resulted in zoonotic diseases in humans. Proteins responsible for viral attachment and replication are highly conserved in β-CoVs. These conserved proteins have been extensively studied as targets for preventing infection and the spread of β-CoVs. Peptides are among the most promising candidates for developing vaccines and therapeutics against viral pathogens. The immunostimulatory and viral inhibitory potential of natural and synthetic peptides has been extensively studied since the SARS-CoV outbreak. Food-derived peptides demonstrating high antiviral activity can be used to develop effective therapeutics against β-CoVs. The specificity, tolerability, and customizability of peptides can be explored to develop potent drugs against β-CoVs. However, the proteolytic susceptibility and low bioavailability of peptides pose challenges for the development of therapeutics. This review illustrates the potential role of peptides in eliciting an adaptive immune response and inhibiting different stages of the β-CoV life cycle. Further, the challenges and future directions associated with developing peptide-based therapeutics and vaccines against existing and future β-CoV pathogens are discussed.

Introduction
The increase in the emergence and re-emergence of viral respiratory diseases in recent times has gravely threatened public health and the global economy. In the last twenty years, four major viral outbreaks have been recorded, including the highly pathogenic severe acute respiratory syndrome coronavirus (SARS-CoV) [1] and Middle East respiratory syndrome coronavirus (MERS-CoV) [2]. Since December 2019, a highly contagious novel coronavirus, SARS-CoV-2, has been responsible for a respiratory illness called the coronavirus disease of 2019, i.e. COVID-19 [3,4]. A very high basic reproduction number (R0) of 2-2.5 caused an unprecedented global spread of the SARS-CoV-2 virus [5]. The COVID-19 disease has claimed millions of lives worldwide, becoming the first documented coronavirus pandemic in history [6]. The occurrence of several variants of concern (VOCs), including Alpha (B.1.1.7), Beta (B.1.351), Delta (B.1.617.2), and Omicron (B.1.1.529), has resulted in new waves of SARS-CoV-2 infections, causing considerable loss of life and economic standstill throughout the world [7]. SARS-CoV and SARS-CoV-2 belong to the B (Sarbecovirus) lineage of the β-CoV genus, while MERS-CoV is the first C (Merbecovirus) lineage β-CoV that can infect humans [8]. Human coronavirus OC43 (HCoV-OC43) and HCoV-HKU1 belong to the A (Embecovirus) lineage of β-CoVs and cause symptoms of the common cold in humans [9]. SARS-CoV-2 shares 96.2% nucleotide sequence identity with the bat CoV RaTG13, suggesting a bat origin of the virus [10,11]. Besides, SARS-CoV-2 shares about 79% and 50% sequence identity with SARS-CoV and MERS-CoV, respectively [11]. Host cell entry and replication mechanisms of all human-infecting β-CoVs are quite similar (Figure 1). The SARS-CoV infection of 2002 resulted in a fatality rate of 11%, whereas the MERS-CoV outbreak in 2012 had a fatality rate of 34% [12]. The fatality rate of SARS-CoV-2 was lower at 1.6%; however, the highly infectious nature of the virus caused it to spread rapidly around the world [13].
The (+)ve sense single-stranded RNA (ssRNA) genome of SARS-CoV-2 enables the virus to be easily detected by the intracellular toll-like receptors (TLRs), which have an affinity towards virus-associated molecular patterns (VAMPs) [14]. Human TLR4 acts as the native immune sensor for β-CoV spike proteins and activates several signaling cascades that engender a cytokine storm. This results in uncontrolled inflammation combined with direct virus-induced multi-organ damage, including acute respiratory distress syndrome (ARDS), leading to possible death [12]. New variants of SARS-CoV-2 with increased host cell binding affinity are emerging rapidly, making it difficult to curtail the spread of the virus and end the pandemic [12]. Structural and non-structural proteins of β-CoVs play a crucial role in the virus's attachment, replication, and proliferation and are thus promising targets for the inhibition of β-CoV infection [15-18]. Effective treatments are essential to combat β-CoV infections. The development of vaccines and antiviral drugs is critical to alleviating the health and economic burden of diseases caused by β-CoVs [11]. Owing to their high specificity, efficiency, tolerability, and safety, peptides have attracted increasing interest in pharmaceutical research and development (R&D) [19]. The use of natural and synthetic peptides to develop novel therapeutics promises the potential to treat various diseases [20]. Diverse sources of therapeutic peptides include microorganisms, plants, food, and host-defence antimicrobial and antiviral peptides. Peptides can also be synthesized via recombinant and chemical methods [20-24]. Food-derived antiviral peptides can play an essential role in improving the immune system's ability to combat pathogens, thereby enabling individuals to withstand pandemic outbreaks without the risk of side effects [25]. Besides, antiviral peptide-enriched functional foods can provide nutrition to and ensure the well-being of people of all ages in a struggling global economy, as required by Goal 3 (Good Health and Well-being) of the United Nations (UN) Sustainable Development Goals (https://www.un.org/sustainabledevelopment/health/) [26,27]. Several studies have reported the ability of natural and synthetic peptides to interact with critical viral proteins, indicating the potential use of these peptides in developing antiviral therapeutics [22,28]. Peptide-based vaccines containing multiple conserved immunodominant epitopes can provide broad immunity against multiple serotypes of a virus [29]. The development of such a vaccine is of high significance, as the genetic analysis of SARS-CoV-2 from different countries has revealed the diversification of the virus into several clades and the emergence of VOCs [11]. Additionally, cell-penetrating peptides (CPPs) can cross the cell membrane and carry small therapeutic molecules into cells [30]. In the present review, we discuss the different biotechnological approaches to developing peptide-based antiviral therapeutics that can be explored to combat existing β-CoV infections and similar life-threatening viral diseases in the future.

Potential therapeutic targets for combating the replication and transmission of β-CoVs
β-CoVs are enveloped, positive-sense, ssRNA viruses with a genome size of approximately 30 kb [11]. The genetic information for NSPs is encoded in two large open reading frames (ORFs), ORF 1a and ORF 1b, which amount to two-thirds of the β-CoV genome.
The remaining one-third of the genome encodes the structural spike, envelope, membrane, and nucleocapsid proteins [11]. The genomes of HCoV-OC43 and HCoV-HKU1 encode an additional protein, hemagglutinin esterase (HE) [31]. HE functions in viral attachment by binding specific receptor determinants, sialic acids, and cleaving off specific O-acetyl groups [32]. β-CoVs are zoonotic pathogens that have crossed the species barrier to infect humans [33]. The origin of SARS-CoV, MERS-CoV, and SARS-CoV-2 can be traced back to bats [31,[34][35][36], while HCoV-OC43 and HCoV-HKU1 originated from rodents [37][38][39]. Cross-species transmission of β-CoVs, facilitated by direct or indirect zoonotic contacts and sufficient genomic recombination, results in the spread of β-CoVs in humans, eventually threatening the emergence of a novel viral disease [31]. The trimeric spike protein is required for the critical function of attachment and fusion of the virus to host cell-specific receptors. Mutations in β-CoV genomes during the evolution of the virus enable them to infect the human host, importantly by modifying the receptor specificity of the spike protein [33]. Figure 1. β-CoV lifecycle and potential inhibitory roles of peptides. The targets for the inhibition of β-CoV infection and proliferation include the viral spike protein, which binds to the host cell receptors and facilitates entry of the virus into the cell cytoplasm; the host translational machinery, which is used for the synthesis of viral polyproteins; the viral proteases, which process the release of viral structural proteins and enzymes; and the viral RNA-dependent RNA polymerase (RdRp) enzyme, which facilitates the replication of viral genomic RNA. The protein receptors in humans for β-CoV entry are cell surface peptidases, including the angiotensin-converting enzyme 2 (ACE2) for SARS-CoV and SARS-CoV-2 [15,40], dipeptidyl peptidase 4 (DPP4) for MERS-CoV [8], and 9-O-acetylated sialic acid (9-O-Ac-Sia)-containing glycan-based receptors for the entry of HCoV-OC43 and HCoV-HKU1 [9]. The binding of the spike protein with host receptors triggers the pathogenesis of β-CoVs (Figure 1). Recent studies have reported that the binding of SARS-CoV-2 spike protein with ACE2 leads to the activation of TLRs, resulting in the release and proliferation of pro-inflammatory cytokines, leading to inflammation [14]. The β-CoV spike protein demonstrates binding affinity with the extracellular domains of TLRs (TLR1, TLR4, and TLR6). TLR4 has the strongest binding affinity to the SARS-CoV-2 spike protein, followed by TLR6 and TLR1, suggesting that spike protein-TLR4 interactions are responsible for the immunological manifestation of SARS-CoV-2 infection [41]. Therapeutics targeting TLRs can be helpful in preventing the spread of β-CoVs and inhibiting the inflammatory response of the host immune system to β-CoV infection. TLR agonists can be used for pre-stimulation of the host's immune system to boost immunity against infection in uninfected individuals. In contrast, TLR antagonists can be used to prevent the development of cytokine storms and inflammation in individuals infected with β-CoVs by inhibiting the binding of the viral spike protein with TLRs [42]. It is reported that the fatality rate of SARS-CoV-2 (1-1.6%) is far lower than that of both SARS-CoV (11%) and MERS-CoV (34%) [43].
However, receptor affinity analysis revealed that SARS-CoV-2 binds to the ACE2 receptor much more efficiently than SARS-CoV, highlighting the highly infectious nature of the novel virus [18,44]. In humans, receptors for β-CoVs are highly expressed in multiple organs, including the kidney, small intestine, liver, and testis, increasing an individual's vulnerability to such infections [45]. Interestingly, conserved residues exist in the receptor-binding domain (RBD) of the β-CoV S1 subunit, and inhibitors that competitively bind to these conserved residues of the RBD could efficiently block the attachment of the virus to the host receptors (Figure 1) [11]. Following the attachment of the S1 protein to the host receptor, a conformational change of the viral S2 protein is triggered, leading to fusion of the viral envelope with the receptor cell membrane [46]. The heptad repeat 1 (HR1) and HR2 domains of the S2 protein play a significant role in β-CoV fusion with the target cell, and fusion inhibitory peptides targeting these conserved domains could impede the release of the viral genome inside the host cell cytoplasm (Figure 1) [38]. Due to its critical role in viral attachment, fusion, and entry into host cells, the S protein of β-CoVs has been the primary target for developing therapeutics, including entry inhibitors, antibodies, and vaccines [11]. Replication of the β-CoV RNA is preceded by the translation of the replicase genes from the virion genomic RNA and assembly of the replicase complexes [18]. The viral proteases, e.g. 3CLpro (NSP5) and PLpro (NSP3), are responsible for the processing of the viral polyprotein into NSPs, including the RdRp complex (Figure 1) [47][48][49][50]. These proteases are excellent targets for inhibiting viral replication as their cleavage specificity is unlike that of any known human protease [11]. Proteins of the RdRp complex are translated from ORF 1a and ORF 1b into RdRp (NSP12), which, in complex with the cofactors NSP7 and NSP8, catalyzes the replication of viral genomic RNA [18]. NSP13, a helicase, is another essential replication enzyme that plays a critical role in the tropism and virulence of β-CoVs. NSP13 can be used as a therapeutic target for inhibiting viral replication [48,51]. Impeding the activity of NSPs could inhibit the replication and proliferation of β-CoVs, making these proteins promising targets for the development of inhibitor therapeutics [52,53]. High infection rates lead to genetic mutations in the pathogen, resulting in the emergence of variants with increased infectivity and the ability to evade host immune systems [54]. Since the onset of the COVID-19 pandemic, the emergence of VOCs has been associated with increased transmissibility and enhanced virulence. All the currently reported SARS-CoV-2 VOCs have mutations in the RBD and the N-terminal domain (NTD), increasing the affinity of the viral spike protein for the ACE2 receptor [55]. The Alpha variant includes spike protein changes, including deletion 69-70, P681H, S982A, N501Y, deletion 145, D614G, D1118H, T716I, and A570D [54]. In individuals infected with the B.1.1.7 variant, the risk of death was reportedly higher than in early SARS-CoV-2 infections [56]. Eight mutations in the S protein, including A701V, D215G, D80A, E484K, K417N, L18F, N501Y, and R246I, led to the emergence of the Beta variant with increased binding affinity for the ACE2 receptor [57]. The Delta variant was initially identified in December 2020 and was responsible for the deadly second wave in India.
This variant quickly became the most dominant SARS-CoV-2 VOC globally and is associated with 10 mutations in the spike protein, which gave it a superior rate of transmission and infection compared to previously known SARS-CoV-2 variants [54]. Due to more than 30 mutations in the S protein, which resulted in a sharp increase in infection cases, the Omicron variant was quickly recognized as a VOC [58]. In silico studies have suggested that the Omicron variant is ten-fold more contagious than the original virus, or around twice as infectious as the Delta variant [59]. Three-dimensional structure-based analyses of Omicron RBD-antibody interactions have indicated that the B.1.1.529 variant may be twice as likely to escape current vaccines as the Delta variant [59]. A complete experimental analysis of the Omicron variant is necessary, and understanding the effects of Omicron infection will take several weeks or even months. The emergence of new SARS-CoV-2 variants challenges the progress made in halting SARS-CoV-2 infections despite the development of vaccines against COVID-19 and mass vaccination efforts. The development of vaccines and therapeutics with potent activity against constantly mutating β-CoVs is necessary to curb the spread of such pathogens. Development of peptide-based vaccines and other immunotherapeutics against β-CoV infections Chemotherapeutic and immunotherapeutic strategies have been proposed for prophylaxis against β-CoV infections and to treat the different conditions of the diseases [60]. Chemotherapy involves the use of different drugs that prevent the spread of infection in the host by inhibiting critical stages such as adhesion, entry, and replication of the virus [60]. Drugs such as Remdesivir, Ivermectin, Heparin, and Camostat Mesylate are some of the chemotherapeutics currently being studied to inhibit SARS-CoV-2 infection [60]. However, there is a lack of evidence for curing β-CoV infections by chemotherapy, whereas immunotherapy has helped to control SARS-CoV-2 infection [60]. Immunotherapy involves the use of immunogenic compounds that interact with the host immune system to control the spread of the pathogen and prevent inflammatory responses such as cytokine storms. Immunotherapeutic strategies include vaccination and the use of immunomodulatory agents such as monoclonal antibodies, immunostimulants, and immunosuppressants [60]. Vaccines are among the most potent candidates for disease prevention, eliciting a memory immune response against the pathogen [24]. Vaccines have successfully been used to prevent infections by several viral pathogens, including poxvirus, measles virus, mumps virus, and rubella virus [24]. Among the various types of vaccines, subunit vaccines present several advantages, such as the absence of virulence factors and a relatively safe profile [61]. Additionally, antibodies elicited against inactivated whole-virion or full-length viral structural protein vaccines may lead to antibody-dependent enhancement (ADE), which results in increased viral infection of cells expressing Fc receptors [62]. The development of peptide vaccines can prevent the risk of ADE, where synthetic peptides can be used as antigenic B- and T-cell epitopes for the development of subunit vaccines against β-CoVs. Conserved viral peptides can be presented by major histocompatibility complex (MHC) molecules, leading to an adaptive immune response (Figure 2) [63].
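As a minimal sketch of the first step in such epitope-driven designs, the Python snippet below enumerates 9-mer peptides (a typical MHC class I ligand length) with a sliding window and keeps only those that are identical at the same position across a set of aligned β-CoV protein sequences. The toy sequences and the strict identity criterion are illustrative assumptions; real pipelines score conservation more gradually and then pass candidates to MHC-binding predictors of the kind cited in this section.

```python
# Sliding-window enumeration of conserved 9-mer epitope candidates.
# Sequences below are short made-up fragments for illustration only;
# a real analysis would use full, aligned S or N protein sequences.

aligned_strains = {
    "strain_A": "MFVFLVLLPLVSSQCVNLT",
    "strain_B": "MFVFLVLLPLVSSQCVNLT",
    "strain_C": "MFVFLVLLPLVSAQCVNLT",  # one substitution (S -> A)
}

def kmers(seq: str, k: int = 9):
    """Yield (position, k-mer) pairs from a protein sequence."""
    for i in range(len(seq) - k + 1):
        yield i, seq[i:i + k]

def conserved_kmers(strains: dict, k: int = 9):
    """Return k-mers identical at the same position in every strain."""
    reference = next(iter(strains.values()))
    hits = []
    for i, pep in kmers(reference, k):
        if all(s[i:i + k] == pep for s in strains.values()):
            hits.append((i, pep))
    return hits

for pos, pep in conserved_kmers(aligned_strains):
    print(f"position {pos}: {pep}")  # only windows avoiding the S->A site
```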
The vital function of viral structural proteins in fusing with and entering host cells has attracted several studies on vaccine and antiviral drug development [64]. The S1 RBD subunit of the spike protein is explicitly recognised by the host receptor, and its sequence is conserved in the downstream C-terminal domain (CTD) of the spike protein of most β-CoVs, including SARS-CoV-2, SARS-CoV, HCoV-HKU1, and MERS-CoV [64]. HCoV-OC43 is the only known human-infecting β-CoV with the RBD present in the NTD of the spike protein [65]. Similarly, the N protein of β-CoVs is a highly conserved and antigenic structural protein with multiple functions, including nucleocapsid formation, signal transduction, RNA replication, and mRNA transcription [66]. The conserved nature and critical function of the β-CoV S and N proteins could be a breakthrough in vaccine development. A recent study has identified a set of highly conserved B- and T-cell epitopes in SARS-CoV S and N proteins that can be used for designing vaccines against SARS-CoV-2 [67]. Administration of SARS-CoV S1 and N protein fragments in rhesus macaques, using adenovirus as the vector, resulted in immunization, with antibody responses against S1 and T-cell responses against the N protein [68]. A recombinant vaccine constructed using a chimeric virus based on the vesicular stomatitis virus (VSV), with the G gene replaced by MERS-CoV S, induced neutralizing antibodies and T-cell responses against MERS-CoV in rhesus monkeys after a single intramuscular or intranasal immunization dose [69]. Vaccination of rabbits with a recombinant fusion protein (RBD-Fc) containing the 193-amino-acid SARS-CoV RBD and a human IgG1 Fc fragment led to the induction of a potent antibody response with complete inhibition of SARS-CoV infection [70]. Similarly, SARS-CoV-2-neutralizing antibodies were effectively induced in mice vaccinated with an RBD-Fc developed using the SARS-CoV-2 RBD [71]. Induction of a humoral immune response and T-cell immunity was observed in albino rats vaccinated with the recombinant NTD of the MERS-CoV S protein [72]. While T-cell responses are observed for both S and N proteins, it has been widely observed that neutralizing antibodies are directed only against the S protein, specifically the RBD as the major immunodominant region [73,74]. Several subunit vaccines developed using peptide fragments of the MERS-CoV RBD have induced robust immune responses in mice, specifically when administered by the intranasal route [75][76][77][78]. The economic viability, safety, effectiveness, and ease of rapid modification and production make synthetic peptides among the best antigenic determinants for the design and development of vaccines against viral pathogens [24]. However, the need for the viral peptide to be effectively presented by MHC proteins and to invoke a subsequent B- and T-cell response makes selecting candidate peptides arduous. Specifically, the labor-intensive and expensive method of searching for immunodominant epitopes by experimental evaluation of peptides from vast libraries hinders the rapid development of antiviral therapeutics during an ongoing pandemic [79]. A previous study on virus-specific cytotoxic T lymphocyte (CTL) immunity to HIV infection reported that individuals infected with the human immunodeficiency virus (HIV) who do not progress to acquired immune deficiency syndrome (AIDS) have CTLs that target different MHC class I epitopes on HIV [80].
This observation suggests the advantages of in silico development of CTL vaccines for HIV and related viral diseases. The identification of several antigenic determinants has been achieved by prior prediction of B- and T-cell epitopes through bioinformatic analysis [79]. Interestingly, it has been reported that the SARS-CoV-2 N protein contains multiple class I epitopes with predicted MHC restrictions that are consistent with broad population coverage [81]. A robust S protein-specific CD4+ T-cell reactivity in the majority of convalescing COVID-19 cases is congruous with the role of SARS-CoV-2 structural proteins in eliciting an adaptive immune response in the host [82]. Careful selection of specific immunogenic epitopes could help in the rational development of a multi-epitope peptide vaccine against any future β-CoVs. Several immune simulation studies involving integrated immunoinformatic approaches have designed such multi-epitope vaccines for inducing high levels of B- and T-cell mediated immunity (Figure 2) [83,84]. A peptide vaccine, EpiVacCorona, developed for protective immunity against SARS-CoV-2, has been approved after clinical trials [85]. This multi-epitope vaccine consists of synthetic fragments of SARS-CoV-2 S and N proteins that, upon administration, have been claimed to elicit an antibody response against the attachment and proliferation of the virus [86]. However, further studies and experimental validation of the designed multi-epitope peptide-based vaccines in inducing immune responses and protection against β-CoV infections are necessary. Vaccines have shown strong potency against SARS-CoV-2, with a major share of the world population vaccinated with the BNT162b2, mRNA-1273, ChAdOx1, and BBV152 vaccines [87][88][89][90]. However, the emergence of SARS-CoV-2 variants with mutations in the spike protein has compelled the search for other immunotherapeutic candidates [91]. An in silico study examining the binding affinity of eight monoclonal antibodies (mAbs) against SARS-CoV-2 variants of the Alpha and Delta lineages reported that regdanvimab, cilgavimab, and tixagevimab form stable complexes with most Alpha strains, while sotrovimab, bamlanivimab, and tixagevimab showed neutralization of most Delta SARS-CoV-2 variants [91]. A chimeric antibody designed by conjugating the regdanvimab CDRH3 with the sotrovimab framework showed potential in preventing SARS-CoV-2 variants from escaping mAb-mediated neutralization [91]. Another study demonstrated the potential of using the antiparasitic drug Ivermectin for inhibition of the SARS-CoV-2 protease, replicase, and human TMPRSS2 [92]. Candidate peptides that target the host cell's translational machinery have demonstrated potent antiviral activity against β-CoV infections [93,94]. Ternatin-4, a fungal cyclic heptapeptide, is an inhibitor of eukaryotic translation elongation factor 1A (eEF1A) that has demonstrated potential interactions with several β-CoV proteins and was reported to inhibit SARS-CoV-2 (IC90 of 15 nM) in Vero E6 cells [95]. Another peptide candidate, Plitidepsin (a cyclic depsipeptide isolated from Aplidium albicans), has been reported to directly interact with and inhibit eEF1A [93]. The in vivo activity of Plitidepsin, used as a prophylactic treatment, has been associated with a two-fold reduction of SARS-CoV-2 replication in the lungs of mice [94].
Preclinical trials and randomized phase I studies of Plitidepsin in adults infected with SARS-CoV-2 have reported potent inhibition of the Alpha, Beta, Delta, Mu, and Omicron variants, with a favourable safety profile in COVID-19 patients [93]. Furthermore, Plitidepsin was found to be more effective than Remdesivir against both early and Alpha SARS-CoV-2 variants in human gastrointestinal and lung cell lines [93]. These immunotherapeutic peptides are, besides vaccines, potent candidates against β-CoV infections. Peptide-based chemotherapeutics against β-CoVs In addition to the extensive research on vaccines and immunotherapeutics against β-CoVs, researchers around the globe are scouting for chemotherapeutics to combat the current pandemic and future β-CoV outbreaks [85,96]. These potential therapeutic solutions aim to target β-CoV infection, replication, and proliferation, in addition to restoring the host's immune response against the virus [97][98][99]. Rapid analysis of therapeutic targets against β-CoVs and the design of potential drugs have been greatly aided by computational and bioinformatic methods [97,100]. Peptides are among the most explored candidates for anti-β-CoV therapeutic development due to their higher levels of safety and effectiveness compared to small molecules [101][102][103]. Several studies have reported the β-CoV inhibitory potential of various natural, recombinant, and synthetic peptides (Figure 1 and Table 1) [22,98,[104][105][106][107][108]. Antiviral peptides released upon microbial fermentation and enzymatic hydrolysis of food proteins have demonstrated potent inhibition of attachment and replication of β-CoVs in in silico studies (Figure 3). Despite these findings, further in-depth study and extensive work on peptide-based candidates are needed to develop effective therapeutics against β-CoV infections. Food-derived peptides as potential therapeutics against β-CoVs Food-derived peptides have demonstrated interactions with β-CoV structural proteins and NSPs that may prevent viral infection and proliferation [28,104]. Bioactive peptides released upon fermentation of foods exert several functionalities, including antimicrobial, antioxidant, antihypertensive, and anticancer properties, and can be explored to develop nutraceuticals and therapeutics (Figure 3) [109][110][111][112]. Fermented food-derived peptides have previously demonstrated high antiviral activity against viral pathogens [113]. The peptide KFVPKQPNMIL, derived from soy cheese produced using Lactobacillus delbrueckii WS4, demonstrated high binding affinity towards key residues of both the S1 RBD and 3CLpro of SARS-CoV-2, SARS-CoV, MERS-CoV, and HCoV-HKU1, thereby indicating a potential for inhibition of both attachment and replication of β-CoVs [104]. Such food-derived peptides, which can potentially bind multiple viral proteins, could be used as lead compounds to develop potent therapeutics against β-CoVs. Molecular docking studies of peptides derived from fermented soybeans against the SARS-CoV-2 RBD and the human TLR4/Myeloid Differentiation factor 2 (MD2) complex revealed that the peptide ALPEEVIQHTFNLKSQ, generated during soybean fermentation with Bacillus licheniformis KN1G, showed a high binding affinity with both the S1 RBD and the TLR4/MD2 complex [114]. This study indicated the peptide's potential in inhibiting viral attachment and regulating the cytokine storm induced by SARS-CoV-2 [114].
High-affinity binding with the SARS-CoV-2 S1 RBD was observed during molecular docking studies using peptides obtained from in silico gastrointestinal (GI) digestion of wheat, barley, and oat proteins [115]. In another in silico study, the peptide VPW, derived from edible mealworms, showed a superior binding affinity with the SARS-CoV-2 RBD as compared to some natural products [105]. In silico GI digestion of storage proteins from quinoa, sesame, rape, sunflower, and pumpkin seeds resulted in the release of several peptides with high GI absorption that demonstrated binding affinities towards multiple structural proteins and NSPs of SARS-CoV-2 in molecular docking studies [116]. Peptides generated upon in silico GI digestion of marine fish proteins have demonstrated a high affinity for key catalytic residues of SARS-CoV-2 3CLpro [22,117]. The tuna skeletal myosin-derived peptide EEAGGATAAQIEM demonstrated good water solubility, no toxicity, and high binding affinity for critical residues of 3CLpro, including the His41-Cys145 catalytic dyad [22]. Such food-derived peptides, capable of inhibiting viral entry and replication, can be used to develop therapeutics and prophylactics against current and future β-CoV diseases [118]. Synthetic peptide and peptide-based therapeutics against β-CoVs The emergence of several novel synthetic strategies has empowered the design and modification of peptides that offer desired therapeutic functionality against a broad spectrum of viral pathogens [85]. Synthetic peptides are cheap, easy to mass-produce, and highly pure as compared to natural or recombinant peptides [119]. Several studies have reported the antiviral activity of peptides derived from the structural proteins and NSPs of β-CoVs and from host cell receptor proteins, making them favorable candidates for the development of antiviral therapeutics [85]. In silico analyses have demonstrated that food-derived peptides show potential inhibition of β-CoV attachment, entry, replication, and proliferation that is dependent on the amino acid composition, peptide length, bioavailability, and physicochemical properties of the peptides. Inhibitors of RBD-receptor interaction and membrane fusion Impeding the interaction between the S1 RBD and the receptor protein can inhibit the attachment of the virus to its host cell. Peptides derived from the RBD of β-CoVs can competitively bind to the receptor protein and exert antiviral activity. A SARS-CoV RBD-derived peptide (amino acids 471-503) specifically blocked the RBD-ACE2 interaction, resulting in the inhibition of entry of SARS-CoV into Vero cells with an IC50 of approximately 40 μM [120]. In another study, a chemically synthesized polypeptide containing two RBD-binding motifs of ACE2, artificially linked together by glycine, led to potent inhibition of SARS pseudovirus infection in HeLa cells with an IC50 of 0.1 μM [121]. Studies have reported the inhibition of MERS-CoV entry into host cells using neutralizing mouse mAbs [122]. These mAbs were generated by immunizing mice with synthetic peptide complexes derived from the MERS-CoV spike protein [123]. Molecular docking and dynamic simulation studies of numerous synthetic peptides, based on sequences derived from the ACE2 protease domain and human antimicrobial peptides, have revealed specific and stable binding with the SARS-CoV-2 S1 RBD [28,96,124].
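Before such docking campaigns, candidate peptides are often pre-screened on simple physicochemical descriptors (length, mass, charge, hydrophobicity), since these properties influence the solubility and bioavailability discussed above. The sketch below computes these descriptors with Biopython's ProtParam module; the example peptides are ones cited in this review, but the screen itself is only an illustrative assumption about a typical workflow, not a step reported by those authors.

```python
# Quick physicochemical screen of candidate antiviral peptides
# using Biopython (pip install biopython).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

candidates = [
    "KFVPKQPNMIL",        # soy-cheese peptide discussed above
    "ALPEEVIQHTFNLKSQ",   # fermented-soybean peptide discussed above
    "EEAGGATAAQIEM",      # tuna myosin peptide discussed above
]

for pep in candidates:
    pa = ProteinAnalysis(pep)
    print(
        f"{pep:18s} len={len(pep):2d} "
        f"MW={pa.molecular_weight():7.1f} Da "
        f"GRAVY={pa.gravy():+.2f} "            # >0 = hydrophobic overall
        f"pI={pa.isoelectric_point():.2f} "
        f"charge(pH 7)={pa.charge_at_pH(7.0):+.2f}"
    )
```

Which descriptor ranges count as acceptable is left to the analyst; published pipelines pair such filters with toxicity and allergenicity predictors before docking.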
The absence of side effects such as hemolytic activity and toxicity, together with a superior binding affinity for the spike protein over ACE2, increases the favourability of using peptides to develop therapeutics against SARS-CoV-2 attachment without interfering with ACE2 activity [28]. Inhibition of the fusion of the viral spike protein with the host cell membrane has been achieved with synthetic peptides derived from the HR2 region in the S2 domain of the spike protein, which competitively bind the HR1 domain and block the formation of the fusion core [85]. Fusion peptide inhibitors derived from regions of the S2 protein outside the heptad repeats, such as the N-terminal or the pre-transmembrane domain of the SARS-CoV S2 protein, have also shown potential as antiviral agents [125]. SARS-CoV-2 demonstrates a significantly higher membrane fusion capacity than SARS-CoV; therefore, the development of SARS-CoV-2 fusion inhibitors is of significant value [107]. Several fusion inhibitory peptides derived from the HR2 domain, such as MERS-HR2P from MERS-CoV HR2 and CP1 from SARS-CoV HR2, have been previously reported [126][127][128][129]. Synthetic HR2-based peptides designed by molecular dynamics simulation of the SARS-CoV-2 fusion core have demonstrated stronger binding with HR1 than the natural state of the fusion core [106]. Researchers have recently designed HR2-based lipopeptides with the ability to inhibit the fusion of SARS-CoV-2 with the target cell [107,130]. EK1C4, a lipopeptide developed by conjugating a cholesterol molecule to the pan-coronavirus fusion inhibitor peptide EK1, exhibited 240- and 150-fold higher inhibitory activity against SARS-CoV-2 S2-mediated membrane fusion and pseudovirus infection, respectively [107]. EK1C4 demonstrated high fusion inhibitory activity against in vitro and in vivo infection of live SARS-CoV-2, HCoV-OC43, and MERS-CoV, suggesting the potential of using peptide-based fusion inhibitors for the development of therapeutics against pan-β-CoV infections [107]. Peptides targeting β-CoV NSPs The two cysteine proteases, 3CLpro and PLpro, are conserved among the major human β-CoV pathogens, SARS-CoV, MERS-CoV, and SARS-CoV-2, and are among the most critical drug targets for developing therapeutics [17,131,132]. The synthetic octapeptide AVLQSGFR forms strong hydrogen bonds with catalytic residues of SARS-CoV 3CLpro, actively inhibiting the replication of SARS-CoV in Vero cells [133]. Similarly, several synthetic peptides have been proposed to inhibit the replication of several β-CoV strains by blocking the activity of the 3CLpro protein [134][135][136][137][138]. Synthetic peptides designed using computational models have shown strong binding affinity against SARS-CoV-2 3CLpro [108,139]. Synthetic evolutionary peptides were designed using machine learning algorithms based on conserved 3CLpro motifs from diverse viral sequences of COVID-19 cases reported from Italy, the USA, India, and China [108]. Four peptides from the designed library showed strong and stable binding affinities against SARS-CoV-2 3CLpro [108]. Likewise, inhibitory peptides designed with a high degree of selectivity for SARS PLpro have demonstrated a high binding affinity for critical residues of SARS-CoV-2 PLpro [140]. The 2'-O-methylation of the viral mRNA cap, catalyzed by the 2'-O-methyltransferase (2'-O-MTase) enzyme NSP16 of β-CoVs, mimics the molecular signature that distinguishes self from non-self mRNA, which helps the virus to evade host immune systems [141,142].
NSP10, a zinc-finger protein, interacts with NSP16, and this interaction is crucial for the 2'-O-MTase activity of NSP16 [143]. Inhibition of NSP16 can therefore lead to the suppression of viral replication and the prevention of viral infection. Short synthetic peptides derived from the interaction domain of NSP10 demonstrated in vitro inhibition of SARS-CoV NSP10/NSP16 complex activity [144]. Similarly, the peptide P29, YGGASVCIYCRSRVEHPDVDGLCKLRGKF, derived from NSP10 of mouse hepatitis virus (MHV), demonstrated 2'-O-MTase inhibitory activity against SARS-CoV and MERS-CoV, with an inhibitory efficiency of >50% [145]. Researchers studying inhibition of SARS-CoV-2 RdRp have mainly focused on existing antiviral drugs owing to the advantages of repurposing strategies: they build on previous research, the candidate drug is ready for clinical trials, and it can be quickly approved by the Food and Drug Administration (FDA) [146]. Nucleoside analogues, including Remdesivir, Favipiravir, and Ribavirin, have shown potent in vitro RdRp inhibitory activity and have entered clinical trials [147]. However, decreased inhibitory activity in some participants and the emergence of adverse effects, including hepatotoxicity, respiratory toxicity, cardiovascular toxicity, nephrotoxicity, reproductive toxicity, and gastrointestinal symptoms, have prevented the approval of nucleoside analogues for use in COVID-19 patients [148]. Peptide-based inhibitors of RdRp could overcome such adverse effects owing to the safety profile of peptide therapeutics. Interestingly, in molecular docking studies, the FDA-approved synthetic peptide drug Examorelin showed strong binding efficacy with both the core and the holoenzyme of SARS-CoV-2 RdRp [147]. Clinical trials of such peptide-based candidates could lead to the development of anti-β-CoV therapeutics with minimal risk of adverse effects [149]. However, the use of advanced biotechnological tools to increase peptide bioavailability, corroborated by in vivo studies, is necessary for developing peptide therapeutics against β-CoVs [101,150]. A recent study has reported that the ACE2, TMPRSS2, and TMPRSS4 of the tree shrew are more similar to those of humans (85.47%) than those of rats (82.58%) and mice (82.81%), suggesting the potential use of tree shrew models for in vivo investigations of peptide therapeutics against β-CoV infections [40]. Using CPPs as intracellular shuttling vectors of anti-β-CoV therapeutics The hydrophobic nature of the cell membrane acts as a major obstacle to drug delivery, resulting in reduced potency of therapeutics. Both naturally derived and synthetic CPPs have been extensively investigated as carriers of membrane-impermeable molecules for intracellular drug delivery [151]. CPPs deliver their therapeutic cargo through caveolae-mediated endocytosis, micropinocytosis, or clathrin-independent endocytosis mechanisms [152]. The well-known CPP HIV-1 Tat (RRRQRRKKR) was used for intracellular transportation of antisense peptide nucleic acids that inhibit ribosomal frameshifting, resulting in the suppression of SARS-CoV replication [151]. Similarly, CPPs can be used for the intracellular transportation of therapeutic drugs targeted to suppress SARS-CoV-2 replication while maintaining the potency of the drug's inhibitory activity. Since viruses are obligate intracellular parasites, a large number of CPPs originating from viruses have been used as intracellular shuttling vectors to facilitate the transportation of cargos through the host cell membrane [30].
CPPs have several advantages over other drug delivery methods, such as a high rate of cellular permeability, higher uptake capacity, reduced cell toxicity, the capability to translocate into a diverse range of cell types, and an easy and inexpensive production process [152]. Interestingly, four novel CPPs, SCV2-CPP118, SCV2-CPP119, SCV2-CPP122, and SCV2-CPP129, have recently been identified from SARS-CoV-2 RdRp, based on in silico evaluation of physicochemical properties, protease susceptibility, uptake efficiency, membrane interaction, higher helical or sheet secondary structures, and toxicity [30]. These peptides can be used as drug delivery vectors for therapeutics against the replication of SARS-CoV-2 and other β-CoV pathogens. However, in vivo analysis of the drug-carrying capacity of these CPPs is necessary, including biotechnological modification of the peptides to overcome potential CPP drawbacks such as metabolic instability, probable allergenicity, proteolytic cleavage, and endosomal entrapment and degradation [30]. Challenges associated with the development of peptide-based therapeutics The high selectivity, efficiency, safety, and tolerability of peptides have spurred researchers to develop prudent and potent therapeutics [19]. The discovery of anti-β-CoV activities of several natural and synthetically designed peptides, targeting attachment and replication of the virus, cements the case for peptide-based prophylactics and therapeutics against COVID-19 and future pandemics. However, the development of peptide-based therapeutics suffers from certain potential drawbacks, including chemical and physical instability, susceptibility to proteolytic hydrolysis, a tendency toward aggregation, and the low bioavailability and membrane permeability of peptides [19,101,153]. Several strategies have been proposed over the years to overcome these barriers to peptide therapeutic development. Alteration of both the amide bond and the side chains can yield peptidomimetics that are resistant to proteolytic degradation [154]. The introduction of D-amino acids and cyclization of the peptide confer resistance against proteolytic degradation and increase absorption after oral administration [101,154]. For peptides not amenable to cyclization, attachment of polyethylene glycol (PEG) chains increases the absorption and systemic stability of the peptide therapeutic [155]. Cell penetration of peptide therapeutics can be improved by adding positively charged amino acids at terminal positions to facilitate passive or active transport of the peptides through membranes [156]. CPPs contain several positively charged amino acids and are widely used for the delivery of various therapeutics [157]. CPPs derived from the SARS-CoV-2 proteome can be used to efficiently deliver peptide therapeutics against COVID-19 and related diseases [30]. Alternatively, conjugation of therapeutic peptides to ligands of cell surface receptors, including cell adhesion receptors, carbohydrate receptors, lipoprotein receptors, and transferrin receptors, can facilitate better internalization of peptide therapeutics [153]. Administration by alternative delivery routes, such as intranasal delivery of the pan-β-CoV fusion inhibitory peptide EK1C4, increases the stability and bioavailability of peptide therapeutics [107]. The inclusion of such strategies can help to develop safe, efficient, and effective peptide-based prophylactics and therapeutics against present and future β-CoV-associated diseases.
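A minimal sketch of the kind of sequence-level heuristic used when triaging CPP candidates is given below: it computes the fraction of positively charged residues, reflecting the observation above that CPPs are typically arginine/lysine-rich. The example sequence is the HIV-1 Tat fragment cited earlier in this section; the 30% cationic-fraction rule of thumb is an assumption for illustration, not a published cut-off.

```python
# Heuristic triage of cell-penetrating peptide (CPP) candidates by
# cationic residue content at physiological pH. The 0.30 cut-off
# is an illustrative assumption, not a validated threshold.

CATIONIC = set("RK")  # arginine and lysine carry +1 near pH 7

def cationic_fraction(peptide: str) -> float:
    return sum(res in CATIONIC for res in peptide) / len(peptide)

def looks_cpp_like(peptide: str, min_fraction: float = 0.30) -> bool:
    return cationic_fraction(peptide) >= min_fraction

tat = "RRRQRRKKR"  # HIV-1 Tat basic domain cited above
print(tat, f"cationic fraction = {cationic_fraction(tat):.2f}",
      "CPP-like" if looks_cpp_like(tat) else "unlikely CPP")
# -> cationic fraction = 0.89, CPP-like
```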
Concluding remarks and future perspectives Studies have reported a wide variety of anti-β-CoV activities of peptides over the past two decades. Specific peptides could be synthesized to develop vaccines and therapeutics that are effective against mutating viral pathogens. However, large-scale peptide synthesis is expensive, and specific challenges need to be addressed before peptide-based therapeutics can be realized. The β-CoV inhibitory activities of many peptides have been reported only in in silico simulation studies, so it is vital to validate the therapeutic activities of these peptides in in vitro and in vivo studies. The relatively large size of peptides makes them susceptible to proteolytic degradation, resulting in low bioavailability and short half-lives of peptide-based drugs. However, several modification strategies can improve the stability and activity of therapeutic peptides. In-depth research is required to design potent peptides with superior efficacy and bioavailability. Peptide therapeutics hold promise for combating β-CoV pathogens and related viral diseases. Author contributions Rounak Chourasia performed the literature review and wrote a major section of the manuscript, tables, and figures. Srichandan Padhi contributed to the section on the development of peptide-based vaccines against β-CoVs. Loreni Chiring Phukon and Md Minhajul Abedin contributed to the section on using CPPs as intracellular shuttling vectors of anti-β-CoV therapeutics. Ranjana Sirohi provided comprehensive reviews and was involved in manuscript revisions. Sudhir P. Singh and Amit Kumar Rai planned and designed the structure of the review, and corrected, revised, and finalized the manuscript.
2022-04-08T06:22:44.364Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "087090adf8398477c4eefb0e3b5186793560b73d", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2022.2060453?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "648b7ebea2a5d472f706520e47ffb2e6b21d4521", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
207949029
pes2o/s2orc
v3-fos-license
PROCESSES OF EXCLUSION AND NEW RURALITY: COMMUNITY CHANGE IN THE LIVES OF RURAL OLDER PEOPLE Abstract Rural settings are sites of rapid change. Now sharing many of the processes that characterise their urban neighbourhood counterparts, older people's rural communities, even those in remote locations, are being altered by forces driven by gentrification and population churn. While the potential for displacement is apparent, the extent to which older people respond to these processes is not well understood. The degree to which these shifting contexts produce new exclusionary mechanisms for older people to contend with and new opportunities for them to exploit has yet to be sufficiently explored. This paper aims to address the intersection of exclusion and community change in the production of a new rurality for older people. The analysis will 1) present an overview of the relevant international literature, and 2) highlight the current and emerging exclusionary processes that are impacting on the lives of older people using data from individual narratives and time-use diaries. LIVING IN RURAL CONTEXTS: TOWARD A CRITICAL INTERDISCIPLINARY PERSPECTIVE ON RURAL AGING Chair: Kieran Walsh, NUI Galway, Galway, Ireland Co-Chair: Mark Skinner, Trent University, Peterborough, Ontario, Canada Despite a growing focus on rural ageing, international literature in this field remains underdeveloped in critical and interdisciplinary perspectives. Reflecting traditional divisions across geographic, gerontological and health literatures, how we understand experiences of growing older in rural settings can still be characterised by a narrow, applied approach. This has implications for our capacity to disentangle multifaceted lived realities from rural contexts, and macro socio-economic and structural environments. Questions then remain about the ways in which the study of rural ageing needs to develop to direct policy, research and practice agendas to be a more critical reflection of these complexities. This symposium aims to draw together interdisciplinary critical perspectives on ageing and rurality as a means to advance this development. It will consider different theoretical approaches and major cross-cutting challenges in relation to rural ageing. Burholt and Scharf will examine how critical gerontology has raised awareness of the heterogeneity of rural ageing across social justice elements of demography, resources, recognition and representation. Keogh and Walsh address these same elements in relation to the empirical intersection of exclusion and change in the production of a new rurality for older people. Cutchin and Rowles present a pragmatist theoretical perspective to encapsulate the essence of rural integration within an ever-changing milieu. Poulin et al. offer a critical approach to rural gerontological health that emphasizes intersectionality in the formation and development of older adult health. Herron and Skinner explore the intersectional construction of dementia and mental health in rural settings for older adults. CRITICAL SOCIAL GERONTOLOGY AND RURAL AGING Vanessa Burholt, 1 and Thomas Scharf 2, 1. Swansea University, Swansea, United Kingdom, 2. Newcastle University, Newcastle, England, United Kingdom This paper examines the extent to which critical gerontology has raised awareness of the heterogeneity of rural ageing in High Income Countries (HICs) and compares this to our knowledge of the issues associated with rural ageing in Low to Middle Income Countries (LMICs).
We will draw on Nancy Fraser's social justice framework to summarize key issues around: (1) Demography (such as globalization, urbanization, counter-urbanization and rural population ageing); (2) Resources (individual material and social resources; community resources such as access to services); (3) Recognition (social status, cultural visibility through social participation and cultural worth through valued social roles); (4) Representation (in social, health and rural development policies; and in private sector and NGO approaches). We argue that an intersectional approach that takes into account location and context (structural/economic/political) alongside other dimensions of oppression and/or privilege can provide a better understanding of the experience of ageing in rural areas. PROCESSES OF EXCLUSION AND NEW RURALITY: COMMUNITY CHANGE IN THE LIVES OF RURAL OLDER PEOPLE Sinead Keogh, 1 and Kieran Walsh 2, 1. National University of Ireland Galway, Galway, Ireland, 2. Irish Centre for Social Gerontology, Galway, Ireland Rural settings are sites of rapid change. Now sharing many of the processes that characterise their urban neighbourhood counterparts, older people's rural communities, even those in remote locations, are being altered by forces driven by gentrification and population churn. While the potential for displacement is apparent, the extent to which older people respond to these processes is not well understood. The degree to which these shifting contexts produce new exclusionary mechanisms for older people to contend with and new opportunities for them to exploit has yet to be sufficiently explored. This paper aims to address the intersection of exclusion and community change in the production of a new rurality for older people. The analysis will 1) present an overview of the relevant international literature, and 2) highlight the current and emerging exclusionary processes that are impacting on the lives of older people using data from individual narratives and time-use diaries. Rural aging as we have conceived of it in the gerontological literature of the past 50 years no longer exists, if it ever did. In this presentation, we contribute toward a reframing of the discourse on rural aging through a critique of established views of rural aging as an ecological, cultural, and phenomenological experience. We argue that each view is limited in its ability to encapsulate the essence of rural living and community. Our critique provides a context for a dynamic perspective on rural aging that embraces the situational uniqueness of each rural environment. We introduce that perspective, based in John Dewey's philosophy, and grounded in the idea of situationally defined manifestations of place integration within an ever-changing milieu. We conclude with a discussion of key implications, including how this perspective reshapes the roles of researchers and older rural residents in the process of ongoing rural gerontological inquiry. LEVERAGING CRITICAL RURAL GERONTOLOGY TO IMPROVE RURAL GERONTOLOGICAL HEALTH Laura Poulin, 1 and Neil Hanlon 2, 1. Trent University, Peterborough, Ontario, Canada, 2. University of Northern British Columbia, Prince George, British Columbia, Canada A critical approach in rural gerontology has led to a better understanding of the complex interplay between older adults' unique aging experiences and the multidimensional and dynamic communities in which they live.
The evolution of critical rural gerontology will be explored, outlining why a similar approach is needed in rural gerontological health. In particular, rural gerontological health literature must expand beyond a deficit focus that homogenizes older adult health experiences and recognize the complexities of negotiating older adult health within multidimensional rural spaces. Inherent in this approach is recognizing the intersectionality of older adult health as well as the need to study rural gerontological health as an experience enhanced and inhibited by interactions within and across formal health services, informal social services and informal care. This approach will contribute to innovations in policy and practice addressing the burgeoning interest in how to effectively care for older adults in rural settings. Rachel Herron 1, 1. Brandon University, Manitoba, Canada People living with dementia can experience significant barriers to meaningful participation in their communities, particularly in underserviced rural and small-town settings. Drawing on a multi-method pilot study employing observations, diaries, focus groups and interviews in rural Canada, we examine the potential of an innovative dance program, developed by Baycrest Health Sciences and Canada's National Ballet School, to transform the experiences of people living with dementia and the rural places in which they live. Our findings identify moments, processes, and places of transformation throughout the program, including moments of individual self-expression; changing interactions with staff, volunteers, and carers; and changing relationships with home and community. We argue that art-based programs can challenge dominant assumptions about people living with dementia and contribute to the creation of more just health and social care in rural places. In doing so, we illustrate the value of critical arts-based approaches to aging in rural places. Scientists from many disciplines have recently suggested changes in research practices, with the goal of ensuring greater scientific integrity. Some suggestions have focused on reducing researcher degrees of freedom to extract significant findings from exploratory analyses, whereas others concern how best to power studies and analyze results. Yet others involve ensuring that other interested researchers can easily access study materials, code, and data, to help with re-analysis and/or replication. These changes are moving targets, with discussions and suggested practices ongoing. However, aging researchers have not yet been major participants in these discussions, and aging journals are just starting to consider open science policies. This symposium, sponsored by the GSA Publications Committee, will highlight transparency and open science practices that seem most relevant to aging researchers, discuss potential challenges to implementing them as well as reasons for doing so, and will consider how aging journals may implement these practices. Open science practices to be considered include: preregistration, open data, open materials and code, sample size justification and analytic tools for considering null effects. Presenters from a range of areas of aging research (lab, secondary data, qualitative) will show examples of open science practices in their work and will discuss concerns about, and challenges of, implementing them. Then, editorial team members will discuss the implications of these changes for aging journals.
Finally, discussant Jon King will give NIA's perspective on the importance of encouraging open science practices in the aging field. OPEN SCIENCE IN GERONTOLOGY: IMPLICATIONS FOR PUBLISHING Derek M. Isaacowitz 1, 1. Northeastern University, Boston, Massachusetts, United States One big push in open science is to change journal practices to encourage a more transparent and replicable scientific record. I will start by considering why these issues are important from the perspective of a journal editor. The Transparency and Openness Promotion Guidelines were
2019-11-14T17:08:46.179Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "6bb3e995a361a847d88a495bba75d76a5f061788", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/innovateage/article-pdf/3/Supplement_1/S398/32999683/igz038.1475.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d56dac7ae4770585437c1c6f4be7b57d8674adf7", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
16290488
pes2o/s2orc
v3-fos-license
Thermal Imagery-Derived Surface Inundation Modeling to Assess Flood Risk in a Flood-Pulsed Savannah Watershed in Botswana and Namibia The Chobe River Basin (CRB), a sub-basin of the Upper Zambezi Basin shared by Namibia and Botswana, is a complex hydrologic system that lies at the center of the world's largest transfrontier conservation area. Despite its regional importance for livelihoods and biodiversity, its hydrology, controlled by the timing and relative contributions of water from two regional rivers, remains poorly understood. An increase in the magnitude of flooding in this region since 2009 has resulted in significant displacements of rural communities. We use an innovative approach that employs time series of thermal imagery and station discharge data to model seasonal flooding patterns, identify the driving forces that control the magnitude of flooding, and identify the high population density areas that are most at risk of high magnitude floods throughout the watershed. Spatio-temporal changes in surface inundation determined using NASA Moderate Resolution Imaging Spectroradiometer (MODIS) thermal imagery (2000-2015) revealed that flooding extent in the CRB is extremely variable, ranging from 401 km2 to 5779 km2 over the last 15 years. A multiple regression model of lagged discharge of surface contributor basins and flooding extent in the CRB indicated that the best predictor of flooding in this region is the discharge of the Zambezi River 64 days prior to flooding. The seasonal floods have increased drastically in magnitude since 2008, causing large populations to be displaced. Over 46,000 people (53% of the Zambezi Region population) are living in high magnitude flood risk areas, making the need for resettlement planning and mitigation strategies increasingly important. Introduction For centuries, populations worldwide have relied on seasonal flooding for their livelihoods. This dependence has resulted in settlements being built on riverbanks and floodplains in order to utilize the water sources and fluvial soils provided by seasonal floods. However, with these resources come risks. Seasonal flooding can be unpredictable in its timing and extent, threatening the lives of people who depend upon these floods. It is estimated that one billion people are living in the potential path of a 100-year flood (whether coastal or inland), and this number is expected to double by 2050 due to climate change, sea level rise and deforestation [1]. Floods affect the lives of over 500 million people per year worldwide and result in death, disease, homelessness, and other devastating losses. The majority of these communities contain some of the poorest people in the world, and disaster warning and preparedness are nearly absent [1]. The increasing frequency and intensity of seasonal floods will continue to drive communities out of their homes and lead to environmental refugees and the need for resettlement [2].
In order for successful resettlement plans to be made, areas at the highest risk from inundation must be properly identified and predicted. Remote sensing offers the ability to map and analyze both seasonal and inter-annual flood dynamics at a regional scale, making it an indispensable tool for flood monitoring. Temporal changes in flooding extent can be monitored based on remotely sensed imagery using a variety of data collection methods, such as optical [3], thermal, and RAdio Detection And Ranging (RADAR) imagery [4]. Further, optical data in particular have often been utilized for flood detection, as techniques for thresholding and ratioing between channels, supervised and unsupervised classifications, and additional indices, including the Normalized Difference Water Index (NDWI), Land Surface Water Index (LSWI) and Enhanced Vegetation Index (EVI) [5][6][7][8][9][10], extend the capabilities of multispectral data. The combination of multispectral images and data from active sensors (including RADAR and LiDAR) presents an advantage in flood detection that is commonly utilized by the remote sensing community [11,12]. Optical imagery of flooded and wet areas may contain a great deal of information that can be used to monitor nutrient fluxes, suspended sediment, and chlorophyll concentrations. This additional information, however, may confound the detection of floodwaters if a researcher is unfamiliar with local water conditions. One of the widely utilized water indices, the NDWI [13], can mistake land noise for water, though the modification of this index using a middle infrared band can improve results [14]. In addition, surrounding land surfaces may be mistaken for water when the spectral signature in one of the infrared bands is similar. Pricope et al. [15] mapped flooding extents, but in some cases, fire scars caused the amount of flooding to be underestimated. The use of thermal data, however, may not result in such inaccuracies. Water's high thermal inertia and emissivity result in areas of homogeneous surface temperatures in flooded regions. In addition, thermal data generally detect only surface temperatures, meaning that suspended particles, which floodwaters often carry in high loads, are less likely to influence the delineation of surface waters. The properties of water allow for increased potential with thermal imagery because night and day images can be compared using the diurnal differences of land and water surface temperatures. This technique has been used in many regions worldwide, including the Everglades [16], the continent of Africa [17], an agricultural region of Germany [18], the Sudd wetlands of South Sudan [19], and the Xinjiang Autonomous Region of China [20]. The MODIS Terra and Aqua sensors are able to collect thermal data at a sufficiently high temporal resolution for night and day differencing techniques. Khan et al. [7] and Sakamoto et al. [6] demonstrated that MODIS can successfully distinguish between flooded and non-flooded pixels. The lower spatial resolution that MODIS offers is sufficient for mapping larger regional areas [21], especially transnational water systems [19] in wetland areas [22]. The diurnal temperature difference is extremely detectable in semi-arid areas, such as southern Africa, where the surrounding landscape is devoid of localized water or heavy vegetation that may influence flood detection.
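A minimal numpy sketch of the day/night differencing idea described above is given below: because water's high thermal inertia damps its diurnal temperature swing relative to dry land, pixels whose day-minus-night land surface temperature (LST) difference falls below a threshold are flagged as inundated. The synthetic arrays and the 10 K threshold are illustrative assumptions; the study's actual MODIS processing chain and thresholds are not reproduced here.

```python
# Day/night land-surface-temperature (LST) differencing for water
# detection, following the diurnal-inertia logic described above.
# Arrays are synthetic stand-ins for MODIS day/night LST grids (kelvin);
# the 10 K threshold is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
day_lst = 300 + 15 * rng.random((4, 4))      # dry land heats strongly by day
night_lst = day_lst - 18                     # land cools sharply at night
day_lst[1:3, 1:3] = 298.0                    # "flooded" pixels: muted swing
night_lst[1:3, 1:3] = 295.0

diurnal_range = day_lst - night_lst          # small over water, large over land
water_mask = diurnal_range < 10.0            # assumed threshold (kelvin)

print("flooded pixels:", int(water_mask.sum()), "of", water_mask.size)
flood_extent_km2 = water_mask.sum() * 1.0    # MODIS LST pixel ~1 km x 1 km
print(f"approximate inundated area: {flood_extent_km2:.0f} km2")
```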
Fertile alluvial soils, fishery resources and grazing lands in the Upper Zambezi support one of the most densely populated areas in southern Africa [23]. Although seasonal floods are essential to the ecology of this region, floods of extreme magnitude cause the rapid migration of people and animals alike, increasing human-wildlife conflict and leading to temporary or permanent displacements. A series of large floods have occurred in the Zambezi region since 2009, drawing the attention of relief agencies and placing additional burdens on local communities as well as land and water managers [10]. Areas that had been dry for decades have recently been experiencing floods of extreme magnitudes, exposing unsuspecting villages to a variety of threats [24]. Prior research has attempted to identify the causes of the increase in high magnitude floods in recent years in the Upper Zambezi region. McCarthy and Gumbricht [5] used NOAA Advanced Very High Resolution Radiometer (AVHRR) data to estimate inundation and concluded that changes in channel distribution and the resulting spatial extent of the flooded area can be attributed to external climate changes and El Niño/Southern Oscillation (ENSO) effects. This is further supported by Wolski et al. [25], who found that the annual flood patterns in the Okavango basin have been changing over past decades and examined the extent to which extreme floods, such as those between 2009 and 2011, were caused by greenhouse-gas-driven climate change. Wolski et al. [25] determined that the probability of such extreme floods would be lower in a climate without anthropogenic greenhouse gas forcings. A better understanding of the flooding patterns and the driving forces of these floods can reveal which regions are most susceptible to high magnitude floods and can help determine whether relocation actions are required from the government. Though there are risks, a flood can also present short-term economic opportunities, which may make it difficult to convince communities to move from their homes. Fishing, for instance, becomes possible in areas that are dry under normal conditions [26]. Previous research in the Chobe River Basin (CRB henceforth) shows that it receives pulses of floodwater from various sources throughout the year, but the quantity and impact of each of these sources is not fully understood and results have been somewhat conflicting. The goals of this research are to reveal the surficial hydrologic connections of the Chobe with the Kwando and Zambezi Rivers, quantify the influxes to the Chobe and inundation patterns throughout the year, and identify high population density areas that are most at risk. Specifically, this research will address the following questions: (1) Can thermal imagery be utilized successfully to determine the seasonal and inter-annual patterns of inundation in the Chobe River Basin, and how have they changed over the past 15 years? (2) What are the driving forces that control the magnitude of flooding of the Chobe River, do different variables have more of an impact on the flooding magnitude than others, and how do these variables differ between years with average flooding and years with unusual (high or low) flooding magnitude? (3) Where in the Chobe River basin are large populations of people most susceptible to high magnitude floods?
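To make question (2) concrete, a minimal sketch of the lagged-discharge regression approach is given below: flooded area is regressed on upstream station discharge shifted by candidate lags, and the lag with the best fit is retained. The synthetic series and the statsmodels OLS call are illustrative assumptions; the study's own model selection, which identified Zambezi discharge at a 64-day lag as the best predictor, used real station records and MODIS-derived extents.

```python
# Lagged-discharge regression sketch: which upstream-discharge lag best
# predicts flooded area? Series are synthetic; the real analysis used
# station discharge and MODIS-derived inundation extents.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, true_lag = 400, 64                         # daily records; planted 64-day lag
discharge = (1000 + 300 * np.sin(np.arange(n) * 2 * np.pi / 365)
             + 50 * rng.standard_normal(n))   # upstream discharge (m3/s)
flood = np.empty(n)
flood[true_lag:] = 0.8 * discharge[:-true_lag]
flood[:true_lag] = flood[true_lag]            # pad warm-up period
flood += 40 * rng.standard_normal(n)          # flooded-area proxy (km2)

best = max(
    range(1, 121),                            # candidate lags, 1-120 days
    key=lambda lag: sm.OLS(flood[lag:], sm.add_constant(discharge[:-lag]))
                      .fit().rsquared,
)
print("best-fitting lag (days):", best)       # recovers ~64 on this toy data
```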
Study Area

This study focuses on the flooding patterns of the Chobe River Basin (CRB), a ~4000 km² sub-basin of the Upper Zambezi in Botswana and Namibia (Figure 1). The basin boundaries and drainage area of the CRB, which contains the Mamili and Zambezi wetlands, the Chobe and Linyanti channels and floodplains, and Lake Liambezi, were delineated previously by Pricope [9] using a 10-m spatial resolution DEM, a 1-m spatial resolution orthophotograph, AVHRR NDVI and MODIS images, and permanent water training samples.

Together with the Okavango Delta, the Chobe River lies at the center of the Kavango-Zambezi Transfrontier Conservation Area (KAZA), the largest transfrontier conservation area in the world, spanning more than 170,000 km² and home to the largest elephant population on earth, as well as many other migratory wildlife populations [27]. This conservation area provides a wildlife corridor and contains vital water sources for agriculture and grazing in a water-limited ecosystem [28]. Increased flooding decreases the amount of land for human use and increases human-wildlife conflict (Elvis Simba, KAZA, 23 May 2014, personal communication [29]).

The annual flood pulses of the Chobe River are influenced by multiple factors, including the timing and relative contributions of runoff from the Zambezi, Kwando and Okavango Rivers, which in turn are determined by the precipitation and evapotranspiration rates of each of these basins. Peak flow in the Okavango, Kwando and Zambezi Rivers occurs at slightly different times of the year, controlling the flow rate and flooding extent of the Chobe River. The multiple influxes of water to the river can cause an unusual back-flowing regime in the Chobe River channel during portions of the year [9]. The Zambezi River originates in northwest Zambia before flowing through east-central Angola, capturing runoff from the Angolan highlands and the Kabompo and Lungwebungu Rivers.
The Kwando River, also commonly spelled Cuando, originates in the central plateau of Angola, draining approximately 22% of the Upper Zambezi region [30]. The discharge patterns of the Zambezi and Kwando Rivers are controlled by regional precipitation, which is determined by the migration of the inter-tropical convergence zone (ITCZ) and sea surface temperatures in the Indian and Atlantic Oceans [28]. The entire Zambezi catchment receives an average of 990 mm of rainfall per year (Figure 2). Although this is a relatively large amount of rainfall, the majority of the precipitation falls in less than six months. The Kwando/Chobe sub-catchment, which provides water to the Chobe floodplain, receives about 800 mm/year of rainfall, with most of it falling between October and March [30]. Precipitation modeling and observational studies have suggested a drying trend over recent decades throughout Africa [31], though more localized dry and wet patterns over the continent have been documented. The precipitation data shown in Figure 2 reflect a weak drying trend, significant at 90% confidence according to a Seasonal Mann-Kendall test computed over the study area sub-basin.
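As an illustration of the trend test referenced above, the sketch below runs a Seasonal Mann-Kendall test at the 90% confidence level on a synthetic monthly rainfall series. It assumes the third-party pymannkendall package and uses made-up data, not the TRMM series analyzed here:

```python
import numpy as np
import pymannkendall as mk  # assumed third-party package: pip install pymannkendall

# Synthetic monthly basin-average rainfall (mm) for 1998-2014, oldest first.
rng = np.random.default_rng(0)
months = np.arange(17 * 12)
monthly_rain = np.clip(80 + 60 * np.sin(2 * np.pi * months / 12)
                       - 0.05 * months + rng.normal(0, 15, months.size), 0, None)

# The seasonal variant compares like months across years, so the strong annual
# cycle does not masquerade as a trend; alpha=0.10 corresponds to 90% confidence.
result = mk.seasonal_test(monthly_rain, period=12, alpha=0.10)
print(result.trend, result.p, result.slope)  # e.g., 'decreasing' indicates drying
```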
Thermal Imagery (Land Surface Temperature)

A series of level-3 MODIS Land Surface Temperature (LST) and Emissivity products from 2000 to 2014 was used to create a time series of annual floods. LST is a product of the MODIS instrument onboard the EOS (Earth Observing System) AM (Terra) and PM (Aqua) platforms [32]. Land Surface Temperature & Emissivity 8-Day L3 Global 1 km (MOD11A2) images covering 2000 to 2014 were acquired in 2014 through the USGS EarthExplorer (EE) tool. MOD11A2 8-day data are composited from the daily 1-km LST product (MOD11A1) and stored on a 1-km sinusoidal grid as the average values of clear-sky LST during an 8-day period. These images are validated to stage 2, indicating that their accuracy has been assessed via ground truth over numerous locations and time periods. Land surface temperature is one of nine scientific datasets contained within the MODIS L2 LST product [33].

LST is estimated from space with an algorithm created by Wan and Li (1997) that uses day/night pairs of thermal infrared (TIR) data in seven MODIS bands [32]. The accuracy specification for MODIS LST is 1 K at 1 km resolution under clear-sky conditions [32]. Atmospheric and emissivity effects for several land cover types are corrected using a view-angle-dependent split-window LST algorithm.

Cloud coverage made images from November to March unusable. This does not act to the detriment of this method because, in this region, changes in flooding extent do not coincide directly with the rainy season. This method can be further utilized in transboundary river basins where the majority of the floodwater originates outside the delta.
Discharge and Stage Data

A key dataset for this research is the discharge and river stage data collected at multiple stations throughout the Chobe River Basin. The Department of Water Affairs office in Kasane, Botswana provided daily discharge data for the Chobe, Okavango and Boteti Rivers, monthly channel depths of the Chobe River at Kasane from 1971 to present, and weekly water situation reports for major dams and rivers in the Chobe/Zambezi, Limpopo and Okavango Basins (Figure 3). Daily water level and daily flow data for the Okavango, Zambezi and Kwando Rivers from 2007 to present were received from the Department of Water Affairs in Windhoek, Namibia.

Precipitation

To account for precipitation patterns in the Chobe River Basin, as well as the surrounding watersheds, data from the Tropical Rainfall Measuring Mission (TRMM) 3B43 product were used. The dataset from 1998 to 2014 was downloaded using the Mirador Data Access tool on the NASA website. Stacks of monthly and annual rainfall totals were derived and converted to millimeters. The TRMM data, with a spatial resolution of 0.25 × 0.25 degrees, were subset to both the CRB and the larger surrounding watersheds to gain insight into the influence of local precipitation patterns on flooded area. TRMM precipitation data have proven successful in previous studies, such as Curtarelli et al. [34], which showed the TRMM 3B43 data to be in sound agreement with reference data collected at rain gauge stations. Even in studies that show TRMM data to be somewhat unreliable, the southern region of Africa exhibits a lower bias than other regions, and the 3B43 product was shown to be the most reliable [35].

Population and Ancillary Data

Gridded population datasets created by the WorldPop Project were used to identify areas with high population density that are at the highest risk of being exposed to a high magnitude flooding event. The population grids for 2000, 2011 and 2015, with a spatial resolution of 0.000833333 decimal degrees, or approximately 100 m, were created using a spatial model to disaggregate census counts and estimate the number of people within each grid cell of the area [36].
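Converting the 3B43 product to the rainfall totals described above is a simple unit conversion. The following is a minimal sketch, assuming the 3B43 convention of storing a monthly-mean rain rate in mm/hr; the grid values are placeholders:

```python
import calendar
import numpy as np

def trmm_3b43_monthly_mm(rate_mm_per_hr: np.ndarray, year: int, month: int) -> np.ndarray:
    """Multiply the monthly-mean rain rate by the hours in that month
    to obtain a monthly rainfall total in millimeters."""
    hours = 24 * calendar.monthrange(year, month)[1]
    return rate_mm_per_hr * hours

# Placeholder 0.25-degree grid for one month:
rate = np.full((40, 40), 0.35, dtype=np.float32)      # mm/hr
jan_2009_mm = trmm_3b43_monthly_mm(rate, 2009, 1)     # monthly total, mm

# Annual total: sum the twelve monthly grids for the year.
annual_mm = sum(trmm_3b43_monthly_mm(rate, 2009, m) for m in range(1, 13))
```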
MODIS Pre-Processing and Image Differencing

The MODIS Conversion Toolkit (MCTK) was used to extract the daytime and nighttime temperature bands from the original LST Hierarchical Data Format (HDF) stack. During the conversion process, the MCTK automatically applies the scaling factor of 0.02 to convert the stored values to degrees Kelvin. Bands 1 (LST_Day_1 km) and 5 (LST_Night_1 km) were selected to be stacked in the output image. The output file is a standard grid, which maintains the native map information and sinusoidal projection [37]. All pixels with bad data were filled with NaN (Not a Number) and were later masked out when located within an obvious body of water.

Similar to the approach used in Ordoyne and Friedl [16], surface water was extracted from the MODIS images based on diurnal temperature differences of land and water (Figure 4A). The differencing method exploits the different diurnal temperature fluctuations of land and water to identify areas of the landscape covered by a significant amount of floodwater. Once the MODIS images were processed in the MCTK, Interactive Data Language (IDL) was used to difference the 8-day composite daytime and nighttime bands using band math.
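The scaling and differencing steps translate directly into array operations. The following is a minimal Python sketch of the same logic (the study itself used MCTK and IDL); the 0.02 scale factor comes from the MOD11A2 product description above, while the fill-value handling and the synthetic digital numbers are illustrative assumptions:

```python
import numpy as np

SCALE = 0.02  # MOD11A2 scale factor: digital number * 0.02 -> degrees Kelvin
FILL = 0      # pixels outside the valid LST range are treated as missing here

def to_kelvin(dn: np.ndarray) -> np.ndarray:
    """Apply the MOD11A2 scale factor and set fill values to NaN."""
    lst = dn.astype(np.float32) * SCALE
    lst[dn == FILL] = np.nan
    return lst

# Synthetic 8-day composite digital numbers for one tile:
day_dn = np.random.randint(14000, 16500, (1200, 1200)).astype(np.uint16)
night_dn = np.random.randint(13500, 15000, (1200, 1200)).astype(np.uint16)

# Water damps the diurnal cycle, so flooded pixels show a small day-night
# difference relative to the surrounding dry land.
diurnal_diff = to_kelvin(day_dn) - to_kelvin(night_dn)
```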
Image Threshold Segmentation

A fixed threshold for every image of the time series was not an appropriate approach because of the variation in diurnal temperatures across seasons. Image-specific thresholds proved to be a much more effective method because temperature thresholds change throughout the year along with local air temperature and the differential specific heat of land and water. Otsu thresholding is an unsupervised and nonparametric method of threshold selection from a gray-level histogram [38]. Advantages of this method include its simplicity, its range of applications and its automatic and stable threshold selection [38]. The Otsu method "maximizes the ratio of the between-class variance to the within-class variance" [39]; thus, in our case, pixels with values below the determined threshold are regarded as water while pixels above the threshold are background, or land. A total of 386 differenced images were segmented in IDL using Otsu thresholding (Figure 4). After being processed, each image was visually inspected and compared to the original differenced image to verify the results. Subsequently, a rule-based classification using the determined threshold was performed to identify the flooding extent on a case-by-case basis.
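A minimal sketch of this image-specific segmentation, using scikit-image's threshold_otsu in place of the IDL routine used in the study; the water-below-threshold rule follows the description above, and diurnal_diff stands for one differenced LST image:

```python
import numpy as np
from skimage.filters import threshold_otsu  # scikit-image implementation of Otsu's method

def segment_water(diurnal_diff: np.ndarray) -> np.ndarray:
    """Pick an image-specific Otsu threshold on the finite day-night differences;
    pixels below it (small diurnal swing) are classed as water."""
    valid = diurnal_diff[np.isfinite(diurnal_diff)]
    t = threshold_otsu(valid)
    return diurnal_diff < t  # True = water, False = land (NaN compares False)

# water = segment_water(diurnal_diff)   # one boolean mask per 8-day composite
# flooded_km2 = int(water.sum())        # roughly 1 km2 per MODIS LST pixel
```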
Inundation Mapping Validation

The successful application of remote sensing techniques over large areas requires accurate ground truthing [40]. Field work was conducted in May and June of 2014 in Namibia and Botswana to obtain ground control points to validate the water signal obtained from MODIS LST images. The ground control points were collected using opportunistic sampling across the floodplain at all major access points along different types of roads and even by boat. Standing water, sites with homogeneous vegetation and relative soil moisture data were collected to train the signal obtained from the thermal imagery at the scale of this study area. Data collected at each site included topography, inundation, soil type, soil color, soil moisture and vegetation type. A total of 69 training samples were collected throughout the basin to validate the thresholding technique, allowing us to create classes of inundation frequency. The derived inundation extents are compared with these training samples in the results section.

Regression Analysis of Flood, Discharge and Precipitation Patterns

Regression models are often used to relate hydrological response to different catchment descriptors that drive flooding in a given region [41]. To understand the lag between discharge in the Zambezi and Kwando Rivers and the resulting flooding extent, the daily discharge data (m³/s) were lagged back in nine 8-day time steps. The lag was set at 8-day intervals to correspond to the 8-day MODIS LST data used to derive flooded area. To determine which variables were the most influential on the flooding extent in the CRB, general linear models were fit using a stepwise procedure. The region was subdivided into six sub-regions (Figure 5) to enable independent analysis of the different hydrologic features within the basin. The derived flooded area for each of the sub-regions was used as the dependent variable and the lagged discharge of the Zambezi and Kwando Rivers as the independent variables. To determine the best predictor of the discharge of the Zambezi River, monthly precipitation data for the surrounding watershed were lagged back in three one-month time steps. This type of modeling also defines relationships between the different independent variables.

Model selection for general linear models was conducted using the GLMSELECT procedure (SAS Version 9.3). The stepwise selection method was utilized, which adds parameters one at a time and checks the current parameter set for any to remove at each step [42]. The model first fits an intercept to the data, followed by the variable with the lowest p-value. The method continues adding variables in this fashion; however, after each addition it performs a backward elimination to determine whether any remaining explanatory variables should be removed from the model. Thus, the method endeavors to choose the best predictors.
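To make the lag structure and selection procedure concrete, the sketch below builds the nine 8-day lags with pandas and runs a simple p-value-driven forward selection with statsmodels. It is a rough stand-in for SAS GLMSELECT (it omits the backward-elimination pass), and the discharge and flooded-area series are synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Synthetic aligned 8-day series: flooded area (km2) and same-day discharges (m3/s).
df = pd.DataFrame({"flood_km2": rng.random(200) * 3000,
                   "Z": rng.random(200) * 4000,    # Zambezi
                   "K": rng.random(200) * 300})    # Kwando

# Lag each river back nine 8-day steps: Z1 = 8 days prior, ..., Z9 = furthest lag.
for i in range(1, 10):
    df[f"Z{i}"] = df["Z"].shift(i)
    df[f"K{i}"] = df["K"].shift(i)
df = df.dropna()

candidates = [f"{r}{i}" for r in "ZK" for i in range(1, 10)]
chosen = []
while True:
    pvals = {}
    for c in candidates:
        if c in chosen:
            continue
        fit = sm.OLS(df["flood_km2"], sm.add_constant(df[chosen + [c]])).fit()
        pvals[c] = fit.pvalues[c]
    if not pvals or min(pvals.values()) > 0.05:
        break
    chosen.append(min(pvals, key=pvals.get))
print(chosen)  # lags retained by the forward pass
```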
At-Risk Population Analysis

By considering the relationship between the timing, quantity and influence of the discharge of the rivers that feed the Chobe and the resulting inundation extent in the CRB, areas in the region that are at the highest risk of exposure to high magnitude flooding can be identified. Two time-steps of the gridded population datasets for Namibia only, one from 2000 and one from 2011 based on the Namibia 2010 census, were used to identify population epicenters located within high frequency flood areas. The population dataset for Botswana was not included because of the scarcity of data surrounding the CRB. The two population time steps were used to locate regions that experienced the greatest change in population and to determine whether these areas correlate with regions of high frequency flooding. Once the regions of high population density were identified, they were combined with inundation duration maps representing years of high magnitude flooding, years of low magnitude flooding and the cumulative inundation map containing all years of this study, to identify the areas in this region most susceptible to high magnitude flooding and to observe how the number of people impacted changes depending on the scale of flooding.
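The population overlay itself reduces to masking one grid with another once the flood-duration and WorldPop rasters are co-registered (in practice the ~1 km flood maps must first be resampled to the ~100 m population grid). A minimal sketch with placeholder arrays:

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder co-registered grids: people per ~100 m cell and days inundated per year.
pop = rng.random((500, 500)) * 5
flood_days = rng.integers(0, 300, (500, 500))

# People exposed to more than one month / more than three months of flooding.
over_1_month = pop[flood_days > 30].sum()
over_3_months = pop[flood_days > 90].sum()
print(f"{over_1_month:,.0f} people exposed >1 month; {over_3_months:,.0f} >3 months")
```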
Results

The results of the work outlined above are presented in this section, which is composed of four sub-sections. First, the inter-annual spatial extent and duration of inundation in the CRB for the period 2000-2014 are shown with derived flood inundation duration maps. Then, the results of the MODIS LST and hydrological analyses are discussed, including major floods and dry years as well as the controls of the major subsystems. Third, the results of the predictive regression models are discussed. Lastly, at-risk population centers are identified using models of low and high magnitude flooding.

Regional Inundation Duration

Figure 6 shows the inter-annual spatial extent and duration of inundation in the CRB for the period 2000-2014 from March to November of each year. Even in years of low flooding, surface water remains in topographic depressions along the main channel of the Chobe and in regions of the Zambezi and Mamili wetlands for as long as four months. Communities of the region tend to cluster around these semi-permanent water bodies, as will be discussed in Section 6.6. Temporal changes in inundation duration reflect the changes in annual discharge and precipitation. Long-term water bodies were identified in Lake Liambezi, in portions of the northeastern Zambezi wetlands before the bottleneck in the Zambezi east, and along the ridge of the Chobe River. The number of images included in each year's flood frequency map varied depending on the number of viable images, averaging 26 images per year.
Inundation Duration 2000-2014

The inundation duration maps for each individual year were combined to identify regions in the CRB that have experienced the most inundation over the past 15 years (Figure 8). Depressions in the Zambezi east and along the Zambezi and Chobe channels consistently retain surface water. The portion of the Zambezi wetlands adjacent to Kasane tends to maintain floodwater for much of the year. Other areas in the landscape that are often inundated for extended periods of time include Lake Liambezi and regions of the Mamili wetlands.

For verification of the flooding extent derived using the LST methodology, ground control points collected in the field were visually compared with the derived flooding extent of 10 June 2014 (Julian Date 2014161) (Figure 9). The furthest flooding extents marked in the field were located within one pixel (approximately 1 km) of the extents derived from the MODIS imagery. The boundaries of Lake Liambezi and the Chobe Floodplain were especially well correlated, probably due to the large amount of surface water present during the field campaign.
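Deriving the duration maps described above from the per-image water masks is a simple temporal sum. A minimal sketch, assuming each mask represents one 8-day compositing window; the mask stack here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stack of boolean water masks for one year: (n_images, rows, cols).
masks = rng.random((26, 600, 600)) > 0.85

# Each mask covers an 8-day window, so the sum scaled by 8 approximates the
# number of days each pixel was inundated that year.
days_inundated = masks.sum(axis=0) * 8

# Stacking the annual grids across 2000-2014 highlights persistently wet
# depressions, e.g., cumulative = np.sum(np.stack(annual_grids), axis=0).
```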
Typical Seasonal Inundation Dynamics

The MODIS LST imagery provided a way to observe the mechanism by which floodwater moves through the interrelated sub-regions of this unique landscape each year. Figure 10 shows thirty time-steps from 2012, chosen because of the higher duration of inundation relative to years at the beginning of the time series, the large area that was flooded and the number of viable images from this year. The annual flood pulse from the Zambezi can first be seen entering the Zambezi wetlands at the end of February, after passing Katima Mulilo (Day 57). The water moves south-eastward, spreading through the Zambezi wetlands until it reaches a bottleneck at Impalila Island, where it is forced back. The Zambezi wetlands reach peak inundation within a week of maximum discharge of the Zambezi at the Katima Station. After the Zambezi wetlands are fully inundated, floodwater is pushed from the Zambezi east, first through the Bukalo channel and then down the Chobe, usually around the beginning of March. Floodwater then continues to spill over the Chobe floodplain throughout March and April (Days 60-120), depending on the continued discharge of the Zambezi. A smaller flood pulse arrives later in the wet season (~Day 161) from the Kwando, which enters the CRB in the Mamili wetlands. Water can be seen moving down the channel of the Chobe River by the end of March (~Day 88). Years subsequent to wet years, such as 2012, often retain water from the previous year on the landscape in places like Lake Liambezi. The Mamili wetlands remained slightly flooded in the beginning of 2012 because of the large flood events of 2011. The second flood pulse, from the Kwando, can be seen arriving in mid-June (Day 161) of 2012, pushing floodwater northeast up the Linyanti channel where it connects with Lake Liambezi. If the pulse of water in a given year is large enough, it continues to be pushed northeast through the Linyanti channel until it meets Lake Liambezi. When flows from both rivers are high enough, the Chobe floodplain and Lake Liambezi become connected via the Savute channel. The area of inundation falls abruptly in the Zambezi wetlands between July and August, around three or four months after the second peak flow event of the Zambezi.
The majority of the landscape that becomes inundated annually retains surface water for less than four months of the year. The furthest extent of the floodwater in the Zambezi wetlands is sometimes present for only two to three weeks before quickly receding. The same topographic depressions, called milapos, where water remains for 3-5 months per year, discussed by Pricope [9], were noted throughout the Zambezi, Chobe and Mamili wetlands. Milapos are highly fertile depressions that are commonly used for flood-recession agriculture.
Chobe River Basin Statistics

A stepwise regression was constructed with the flooding extent of the Chobe River Basin as the dependent variable and the lagged discharge of the Zambezi and Kwando Rivers as the predictor variables. The stepwise selection method was used, which switches between adding and deleting parameters one at a time [42]. The best predictor of flooding was chosen based on the standardized estimate, which standardizes the independent variables to have a variance of one and compares the effects of potential predictors within a model. The GLMSELECT procedure in SAS was utilized to implement the stepwise regression analysis. The regression yielded the following equation, where Zi and Ki are the observed discharge of the Zambezi and Kwando Rivers, respectively, at the i-th time step:

CRB flooded area = 518.575 + 0.318Z9 + 0.369Z1 + 15.744K1 (1)

Based on the standardized estimates of the variables, the stepwise regression determined that the most influential variable on flooding in the Chobe River Basin is the discharge of the Zambezi River 64 days prior to observed flooding, followed by the discharge of the Zambezi 8 days prior. The immediate flow of the Kwando also influences the extent of flooding in the CRB. All of the yielded predictors, based on a t-test with 178 degrees of freedom, are significant at 99% confidence. The correlation between the selected model of inundation in the CRB and the lagged discharge values yielded an R² of 0.658. When only the lagged discharge of the Zambezi is used in the model, the Zambezi discharge at the ninth time-step is again identified as the best predictor, but with a lower standardized estimate than when the lagged discharges of both rivers are used as variables.

The same statistical methodology was used to identify the best predictor of flooding in the sub-basins of the CRB. Regression analysis using lagged Zambezi and Kwando discharge against flooded area in the Chobe Floodplain Sub-basin yielded a selected model with an R² value of 0.598; all of the variables chosen for the model have p-values of <0.0001, based on a t-test with 175 degrees of freedom. The most influential predictor of flooding extent in the Chobe Floodplain is the discharge of the Zambezi River at the ninth time-step, or 64 days prior to flooding, based on this variable's standardized estimate (0.46).
The selected model for inundation area in the Mamili wetlands identified the flow of the Kwando at the ninth time-step, or 64 days prior, as the most influential predictor based on its standardized estimate (0.57). The selected model yielded an R² value of 0.862, with all variables in the model having p-values of <0.0001 based on a t-test with 175 degrees of freedom. Because of the known nature of this part of the system, the model for flooding in the Mamili wetlands was also run with only the flow of the Kwando as an independent variable, and it identified the same lag as the best predictor.
Linyanti flooded area = −82.766 − 0.013Z1 + 3.316K9 (4)

The selected model yielded an R² value of 0.699, and all of the variables chosen for the model have p-values of <0.0001, based on a t-test with 176 degrees of freedom. The most influential predictor of flooded area in the Linyanti, based on its standardized estimate (0.787), is the Kwando discharge at the ninth time-step, or 64 days prior. The predictive equation for flooding extent in the CRB was modeled and plotted against the observed flooding extent derived from the MODIS imagery (Figure 12). The model performed well with the exception of the extreme peaks in observed inundation that occur during the wet season.
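To make the units of Equation (4) concrete, the one-liner below evaluates it for illustrative discharge values; the inputs are hypothetical, chosen only to show the Kwando term dominating:

```python
def linyanti_flooded_km2(z1: float, k9: float) -> float:
    """Equation (4): Linyanti flooded area (km2) from Zambezi discharge 8 days
    prior (z1) and Kwando discharge 64 days prior (k9), both in m3/s."""
    return -82.766 - 0.013 * z1 + 3.316 * k9

print(linyanti_flooded_km2(z1=1500.0, k9=80.0))  # ~163 km2; the K9 term dominates
```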
Population-Flood Risk Mapping

The population centers of this region are small and clustered around major town centers, such as Kasane, and around roadways and villages throughout the floodplains. The communities that reside on the floodplain usually live on termite mounds and small islands during the wet season. During the wet season, the water level can rise up to 2 m on the floodplains, forcing communities to seek higher ground; in such a flat region, large termite mounds and small islands are their only options [26]. Based on the WorldPop dataset [44], there were approximately 90,000 people living in the CRB in 2011 (Figure 13). Many of these population epicenters are located within communal conservancies surrounding the Chobe River in Namibia, such as the Lusese, Salambala, Kasika and Impalila Conservancies in the northeast, the centrally located Bamunu Conservancy, and Shikhakhu, Dzoti, Wuparo and Balyerwa in the Mamili region. The communities are strategically located along the floodplains to utilize the natural resources and implement flood recession agriculture, but the high magnitude and unpredictable nature of the floods in recent years has put many people in the path of high magnitude flooding.

The inundation duration for 2002 was used as an example to examine how population centers are impacted during a year with low flooding spatial extent (Figure 14). Even in years with low flooding, over 22,000 people are exposed to flooding for over a month in the Namibian floodplain of the CRB. Many communities in the Zambezi east are located in places that receive significant amounts of flooding almost every year. To quantify the number of people living in flood zones during years of high flooding, the inundation duration index of 2009 was used (Figure 14). In a year of high magnitude flooding such as 2009, approximately 53%, or 46,801 people, were living in an area that was flooded for over a month; 22%, or 19,652 people, were living in an area inundated for more than three months of the year. Over 3000 of these people live in an area that was flooded for over six months in 2009. The high flood event of 2009 caused many communities in the Zambezi east, especially within the Impalila, Kabulabula and Kasika conservancies, to be flooded for 3-6 months. The differences between these examples show how variable and unpredictable the extent and duration of flooding can be in this region. The high magnitude flood of 2009, although spatially affecting more of the region, left communities in the Zambezi east flooded for less time than the flooding of 2002.
Discussion

This study analyzed 15 years of flooding in the Chobe River Basin. The area of inundation throughout the Chobe River Basin exhibits extreme spatial and temporal variability and responds to external variables such as the relative discharge of the Kwando and Zambezi, which is controlled by regional precipitation. During interviews, many local officials stated that the floodwater comes from Angola. The lack of precipitation in the CRB during the flooding season supports this statement. The area of maximum flood varies greatly between years, and there is a complex link between the different sub-basins of the CRB. The driest year in terms of both rainfall and inundation area was 2002. The highest observed discharge of the Zambezi River occurred on 22 March 2010 (Julian Date 2010081), followed by 25 March 2009 (Julian Date 2009084), corresponding to the highest observed maximum inundation areas in the CRB for the study period.

It has been suggested that the overall trend in flooding extent in the CRB decreased between 1997 and 2010 [9]. The analysis of annual inundation maxima for the time period of this study, which extends four years further, revealed a positive overall trend, indicating that the area inundated has generally increased since 2000. The positive trend captured in this time series was likely influenced by the high magnitude floods experienced in the CRB since 2009. As shown in Figure 7, the maximum inundation area for 2009-2012 was well above the long-term average of this study. The compiled MODIS images allow the creation of flood frequency maps to identify regions of the landscape that are flooded for extended periods of time. The highest annual inundation maximum was observed in 2009. Results from Long [10] noted the same increase in inundation between 2008 and 2009. The flood of 2009 caused an estimated $5,000,000 worth of damage to infrastructure in Zambia, destroying roads, schools and crops [45]. Not only can floodwaters destroy homes and block main roads, they can also bring dangerous wildlife such as hippopotamuses and crocodiles into residential regions. The large flood of 2009 brought hippos and crocodiles 20 km further inland than usual, resulting in injuries and the deaths of villagers [24].
Flooding in this region is a building process controlled by multiple flood pulses received from the Kwando and Zambezi Rivers. It may also be impacted by antecedent floodwater that can remain within the major channels, as well as in Lake Liambezi, from previous flooding [9]. The antecedent effect can be observed in 2008 and 2012, when a relatively low discharge of the Zambezi resulted in a high flooding extent in the CRB (Figure 15). The common consensus of locals is that the flooding in this region is controlled by the discharge of the Zambezi. Instead, this research indicates that after a year of moderate flooding, a high discharge of the Kwando River can amplify the impacts in the current year, resulting in major flooding that covers vast areas and connects the sub-basins. The predictive models of inundation determined by the GLMSELECT procedure show that flooding in the region is not controlled entirely by one variable. The regression equation determined by Gumbricht et al. [46] to predict the spatial extent of inundation in the Okavango delta also yielded multiple variables that strongly influence the resulting flooding extent. As determined by Gumbricht et al. [46], because evapotranspiration varies very little annually, including it in the predictive model provides no improvement to the analysis.
The nature and strength of El Niño and La Niña govern precipitation patterns for the region, and therefore control the variation of flow in the Zambezi and Kwando, as well as inundation patterns [9]. For example, the low spatial extent of flooding in the CRB in 2002 and 2005 can be attributed to low precipitation totals in the Upper Zambezi sub-basin corresponding to the warm phase of ENSO. The El Niño event of 2009/2010, however, brought above-average rainfall conditions to the region that resulted in the high magnitude floods seen in this study, as well as by Long [10] (Southern Africa Special Report). Climate forecasts from the NOAA Climate Prediction Center (CPC) and the International Research Institute for Climate and Society (IRI) indicate an elevated chance for El Niño to continue through 2015 [47].
The timing of maximum inundation in the CRB is determined by the arrival of the flood pulses from the Zambezi and Kwando Rivers. The annual cyclic lag between peak discharge in the Zambezi and maximum flooding extent can be seen in Figure 15. The multiple regression model identified the discharge of the Zambezi 64 days prior as the best predictor of flooding throughout the CRB. The magnitude of flow in the Zambezi also seems to impact the lag between peak discharge and peak flooded area in the following year. For example, the high discharges of the Zambezi in 2007 and 2010 both resulted in long lag times between peak discharge and peak flooded area. The years following these high discharge events (2008 and 2011), however, did not experience discharge as high as the previous year, yet had much shorter lag times and resulted in large areas of inundation. The lags between peak discharge and flooded area in 2010 and 2011 were relatively short following the high discharge event of 2009. This shorter lag between peak discharge and peak inundation may be a result of the landscape still being flooded from the previous year: if the channels are already full, they surpass bankfull levels sooner and water enters the floodplains at a faster rate. It is logical that even in years with high discharge there is still a significant lag between peak discharge and peak inundation on the floodplain, since the water must first fill the channel before it flows over the banks and onto the floodplain. During years with high discharge (2009-2012), the floodplains are observed to remain inundated for extended periods of time, long after the river discharge has peaked (Figure 15). Following Ogilvie et al. [48], the floodwater recedes from the floodplains at a slower rate than water within the channels because it is trapped in topographic depressions [48]. This can be observed in both the Zambezi and Chobe Rivers with respect to their corresponding floodplains. The stage of the Chobe River at Kasane typically returns to bankfull level by mid-July (DOY 201) (Figure 11). Depending on the magnitude of the flooding, floodplains in this region may remain inundated for weeks following this decline in river level. Because a complete discharge record for the Kwando River is lacking, we cannot make the same assessment for that part of the system.

The Kwando/Chobe Sub-basin, which is usually the driest of the Upper Zambezi region, received a large amount of rainfall in January 2008 [30]. This spike in rainfall caused the discharge of the Kwando River to increase drastically, producing extensive flooding in the Mamili wetlands. The Kwando River, which normally peaks in May-June and quickly recedes, continued to rise, turning the area surrounding the river into swamp [49]. After being dry for many years, the large floods of 2008 marked the onset of increased water levels in Lake Liambezi, which has drawn people from all over the region to its shores to utilize the fishing opportunities.
The benefits and economic opportunities associated with living on a seasonal floodplain draw populations closer to riverbanks, making them more susceptible to large flooding events. The high-magnitude floods observed in recent years have displaced many communities throughout the Zambezi basin. For example, the floods of the Zambezi in 2008 displaced 90,000 people in Mozambique [50]. In the CRB, the number of people affected by flooding can increase by more than 48% between years of low- and high-magnitude flooding. To local farmers and residents, rainfall in this region is unpredictable, making it difficult to determine accurately when to plant crops, and almost every year more people are forced to move because of flooding [29]. High-magnitude floods force entire communities to leave their homes and relocate to drier ground, and resettlement has caused tension between conservancies.

Limitations of this study include the missing MODIS LST imagery throughout the rainy season and the missing years of discharge data for the Kwando River. The relatively coarse resolution of the MODIS dataset limits the detection of small bodies of water and stream channels.

Conclusions

Seasonal and interannual inundation patterns of the CRB were observed using 386 MODIS LST images for the period 2000 to 2014. Results show that the area of inundation in the CRB has varied from 401 km² to 5779 km² and has increased markedly since 2000, with a major rise in 2008 and 2009. The timing and magnitude of flooding in the CRB are influenced by multiple variables. To understand the relationship between relative discharge of the Zambezi and Kwando Rivers and inundation extent in the CRB, as well as sub-regions within the basin, a multiple regression was performed. The model determined that the flooding extent in the CRB is best predicted by the discharge of the Zambezi River 64 days prior to flooding. This type of modelling has the potential to predict the extent of flooding weeks in advance, which could be invaluable for land-use managers and resettlement planning. There are over 46,000 people in the Zambezi region living in areas at high risk of long-term flooding. The majority of these people live in communities throughout the Zambezi wetlands and depend on the seasonal floods for flood-recession, or molapo, farming. This dependence, along with the economic opportunities that floods can provide, often makes it difficult to convince communities to relocate.

With large flood events becoming increasingly common and unpredictable, it is critical to quantify the drivers of flooding in this region and understand how they are changing. Understanding these changes and their implications in the Zambezi Region will aid future regional adaptation and resettlement planning at more local levels.
2.2. Materials

2.2.1. Thermal Imagery (Land Surface Temperature)

A series of level-3 MODIS Land Surface Temperature (LST) and Emissivity products from 2000 to 2014 were used to create a time series of annual floods. LST is a product of the MODIS instrument onboard the EOS (Earth Observing System) AM (Terra) and PM (Aqua) platforms [32]. Land Surface Temperature & Emissivity 8-Day L3 Global 1 km (MOD11A2) images covering 2000 to 2014 were acquired in 2014 from the USGS EarthExplorer (EE) tool.

Figure 1. Study area showing the Chobe River Basin in Namibia and Botswana. Also shown are the rivers that provide water to the region: the Zambezi River, the Linyanti and Chobe Rivers in the center of the basin, and the Kwando River.

Figure 2. Mean monthly precipitation in the Upper Zambezi Sub-basin calculated from TRMM 3B43 data for the period 1998-2014.

Figure 3. Time series of daily discharge of the Zambezi River collected at the Katima Station from 2000 to 2014.

Figure 4. (A) Sample differenced MODIS Land Surface Temperature image for 16 May 2000 using the day and night LST bands; (B) sample inundation extent extracted from the differenced MODIS LST image using Otsu thresholding.

Figure 5. Subsets of the Chobe River Basin used in regression analysis.
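Figure 4 summarizes the water-mapping chain used on these images: difference the day and night LST bands, then split water from land with an Otsu threshold. The sketch below illustrates that chain under stated assumptions (both bands already read into NumPy arrays scaled to kelvin, with invalid pixels set to NaN); it is an illustration of the technique, not the authors' exact processing code:

```python
import numpy as np
from skimage.filters import threshold_otsu

def inundation_mask(lst_day: np.ndarray, lst_night: np.ndarray) -> np.ndarray:
    """Binary water mask from a day-night LST difference via Otsu thresholding."""
    diff = lst_day - lst_night          # water damps the diurnal temperature range
    valid = np.isfinite(diff)
    t = threshold_otsu(diff[valid])     # data-driven split between the two classes
    mask = np.zeros(diff.shape, dtype=bool)
    mask[valid] = diff[valid] < t       # low diurnal range flagged as water
    return mask
```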
Figure 6 shows the inter-annual spatial extent and duration of inundation in the CRB for the period 2000-2014, from March to November of each year. Even in years of low flooding, surface water remains in topographic depressions along the main channel of the Chobe and in regions of the Zambezi and Mamili wetlands for as long as four months. Communities of the region tend to cluster around these semi-permanent water bodies, as will be discussed in Section 3.5. Temporal changes in inundation duration reflect the changes in annual discharge and precipitation. Long-term water bodies were identified in Lake Liambezi, in portions of the northeastern Zambezi wetlands before the bottleneck in Zambezi East, and along the ridge of the Chobe River. The number of images included in each year's flood frequency map varied depending on the number of viable images, averaging 26 images per year.

As shown in Figure 6, the years of low flooding include 2002, 2005, and 2006, all of which are relatively early in the study period. In more recent years, the duration and extent of flooding have increased dramatically, especially from 2009 to 2012. Areas within the landscape that remain inundated for the longest measured time (7-8 months) include Lake Liambezi in 2011-2013, as well as portions of the Mamili Wetlands and the Linyanti channel. The average annual maximum flood for the period 2000-2014 covered 4097 km² of land in the CRB. Figure 7 shows how consecutive years varied from this average.
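The inundation-duration maps (Figure 6) and the flood-frequency map (Figure 8) reduce to per-pixel counting over stacks of the binary masks derived above. A minimal sketch, with the array shapes and the 8-day image spacing as stated assumptions:

```python
import numpy as np

def duration_months(masks: np.ndarray, days_per_image: float = 8.0) -> np.ndarray:
    """Approximate inundation duration (months) per pixel from 8-day binary masks.

    masks: (n_images, rows, cols) boolean stack for one flood season.
    """
    flooded_days = masks.sum(axis=0) * days_per_image
    return flooded_days / 30.44  # mean month length in days

def flood_frequency(annual_max_masks: np.ndarray) -> np.ndarray:
    """Number of years each pixel was inundated at annual maximum (2000-2014 stack)."""
    return annual_max_masks.sum(axis=0)
```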
Figure 6. Spatial extent of inundation duration cycles in the Chobe River Basin calculated from MODIS LST data for the period 2000-2014. Areas within the CRB limit that are white reflect no flooding.

Figure 7. Derived annual inundation maximums from 2000 to 2014. The long-term average of 4097 km² is shown.

Figure 8. Flood frequency in the Chobe River Basin for the period 2000-2014.

Figure 9. Examples of ground control points taken in the field with their corresponding flooding extent derived from MODIS imagery on the 8-day aggregated 10 June image. All shown GCPs were marked on the furthest extent of flooding in that area. Images shown were taken on location with a Trimble Juno SB device.

Figure 10. Time series of classified MODIS LST images showing the movement of surface flooding through the CRB in 2012.

Figure 11. Stage of the Chobe River at Marina Lodge, Kasane, Botswana (data courtesy of the Department of Water Affairs office in Kasane, Botswana).

Figure 12. Comparison of the observed inundation derived from the MODIS LST imagery with that modeled using Equation (3) over the period 2001-2003.

Figure 13. Distribution of population in the Zambezi Region of eastern Namibia.

Figure 14. Population centers located within flood zones in years of low-magnitude (top) and high-magnitude flooding (bottom).

Figure 15. Daily discharge of the Zambezi and derived area of flooding in the Chobe River Basin from 2000 to 2014.

The highest observed discharge of the Zambezi River occurred on 22 March 2010 (Julian date 2010081), followed by 25 March 2009 (Julian date 2009084), corresponding to the highest observed maximum inundation areas in the CRB for the study period. It has been suggested that the overall trend in flooding extent in the CRB decreased between 1997 and 2010 [9]. The analysis of annual inundation maxima for the time period of this study, which extends four years further, revealed a positive overall trend, indicating that the area inundated has generally increased since 2000. The positive trend captured in this time series was likely influenced by the high-magnitude floods experienced in the CRB since 2009. As shown in Figure 7, the maximum inundation area for 2009-2012 was well above the long-term average of this study. The compiled MODIS images allow the creation of flood frequency maps to identify regions throughout the landscape that are flooded for extended periods of time. The highest annual inundation maximum was observed in 2009. Results from Long [10] noted the same increase in inundation between 2008 and 2009. The flood of 2009 caused an estimated $5,000,000 worth of damage to infrastructure in Zambia, destroying roads, schools, and crops [45].

Ground data were collected at major access points along different types of roads and by boat. Standing water, sites with homogeneous vegetation and relative soil moisture data were collected to train the signal obtained from the thermal imagery at the scale of this study area. Data collected at each site included topography, inundation, soil type, soil color, soil moisture and vegetation type. A total of 69 training samples were collected throughout the basin to validate the thresholding technique, allowing us to create classes of inundation frequency. The derived inundation extents are compared with these training samples in the results section.

2.3.4. Regression Analysis of Flood, Discharge and Precipitation Patterns
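The kind of lagged regression this section introduces, relating flooded area to Zambezi discharge observed a fixed number of days earlier (the 64-day lag reported in the Discussion), can be sketched as follows. This is an illustration of the approach, not the authors' exact Equation (3); the ordinary-least-squares form and the daily alignment are assumptions:

```python
import numpy as np
import pandas as pd

def lagged_fit(discharge: pd.Series, area: pd.Series, lag_days: int = 64):
    """OLS fit of flooded area against discharge observed lag_days earlier."""
    q_lagged = discharge.shift(lag_days, freq="D")   # discharge at t-64 aligned to t
    df = pd.concat({"q": q_lagged, "area": area}, axis=1).dropna()
    slope, intercept = np.polyfit(df["q"], df["area"], deg=1)
    pred = intercept + slope * df["q"]
    r2 = 1 - ((df["area"] - pred) ** 2).sum() / ((df["area"] - df["area"].mean()) ** 2).sum()
    return slope, intercept, r2
```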
Choline Chloride-urea Deep Eutectic Mixture-Water for the Synthesis of an Amphiphilic Compound of Glyceryl Monocaffeate

Abstract: The lipase-catalyzed synthesis of glyceryl monocaffeate (GMC) in a choline chloride-urea natural deep eutectic solvent (NADES) medium is reported to provide amphiphilic character to caffeic acid (CA). The modification of CA into GMC could potentially increase its solubility and widen the application of CA's biological activities in water- and oil-based systems. High conversion was achieved when the reaction was carried out with the addition of more than 20% v/v water, at a high molar ratio of glycerol, and at 40 °C. It was found that the lipase-catalyzed transesterification of ethyl caffeate (EC) and glycerol in the choline chloride-urea DES medium obeyed a ping-pong bi-bi mechanism with V_max = 10.9 mM·min⁻¹, K_mEC = 126.5 mM and K_mGly = 1842.7 mM.

Caffeic acid (CA) exhibits antimicrobial, anticancer, anti-inflammatory, and anti-HIV properties [3,4]. Nonetheless, its lipophilicity, and hence its effectiveness in oil systems, is restricted by its low solubility in non-polar media, which limits its application in oil-based food, pharmaceutical, and cosmetic industries [5]. CA also has low solubility in water systems [6]. Thus, modification of CA into derivatives, especially caffeic acid esters, would overcome the limitations posed by CA as well as enhance its biological activities. A caffeic acid ester was initially identified in oats [7]. GMC is a caffeic acid ester derivative with solubility in water about 3 times that of CA (1.76 mg·mL⁻¹ in water at 20 °C). The enhancement of hydrophilicity and lipophilicity in GMC could provide a promising new type of compound that is both water- and oil-soluble with a plethora of biological activities. GMC can be synthesized via a lipase-catalyzed transesterification reaction in an organic solvent under mild reaction conditions. Carrying out the reaction in an organic solvent may increase the solubility of the substrate and improve yield; however, this approach runs counter to the green chemistry concept. As an alternative, deep eutectic solvents, and specifically natural deep eutectic solvents (NADES), have emerged as substitutes for organic solvents. Recent successes with enzyme-catalyzed reactions in NADES have been described in numerous reports [9-11]. Besides NADES, propylene carbonate could also be used as a green solvent for lipase-catalyzed synthesis, as recently demonstrated by Cumming and co-workers for the synthesis of GMC [12]. However, our work focused on the feasibility of a choline chloride-urea NADES reaction medium for the synthesis of GMC via an immobilized lipase-catalyzed reaction of ethyl caffeate (EC) and glycerol. In addition, the possible reaction mechanism was investigated to provide valuable information on biocatalytic reactions in this medium.

Materials and chemicals

The lipase used for the transesterification reaction originated from Candida antarctica lipase B immobilized on acrylic resin (Novozyme 435). CA and urea were purchased from Acros Organics. Choline chloride, glycerol, and 4-nitrophenyl butyrate were purchased from Sigma-Aldrich. Diethyl ether, ethanol, anhydrous magnesium sulfate, acetonitrile, disodium hydrogen orthophosphate, and acetic acid were obtained from Fisher Scientific. Sodium carbonate, sulfuric acid, sodium chloride, sodium hydroxide, and methanol were from Merck-Millipore. The standard substrate, EC, was purchased from Enzo Life Sciences.
Preparation of NADES

The NADES used in the current study is based on a eutectic mixture of choline chloride and urea, prepared according to the procedure described in [13]. Choline chloride and urea are hygroscopic, so they were dried in a vacuum oven at 0.5 bar and 60 °C before further use. The dried choline chloride and urea were mixed at a molar ratio of 1:2 in a sealed container. Subsequently, the mixture was heated at 100 °C and slowly stirred, forming a colourless liquid after approximately two hours. The duration required to form the liquid depends on the total volume, temperature, and homogeneity of the mixture.

Synthesis of ethyl caffeate

EC was synthesized through an acid-catalyzed esterification reaction according to the procedure described in [14]. 2.5 g of caffeic acid, 200 mL of ethanol and 2 mL of H2SO4 were mixed in a round-bottom flask. The reaction proceeded at 60 °C under reflux with continuous stirring at 400 rpm for about 8 h. Molecular sieves were added beforehand to the reaction mixture to absorb the water by-product produced during the reaction. The reaction mixture was then cooled and neutralized with 10% v/v sodium carbonate. The EC produced was extracted using diethyl ether to form a clear yellowish solution. Subsequently, the trace amount of water was removed using anhydrous magnesium sulfate. The liquid mixture was collected and the EC was recovered from diethyl ether using a rotary evaporator at reduced pressure. The formation of EC was confirmed using nuclear magnetic resonance (NMR) analysis.

Enzyme activity assay

The activity of the enzyme was assayed using 4-nitrophenyl butyrate (PNPB) as a standard substrate. The assay mixture was 100 mM sodium phosphate buffer solution with the addition of 0.5% v/v Triton X-114. The pH of the resulting assay mixture was adjusted to pH 7.2 at 37 °C with a 1 M sodium hydroxide solution. Meanwhile, 50 mM PNPB was prepared in acetonitrile as the standard substrate solution. During the assay, 10 mg of the immobilized enzyme is added to 1 mL of assay reagent and then incubated in an orbital shaker at 37 °C for 30 min. Afterward, 0.01 mL of standard substrate solution is added and the reaction allowed to proceed for 10 min at 37 °C. The reaction solution was filtered using a syringe filter and poured into a cuvette to remove the immobilized enzyme. The change of colour was quantified using a UV-Vis spectrophotometer at 400 nm with a 1 cm light path. All assays were performed in triplicate. The unit activity was calculated using the following equation:

Unit activity (IU) = (A × V_T × df) / (t × V_Enz × ε)    (1)

where A is the absorbance value (abs), V_T is the total volume of the assay (mL), df is the dilution factor, t is the assay duration (min), V_Enz is the volume of the enzyme assay solution and ε is the millimolar extinction coefficient of 4-nitrophenol at 400 nm (1.48 × 10⁵ mM⁻¹·cm⁻¹). One unit (IU) will release 1.0 micromole (10⁻⁶ mol) of 4-nitrophenol per minute at pH 7.2 and 37 °C using 4-nitrophenyl butyrate as a substrate.

Lipase-catalyzed transesterification reaction

The substrates EC and glycerol were dissolved in 5 mL of DES with added water. The resulting mixture was stirred for a few minutes to homogenize. The reaction was carried out in an incubator shaker and started when the immobilized enzyme was added to the reaction medium. A 100 μL sample was periodically withdrawn from the reaction medium and diluted in 900 μL methanol.
Then, the sample was filtered using a 0.2 μm syringe filter. The dissolved components in the sample were analyzed using high-performance liquid chromatography (HPLC). All experiments were conducted at least in triplicate. The conversion of EC was calculated using the following equation:

Conversion (%) = (C_EC,0 − C_EC) / C_EC,0 × 100    (2)

where C_EC,0 and C_EC are the initial and final concentrations of EC. Since the conversion was taken as the amount of EC consumed, it should not be taken as an absolute measure of GMC formation, as there is the possibility of CA forming through the reverse reaction.

Determination of kinetic mechanism

The experiment to study the kinetic mechanism of the reaction was carried out in the same reaction medium. The reaction temperature was 40 °C with agitation at 200 rpm. The unit activity and reaction period were kept constant. The initial rate of reaction was taken from the time-progress curve of the product. The concentrations of EC and glycerol were varied between 0.02 to 0.1 M and 1.0 to 1.8 M, respectively. A Lineweaver-Burk plot was used to determine the kinetic mechanism involved.

HPLC analysis

The concentrations of the substrate and product present in the samples withdrawn from the reaction medium were quantified using HPLC coupled with an ultraviolet (UV) detector at a wavelength of 325 nm. The injection volume and the flow rate were set at 0.5 μL and 0.5 mL·min⁻¹. An (R,R) Whelk-O1 chiral column (Regis Technologies, USA) was used for the detection and separation of substrate and product at an oven temperature of 40 °C. The mobile phase consisted of an isocratic mixture of methanol and deionized water (DI) with acetic acid added as a modifier for better separation (methanol : DI with 0.5% v/v acetic acid, 80:20 v/v). Data were acquired for 15 minutes and no obvious peaks appeared after that period.

Nuclear magnetic resonance (NMR) study

The study was conducted at the School of Chemical Sciences, Universiti Sains Malaysia, using proton nuclear magnetic resonance (¹H-NMR) to confirm the structure of the EC synthesized. The sample was dissolved in deuterated acetone (acetone-d6) for the analysis.

Electrospray ionization-mass spectrometry (ESI-MS) study

The qualitative identification of the GMC produced was carried out by the MUPA Laboratory, School of Chemical Sciences, Universiti Sains Malaysia, using a liquid chromatography-mass spectrometer (LCMS-TOF Agilent 1290) with the electrospray ionization (ESI) method. The total run time was 3 min.

Chemical characterization and analysis

The results of the chemical characterization are shown in Fig. 2. The structure of the EC synthesized was confirmed using NMR analysis; the ¹H-NMR data are similar to those reported by Xiang and co-workers [15] (Fig. 2). The composition of the reaction mixture was analyzed using HPLC. The peaks at 8.8 and 9.4 min (Fig. 2(b)) correspond to EC, as confirmed by comparison with the chromatogram of the standard EC obtained from Sigma (Fig. 2(c)). The two peaks appearing next to each other at different retention times might be due to the stereoisomers of EC (cis-EC and trans-EC), whereas the peak at 6.8 min belongs to the products. The GMC and EC present in the reaction mixture were further analysed using electrospray ionization mass spectrometry (ESI-MS). The compounds can be identified based on their molecular weights, which appear in the mass spectrum, as shown in Fig. 2.
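Both working equations above are simple arithmetic; a minimal sketch implementing Eq. (1) (unit activity) and Eq. (2) (EC conversion) is given below. The numbers in the example call are illustrative, and the extinction-coefficient default simply reproduces the value quoted in the text:

```python
def unit_activity(a_abs, v_total_ml, dilution_factor, t_min, v_enz_ml,
                  eps_mm_cm=1.48e5):
    """Eq. (1): lipase unit activity (IU) from the PNPB absorbance assay."""
    return (a_abs * v_total_ml * dilution_factor) / (t_min * v_enz_ml * eps_mm_cm)

def ec_conversion(c_ec_0, c_ec):
    """Eq. (2): percent of ethyl caffeate consumed."""
    return (c_ec_0 - c_ec) / c_ec_0 * 100.0

# Illustrative numbers only: 0.10 M EC at t = 0 falling to 0.013 M gives 87%.
print(round(ec_conversion(0.10, 0.013), 1))
```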
Effect of enzyme loading

The unit activity of the enzyme was varied between 250 and 1500 IU to investigate its effect on EC conversion in a pure NADES solvent system. As expected, the conversion of EC was very low (22%) because pure DES is highly viscous. Previously, Durand and co-workers reported an even lower conversion of 2% during esterification of a phenolic acid ester and 1-octanol in a pure eutectic mixture of choline chloride:urea [16]. Besides, the association of EC and glycerol as part of the NADES hydrogen-bond network is also a concern: the formation of hydrogen bonds between the substrates and the NADES may hinder the mechanism of the reaction, especially the formation of a carbocation. Regardless, the EC conversion gradually increased when a higher unit activity of the enzyme was used, as shown in Fig. 3(a). The conversion reached a maximum when the unit activity was 1250 IU, but decreased when the unit activity was increased to 1500 IU. Based on observation, the immobilized enzyme agglomerated at high loading in the reaction medium. This phenomenon is due to the hydrophobic nature of the microporous acrylic resin support, or may be due to the lower mobility of the immobilized enzyme in the viscous medium. This observation is similar to those reported by Sun and co-workers [17]. As a result, high loading gives a poor distribution of the immobilized enzyme in the medium, high steric hindrance, and diffusion limitation of the substrate. However, these drawbacks can be mitigated by reducing the viscosity of the NADES through the addition of water [18]. The mobility and dispersion of the immobilized enzyme should be considered, as its hydrodynamic behavior is strongly influenced by its chemical characteristics, whether in aqueous, organic, or DES media.

Effect of reaction time

The reaction period required to reach maximum conversion is often overlooked; taking data too early might lead to wrong conclusions in some cases because the reaction is not complete. In the present work, the data were taken when the reaction had reached a plateau. Several parameters can prolong the time required to reach maximum conversion, such as the unit activity of the enzyme (i.e., specific activity or enzyme loading), temperature, and substrate concentration. In the present work, the reaction time was varied between 15 and 240 min, as depicted in Fig. 3(b). The enzyme activity, temperature, water content, and EC:glycerol molar ratio were fixed at 1250 IU, 40 °C, 20% v/v, and 1:50, respectively. The conversion rate is relatively fast, reaching 62% in 15 min. After 60 min the reaction began to plateau at 87% conversion. In comparison, 72 h was required for the synthesis of octyl p-coumarate by Novozyme 435 in the same NADES medium [16]. This positive result might be due to the higher unit activity and EC:glycerol ratio employed. Fast conversion increases productivity; however, an excess amount of enzyme may not be cost-effective. Hence, it is suggested to conduct cost and yield optimization analysis before actual production. In this study, most of the data were taken after 4 h to allow the reaction to reach the highest conversion.

Effect of substrate molar ratio

The transesterification reaction is a reversible process, and it is very common to shift the reaction equilibrium toward ester formation by using one of the substrates in excess.
In the present work, glycerol was used in excess, with the EC:glycerol molar ratio varied between 1:40 and 1:90. An EC conversion of more than 95% was achieved, as anticipated. Based on Fig. 3(c), the conversion of EC increases from 95% to 98% as the molar ratio of EC to glycerol is increased up to 1:50. Further increases in the molar ratio maintain the EC conversion above 98%. This might be attributed to a lower degree of hydrolysis of the product. It must be noted that glycerol is viscous, so an excessive amount may increase the total viscosity of the medium. In the present work, 20% v/v of water was added for dilution.

Effect of water content

The viscosity of NADES causes problems in their application as separation or reaction media. A viscous solvent exhibits a poor hydrodynamic profile and is difficult to operate with. Hence, the addition of water has been suggested to reduce viscosity and make the solvent more convenient to handle [16]. The water content was varied from 0 to 40% v/v during the lipase-catalyzed transesterification of EC and glycerol. As expected, the effect of water addition on the enzymatic reaction in DES media is very pronounced, as depicted in Fig. 3(d). Without water, the conversion of EC in the NADES medium is merely 22%. The EC conversion improves significantly when 5% v/v of water is introduced. This result implies that the choline chloride/urea NADES was unable to replace the role of water in the catalytic action of the enzyme. A complete conversion of the EC is possible when the water content is over 20% v/v. However, the increase in conversion might not be entirely due to transesterification or reduced viscosity, but rather due to the competing hydrolysis of ethyl caffeate to caffeic acid in the presence of an excess amount of water. Hence, we recommend future investigation of the extent of hydrolysis during the transesterification of ethyl caffeate and glycerol within the same reaction system. Even though water is required for enzyme hydration, a small quantity is usually sufficient for transesterification. If the hydrogen bonds between the water and choline chloride are unable to inhibit the hydrolysis reaction, dilution with other non-reactive compounds should be considered.

Effect of reaction temperature

Chemical synthesis at lower temperatures reduces energy demand and utility costs. However, lower temperatures do not necessarily give better conversion, even for biocatalysts such as enzymes; an optimal temperature always exists for any enzyme-catalyzed reaction. Thus, the optimal temperature for the lipase-catalyzed transesterification of EC and glycerol was investigated between 30 and 60 °C. It was found that the conversion of EC to GMC gradually increased with higher temperatures, as shown in Fig. 4(a). However, this trend was insignificant at temperatures above 40 °C, where the conversion remained close to 90%. This result is consistent with previous investigations in which Novozyme 435 showed optimal performance between 40 and 45 °C [19,20]. The enzyme employed is rather productive at lower temperature, giving 72% conversion at 30 °C.

The activation energy of the reaction

During a chemical reaction, sufficient energy is needed to overcome the energy barrier, called the activation energy (E_a), to form the product. In an enzymatic reaction, the enzyme increases the rate of reaction by lowering the activation energy.
The activation energy for the lipase-catalyzed reaction can be estimated from an Arrhenius plot: the natural logarithm of the initial rate of reaction (ln v) against the reciprocal of absolute temperature (1/T). Based on the negative slope of the Arrhenius plot in Fig. 4(b), the estimated E_a value is 50.4 kJ·mol⁻¹ with an R² value of 0.923. Enzymes with low activation energy are preferable for industrial applications. Zanin and co-workers reported that the normal activation energy for most enzymes is smaller than 104.6 kJ·mol⁻¹ [21]. In this study, the value of the activation energy is comparable with the one reported by Sun and co-workers [22].

Optimization of parameters

Three significant parameters were chosen based on the screening study: enzyme loading, water content, and reaction time. The effect of the substrate molar ratio was excluded since the reaction was carried out with excess glycerol, whereas the temperature was fixed at 40 °C as its impact was minimal. The optimization was carried out using a central composite design (CCD) available in the Design-Expert software. Based on the analysis, a quadratic model was proposed (p < 0.05) to estimate the conversion of the EC, with R² = 0.9344 and a lack of fit of 0.0728 (p > 0.05). The final equation expressed in coded factors for conversion of the substrate is represented as Eq. (3). The chosen parameters indeed significantly affect the conversion of EC, in the following order: water content (p < 0.0001) > enzyme loading (p = 0.0027) > reaction time (p = 0.0029). However, the interactions between parameters were insignificant for EC conversion, with p-values greater than 0.4 at a 95% confidence level. This means the other parameters do not significantly modify the relationship between the independent and dependent parameters. For instance, the relationship between enzyme loading and EC conversion is not affected by water content or reaction time; its behavior remains the same regardless of the amount of water and the duration of the reaction. A 3D response surface plot of the parameter interactions is shown in Fig. 5 for better illustration of their impact on EC conversion.

Enzyme kinetic analysis

The study of the kinetic mechanism of the lipase-catalyzed transesterification of EC and glycerol was carried out to understand the order in which substrates bind and products are released from the active site of the enzyme. For a reaction between two substrates, there are three possible reaction mechanisms: random-sequential, ordered-sequential, and ping-pong bi-bi. The sequential mechanisms involve the formation of a ternary enzyme-substrate complex, whereas the ping-pong mechanism involves the formation of a secondary enzyme-substrate complex. To determine the mechanism, a double-reciprocal plot of reaction rates against initial substrate concentrations (Lineweaver-Burk plot) was adopted; the trend is shown in Fig. 6. Both plots show parallel straight lines as the substrate concentration increases, which means the biocatalytic reaction complies with the ping-pong bi-bi mechanism. This conclusion is in agreement with most works related to transesterification by Novozyme 435 [23-25]. However, there is no indication of substrate or product inhibition, since the rate of reaction maintained an upward trend as the concentrations of both substrates increased. Previously, the alcohol substrates and, in some cases, the product were reported to be inhibitors.
2-Phenylethanol and n-butanol are the inhibitors during the lipase-catalyzed synthesis of ethyl 3-phenylpropanoate and caffeic acid phenethyl ester, respectively [23,25], whereas citronellyl acetate is the inhibitor during the lipase-catalyzed transesterification between vinyl acetate and citronellol [24]. To confirm that the substrates are not inhibitors, the data were fitted to four models: ping-pong bi-bi without inhibition (Eq. 4), ping-pong bi-bi with inhibition by EC (Eq. 5), ping-pong bi-bi with inhibition by glycerol (Eq. 6), and ping-pong bi-bi with inhibition by both EC and glycerol (Eq. 7):

v = V_max[EC][Gly] / (K_mGly[EC] + K_mEC[Gly] + [EC][Gly])    (4)

v = V_max[EC][Gly] / (K_mGly[EC](1 + [EC]/K_iEC) + K_mEC[Gly] + [EC][Gly])    (5)

v = V_max[EC][Gly] / (K_mGly[EC] + K_mEC[Gly](1 + [Gly]/K_iGly) + [EC][Gly])    (6)

v = V_max[EC][Gly] / (K_mGly[EC](1 + [EC]/K_iEC) + K_mEC[Gly](1 + [Gly]/K_iGly) + [EC][Gly])    (7)

Subsequently, the values of the kinetic parameters were estimated using non-linear regression analysis and are tabulated in Table 1. The results showed that ping-pong bi-bi without inhibition gave a very small sum of squared errors (SSE) of 7.4 × 10⁻⁴, indicating a good fit of the model. Meanwhile, the other models gave unrealistic and negative kinetic parameters as well as higher SSE values. The maximum rate of reaction, V_max, is estimated at 10.9 mM·min⁻¹, whereas the Michaelis constant for EC, K_mEC, was 126.5 mM and the Michaelis constant for glycerol, K_mGly, was 1842.7 mM. The K_m value can be interpreted as the affinity of the enzyme towards the substrate: a low K_m value means a higher affinity of the enzyme for the substrate. It can be observed that K_mEC is lower than K_mGly, which means that the lipase tends to bind EC rather than glycerol. This is because lipase tends to form the enzyme-substrate complex first with the acyl donor, as mentioned earlier [26]. From the Lineweaver-Burk plot and the regression analysis, it can be concluded that the reaction mechanism obeys ping-pong bi-bi without substrate inhibition. It is proposed that EC binds to the lipase to form an enzyme-EC complex, which isomerizes into an enzyme-acyl-ethanol complex and releases ethanol as the first product. Then, glycerol combines with the acyl-enzyme intermediate to form an enzyme-acyl-glycerol complex, which again undergoes isomerization to an enzyme-GMC complex. Finally, the second product, GMC, is released and the enzyme regains its original conformation. The proposed reaction scheme is illustrated in Fig. 7.

Conclusion

Based on our study, we conclude that it is difficult to work with pure NADES (choline chloride:urea) media for the lipase-catalyzed transesterification of EC and glycerol: they are viscous and limit the potential of lipase as a catalyst. However, when water is added they are comparable to aqueous and organic media. As in other media, the lipase-catalyzed transesterification of EC is influenced by enzyme activity, water content, substrate molar ratio, temperature, and reaction time. An EC conversion of over 90% is possible with optimized reaction conditions. In addition, the lipase-catalyzed synthesis of GMC follows the ping-pong bi-bi mechanism, and no substrate inhibition was observed within the studied range. This positive result offers an alternative to aqueous media for the green synthesis of GMC from EC. However, the formation of CA via the reverse reaction is a concern with the present approach.

Conflicts of Interest

There are no conflicts of interest to declare.
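As a closing illustration of the kinetic analysis above, the no-inhibition rate law, Eq. (4), can be fitted to initial-rate data by non-linear least squares, and its SSE compared against the inhibition variants fitted the same way. The sketch below uses a synthetic rate grid generated from the reported parameters in place of the real measurements; the arrays and starting guesses are illustrative, not study data:

```python
import numpy as np
from scipy.optimize import curve_fit

def ping_pong(X, vmax, km_ec, km_gly):
    """Eq. (4): ping-pong bi-bi rate law without substrate inhibition."""
    ec, gly = X
    return vmax * ec * gly / (km_gly * ec + km_ec * gly + ec * gly)

# Synthetic initial-rate grid standing in for measured data (concentrations in mM).
ec = np.tile(np.linspace(20, 100, 5), 5)
gly = np.repeat(np.linspace(1000, 1800, 5), 5)
v = ping_pong((ec, gly), 10.9, 126.5, 1842.7)

popt, _ = curve_fit(ping_pong, (ec, gly), v, p0=[10.0, 100.0, 1500.0])
sse = float(((v - ping_pong((ec, gly), *popt)) ** 2).sum())
print(popt, sse)  # recovers the generating parameters with near-zero SSE
```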
Making voluntary medical male circumcision services sustainable: Findings from Kenya's pilot models, baseline and year 1

Voluntary medical male circumcision is a crucial HIV prevention program for men in sub-Saharan Africa. Kenya is one of the first countries to achieve high population coverage and seek to transition the program to a more sustainable structure designed to maintain coverage while making all aspects of service provision domestically owned and implemented. Using pre-defined metrics, we created and evaluated three models of circumcision service delivery (static, mobile and mixed) to identify which had the potential to sustain high circumcision coverage among the 10-14-year-old group, a historically high-demand and accessible age group, at the lowest possible cost. We implemented each model in two distinct geographic areas, one in Siaya and the other in Migori county, and assessed multiple aspects of each model's sustainability. These included numerical achievements against targets designed to reach 80% coverage over two years; quantitative expenditure outcomes, including unit expenditure and its primary drivers; and qualitative community perception of program quality and sustainability based on Likert scales. Outcome values at baseline were compared with those for year one of model implementation using bivariate linear regression, unpaired t-tests and Wilcoxon rank tests as appropriate. Across models, numerical target achievement ranged from 45-140%, with the mixed models performing best in both counties. Unit expenditures varied from approximately $57 in both counties at baseline to $44-$124 in year 1, with the lowest values in the mixed and static models. Mean key informant perception scores generally rose significantly from baseline to year 1, with a notable drop in the area of community engagement. Scores were consistently low for domestic financing of service provision. Sustainability-focused circumcision service delivery models can successfully achieve target volumes at lower unit expenditures than existing models, but strategies for domestic financing remain a crucial challenge to address for long-term maintenance of the program.

Introduction

…sustainability policy, plans and institutions [15,16]; and embedding VMMC services in cross-referral networks with priority health services [17]. But important questions remain in the service delivery realm, including what age groups to target; what delivery modality or modality mix to use; whether local implementing partners should remain involved or be replaced by Ministry of Health (MoH) staff; and whether staff should be dedicated to VMMC alone or perform VMMC as one among other service responsibilities. Finally, because implementing countries have substantial internal variations in key factors like population density and transportation access to health care, no single model may be appropriate throughout the different settings in a given country.

Despite this lack of concrete experience, VMMC sustainability may become more feasible in the maintenance phase. A widely accepted service delivery approach is to continue circumcising rising adolescents. If this approach is taken, service delivery would concentrate on venues like schools, which deliver large numbers of clients per location, rather than stretching resources to reach broader age groups across more diverse venues.
The intrinsic high demand for VMMC among adolescents may also lead to decreased need for demand creation and to cost savings as circumcision during adolescence becomes the norm. To address the sustainability experience gaps in a country that has nearly completed its scaleup phase, the US Centers for Disease Control and Prevention, Kenya's National AIDS and STI Control Program (NASCOP), and the Ministries of Health of Migori and Siaya counties in Kenya collaborated to organize, evaluate and refine adolescent service models intended to achieve sustainability. Progress towards sustainability was measured using a set of customized metrics largely drawn from the PEPFAR SID and designed to capture a wide range of both standard and novel parameters. The early adolescent age group (10-14 years) was targeted, due primarily to its historically high uptake of VMMC services in Kenya and its efficient accessibility through schools, detailed further below. Here, we describe the approach taken and the findings from baseline and the first year of implementation.

Study design

The full study protocol is available in S1 Appendix. We conducted a prospective pre-post assessment of static, mobile and mixed models, each implemented in a purposively selected cluster of wards in Siaya and Migori counties of Kenya, for a total of six clusters. Wards are Kenya's smallest electoral units within each county, with a typical population of 10,000-20,000 people. Since program inception, all three models evaluated in this study had been implemented seamlessly across the country at the discretion of implementing partners, but without formal assessment focused on sustainability. Under the evaluation protocol, the target age band for all three models was boys aged 10-14 years. For safety, 10 years was the lower age limit for PEPFAR-supported VMMCs outside infancy at the time. The 10-14-year age band was chosen for this evaluation because of: a) its rapid contribution to maintaining coverage among older, sexually active age groups as the boys age; b) its intrinsically high demand for VMMC (e.g., 60% of all Kenyan VMMC clients were aged 10-14 years in 2017 [10]); c) the logistical efficiency of accessing them through schools, given that Kenya has 72% net male primary school attendance [18]; and d) the necessity of circumcising adolescents for at least the next decade regardless of strategy, because even with adoption of an infant circumcision strategy, adolescent MC would still need to remain available to cover boys past two months of age who could not safely get MC until they reached the lower age cutoff.

In each study cluster, the VMMC services were reorganized to prioritize boys 10-14 years and to be provided solely through one of the three models. Across all study clusters, clients above 14 years who presented for VMMC were offered the service but were neither actively recruited nor counted towards the evaluation targets. Mobilisers focused on activities reaching younger adolescents, such as direct meetings with schoolteachers, parents, and village leaders, and leveraging health promotion events planned in the wards by the Ministry of Health team. They were refreshed monthly and their outputs reviewed weekly to ensure fidelity. It was envisioned that exclusive focus on 10-14-year-olds in clearly defined clusters would make it possible to measure any efficiency or program quality gains associated with each model, which are critical for sustainability.
Each study cluster consisted of 2-6 contiguous wards where only one service delivery model was implemented. The clusters were purposively selected through consensus with the county Ministries of Health and the implementing partners operating locally. Factors considered in selection and assignment of models to clusters were: a) to leave 'buffer' areas (not implementing any study model) between different clusters; b) to avoid including in the study clusters wards that border areas with large traditionally circumcising populations, from where many adolescents would potentially cross over to seek VMMC as an alternative to cultural circumcision; and c) to include one sparsely populated rural cluster for the mobile model, one peri-urban cluster with a moderately dense population for the mixed model, and one densely populated urban cluster for the static model.

A. In the clusters assigned a 'static' model, general clinicians stationed in a static health facility offered VMMC to clients who came requesting it. Outreach and mobile services were suspended. The VMMC teams determined the number and distribution of static sites sufficient to meet all VMMC needs of the cluster. Mobilisers were trained to refer 10-14-year-old potential VMMC clients to these sites, as well as any older men who requested the service.

B. In the clusters assigned a 'mobile' model, a dedicated mobile VMMC team was responsible for maintaining VMMC coverage over a large catchment area via year-round outreach visits to multiple venues. Teams had a 'base' site but followed a set monthly schedule of service delivery in outlying health facilities and community sites within the cluster. Clients aged 15+ years were served at the 'base' sites. Clients aged 10-14 years were referred by mobilisers to the mobile/outreach services only.

C. In clusters assigned a 'mixed' model, providers offered both static and mobile service models. Teams chose their strategy mix between the two models freely. Adolescent-targeted, campaign-style demand creation and services were conducted at opportune times in the school year.

The national routine VMMC data collection tool used in the study clusters was expanded to include evaluation-specific variables, such as the ward of residence for each 10-14-year-old client circumcised. The evaluation was based on numerical target achievement and expenditure data plus key informant data in each model area, as well as consensus lessons learned from the investigators and implementers. The broad mix of outcomes was needed to cover the wide spectrum of elements that are potentially important to program sustainability and to explore additional considerations needed to make VMMC sustainable. VMMC services were provided by the implementing partners already responsible for these services in the study areas: Center for Health Solutions (CHS) in Siaya, and University of Maryland Baltimore (UMB) in Migori through subgrantee Impact Research and Development Organization (IRDO). These partners were responsible for reporting service delivery and expenditure data from each study cluster. Jhpiego, an international academic partner providing technical support and coordination for VMMC services in Kenya, conducted data collection from key informant interviews and consensus lessons learned from the investigators and implementers, as well as analysis.
Jhpiego also tracked any administrative markers of the progress of VMMC service transition from implementing partners to the county ministries of health, such as staff employment and inclusion of VMMC in government strategic documents and budgets. In the analysis described here, primary outcomes in the baseline period (November 2016-October 2017) and Year 1 (actually the 10-month period December 2017-September 2018, truncated to align with the PEPFAR fiscal year) were compared. November 2017 was a planning period falling between the baseline and year one implementation periods.

Baseline conditions, interventions and data collection

At baseline, all selected study clusters had ongoing VMMC services, provided through delivery approaches selected by partners but typically more similar to the 'mixed' model than to the others used in the study. Typically, these involved both year-round services performed by staff at their static facilities and mobile or outreach services provided in tents or peripheral health facilities. In keeping with national age prioritization, they emphasized recruitment of men 15-49 years [19], but circumcised any age-eligible boy or man on request and often held school-based campaigns. During the baseline year, November 2016-October 2017, partners collected routine programmatic data regularly reportable to PEPFAR, including client numbers and ages. Notably, the baseline year was also the service startup year under new CDC/PEPFAR cooperative agreements for the partners working in both counties. Partners and the county MoHs also maintained routine internal expenditure records at the county level, without disaggregation to the study cluster level. These expenditures, captured by activity type using internal categorization systems, were reported to the project evaluation team, retrospectively aligned with the study's activity categories, and designated as recurrent vs. investment by the investigators. Recurrent expenditures included administration, medical and non-medical supplies, program supplies, communication, vehicle maintenance and renovations, sensitization meetings and supervision, training of service providers including refresher trainings, salaries for contracted staff, transport and personnel costs, utilities (e.g., water and electricity bills), expenditures on technical working groups, allowances and expenditures on County Health Management Teams (CHMTs) and Sub-County Health Management Teams (SCHMTs), supportive supervision, logistical coordination with schools and other sources of adolescents for recruitment, transportation, and backup supply chain support. Investment expenditures included renovating or improving minor theatres in the county health facilities, renovating central sterile services departments (CSSD) in the health facilities, building incinerators, and improving water supply, management and distribution systems for VMMC commodities, information technology infrastructure, office equipment and medical equipment.

At the study launch in November 2017, each study cluster was assigned one of the three service delivery models targeting boys 10-14, replacing routine service delivery. Each model was implemented by a team operating in partnership with its county health department according to the Kenya national VMMC guidelines [20] and following WHO standards [21]. Each team included a clinical officer or nurse performing the VMMC surgery, an assistant, a VMMC counsellor and an infection prevention officer.
Standard national VMMC demand creation strategies were implemented. These included referrals from satisfied clients and peer mobilisers, distribution of education and communication materials in communities and schools, and interpersonal communication approaches to both clients and their parents, all focused on young adolescent clients. Data collection and outcomes Programmatic data was captured using paper-based data collection tools which were later transferred by a trained staff member into a spreadsheet using standard study forms and categories on a monthly basis, and then provided to study coordinators based in the counties. Expenditure data was directly collected by economic study team staff in collaboration with MoH and implementing partner staff retrospectively, reviewing their primary expenditure data together once per project year for the full period. Metrics were collected separately for every study cluster in each county. These, and their collection processes, were: • Primary performance outcome: Number of VMMCs achieved among 10-14-year-old residents of study clusters, as a percentage of the cluster's annual target. Client age was collected in routine intake data, and providers were trained to document self-reported client residence (inside vs. outside the study cluster) through chart flagging. This was based on where they reported spending the most nights in the past six months, excluding boarding school. These data, and VMMC numbers, were summarized and submitted through a monthly paper study form for each cluster. Annual targets for study clusters were calculated based on each cluster's 10-14-year-old male population projected to 2017 from the 2009 national census [22]. The goal was to achieve 90% coverage over a two-year period (essentially completing the scaleup phase in the model areas among the target population), from a baseline of 68% in Migori and 76% in Siaya obtained through expert consensus. Annual targets for all clusters ranged from 1277 to 3302. Routine program data from the baseline period was not stratified by client residence, and therefore the comparator when considering changes in overall performance from baseline is the total number of 10-14s circumcised in Year 1 (Y1). • Secondary performance outcome: Percentage of 10-14-year-old VMMC clients who were residents of the model area. This was collected to provide context, in the event of a cluster performing well numerically but without achieving its coverage targets because a large proportion of clients lived outside the cluster. • Primary expenditure outcome: Unit expenditure (UE) per VMMC. The numerator was total VMMC expenditure from all sources of support on all clients, because it was not feasible to attribute expenditures to specific age categories. The denominator was total MCs across all ages, to avoid an artificial UE decline from the increasing proportion of MCs done in 10-14s by Y1. Decreases noted in UE by Y1 were therefore expected to represent a combination of newfound efficiencies and existing inherent efficiencies from targeting adolescents. Activityspecific data were collected in both fine and broad categories, and sources of funds were attributed at baseline and for each cluster and activity. A financial approach was taken where actual expenditures were available; an economic approach was used for equipment already purchased or donated prior to the baseline period. 
• Secondary expenditure outcome: Percent of total expenditure in each broad recurrent spending category supplied by the county Ministry of Health. The fine categories used in initial data collection were collapsed by the local economic coauthors into prespecified broad categories when reporting MoH contributions to percent recurrent expenditure, to display more intuitively which areas the MoH was preferentially investing in. Categories were: human resources for routine services; human resources for technical assistance and quality assurance; commodities/consumables; facility operation, transport, waste management; and demand creation. Annual targets for rising Ministry contributions were set for each category, ranging from 10-80% in Y1 and all reaching 100% by year 5. MoH expenditure that was supported by the IP was collected at the IP level (classed as IP expenditure).

• Additional expenditure outcome: Though not prespecified as a major outcome, identification of major expenditure drivers and comparison across models was also of interest in understanding the impact of model type on financial aspects of sustainability.

• Key respondent perception outcome set: Mean Likert scale scores across all respondents on each question in a given questionnaire, covering perceptions of multiple aspects of program sustainability. Respondents were grouped into three levels, with a different self-administered anonymous key respondent survey given to each level after consent. These were: leadership level (national VMMC MoH staff, county-level health leadership, UN agency representatives on the national VMMC task force), project site level (VMMC site staff and managers, facility in-charges), and community level (a school administrator, a teacher, a religious leader, a local leader and five residents per ward). For leadership respondent recruitment, questionnaires were distributed to all staff of county health management teams within the HIV domain and all national MoH program managers. For site-level recruitment at all participating facilities, questionnaires were distributed to facility in-charges at health centers and medical superintendents at higher-level facilities. For community member recruitment, village elders, teachers and church leaders were convenience sampled, one per ward, based on willingness to participate, seniority and accessibility. Thus, except for community-level respondents, the study did not target specific numbers but rather sought to obtain a census of all eligible respondents. Surveys covered respondent perceptions of VMMC services in the study cluster across the appropriate subset of the four key PEPFAR domains of sustainability for each respondent group. Domains included multiple elements, and each element potentially contained multiple questions. Domains are defined and their contents summarized in Table 1; full questionnaires are in S2-S4 Appendices. The Civil Society Engagement subdomain, for example, includes questions about the degree of commitment of diverse community groups to the program, the proactive efforts of program leadership to communicate updates to community groups, and community engagement efforts in planning and evaluating the program. The questionnaires were distributed at study launch for the baseline period, and a year later for the Y1 period. Individual respondents could be the same individuals at both time points if they met the recruitment criteria at both, or could change between time points.
Respondents received questionnaires and instruction sheets from study coordinators, who collected the completed questionnaires after two working days. Non-responders were reminded three times, and continued non-response led to automatic replacement. Responses were along a Likert scale from 1 ("never" or "strongly disagree") to 5 ("always" or "strongly agree").

• Additional outcomes: To gather information on the predetermined policy and strategic metrics of sustainability (summarized in S5 Appendix) based on the PEPFAR SID, the evaluation team also reviewed relevant national and county-level VMMC documents including policies, strategic plans, scopes of work and others. Finally, changes in implementation approaches over time were reported through biweekly to monthly calls between study leadership and implementer leadership; and implementers, lead evaluators, and investigators together generated consensus on challenges, refinements and lessons learned through dedicated discussions.

Data management, storage and analysis

Completed data collection tools were verified by research coordinators and electronically transferred to the database by the evaluation data officer. Monthly routine database backups were automatically scheduled. Paper forms and records were kept under lock and key. A random sample of 10 records per model was verified by the research officer to ensure accuracy of the details captured. Qualitative data was entered into the REDCap software, which was programmed to include logic and consistency checks to minimize data entry errors. Summary statistics were also used to ensure no missing data; any missing data was resolved via calls with persons responsible for the initial reporting. Data was stored by Jhpiego on the Kenya National AIDS & STI Control Programme (NASCOP) encrypted secure database.

Y1 data analysis addressed:

• Performance: Monthly and monthly cumulative achievements among target-aged residents, as a percent of target, were tracked for each model area in Siaya and Migori. Additionally, though not a major performance outcome, mean total monthly VMMCs (total VMMCs performed in all clients during the project year, divided by the number of months included in that year) were compared between the baseline and Y1 years. This analysis was done to provide an intuitive check on the reasonability of performance, as a large absolute drop from baseline to Y1 would have required explanation even if performance were adequate against targets. The monthly means rather than annual totals were used to adjust for the different lengths of the two "years" compared, as there were 12 months in the baseline year and 10 in Y1. The comparison used a simple bivariate linear regression with generalized estimating equations with exchangeable correlation to account for clustering at the level of the facility. The p-value generated was for the t statistic testing the hypothesis that the coefficient for the project year is equal to 0, with a significance threshold of p < .05.

• Expenditure: A program-level approach was taken. Key descriptive analyses done on the model-area level included baseline and Y1 unit expenditure, and percent contribution by MoH to expenditure in total and stratified by major expenditure domain. Assumptions used in expenditure estimation in each county, and mapping to the categories displayed in the results section, are in S6 Appendix.
• Key respondent perceptions: For each model area and for all model areas combined, mean Likert scores for each subdomain (calculated as the mean of scores across all questions in the subdomain) were tested for change between baseline and Y1 using an unpaired t-test for indicators with a sample size over 30. For indicators with a sample size less than 30, the Wilcoxon rank test was used to test for differences. Descriptive analysis was also done to identify the individual questions with the five highest and lowest scores for each questionnaire at each time point.

• Challenges, refinements and lessons learned: These were distilled from weekly and one-off meeting notes and summarized by the primary investigator.

Ethics statement

This study was approved by the Johns Hopkins University School of Public Health Institutional Review Board and the Maseno University Ethics Review Committee. This project was reviewed in accordance with CDC human research protection procedures and was determined to be research, but CDC investigators did not interact with human subjects or have access to identifiable data or specimens for research purposes.

Performance outcomes

Targets and performance of the study models are shown in Table 2, covering the primary and secondary performance outcomes. Performance against targets was higher in Migori than in Siaya in the mixed and mobile models. In both counties, the mixed model outperformed the others against targets. Mean monthly MCs performed in 10-14-year-olds (not shown), compared between project year periods, increased from baseline in the static and mixed models, either significantly despite the shorter year (Siaya) or non-significantly (Migori), but decreased significantly in both counties' mobile models. Substantial variations in monthly performance over the Y1 period were also noted. Fig 1 shows the monthly number of MCs in the target population performed by model in Migori and Siaya, with key influencing events marked. The overall pattern was characterized in both areas by spikes in performance during campaigns in April, August and December, but also by gradually increasing base volume. Performance also dropped at the beginning of the PEPFAR year due to staff diversion from clinical work to planning activities by the implementing partners.

Expenditure outcomes

Total VMMC expenditure in Migori rose from approximately $906,000 at baseline to $1,009,011 in Y1 despite the shortened year (approximately $1.2 million if corrected for the shortened year). In Siaya it fell from $2.4 million to $2.1 million (approximately $2.5 million if also corrected). Total investment expenditure in Siaya rose from $11,800 at baseline to $32,000 in Y1, all by the PEPFAR-supported implementing partner. Total investment expenditure dropped in Migori from $53,000 at baseline to none in Y1.

Unit expenditures for each model and county in the baseline and Y1 periods are shown in Table 3, along with Y1 volumes, covering the primary expenditure outcome. The key columns for comparison between baseline and Y1 are shaded. Baseline unit expenditures were essentially the same between counties. Unit expenditures dropped from baseline to Y1 in the mixed model (both counties) and the Migori static model; they rose substantially in the mobile model and the Siaya static model. Comparison with achieved volumes shows that models with higher volumes had lower unit expenditures, except that the Migori static and Siaya mixed models had low expenditures with low volumes.
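The unit-expenditure figures and "corrected for the shortened year" amounts quoted in this section follow from simple arithmetic that can be made explicit. Below is a minimal sketch, not the study's actual analysis code: the function names are ours, and the correction assumes a constant monthly spend rate over the 10-month Y1, as the text does.

```python
def unit_expenditure(total_expenditure, total_mcs):
    """UE per VMMC: all-source VMMC expenditure over MCs across all ages."""
    return total_expenditure / total_mcs

def annualize(amount, months_observed, months_per_year=12):
    """Scale a shortened reporting period (e.g. the 10-month Y1) to 12 months,
    assuming a constant monthly rate."""
    return amount * months_per_year / months_observed

# Check against a figure reported above: the Migori Y1 total of $1,009,011
# over a 10-month year corrects to roughly $1.2 million.
print(round(annualize(1_009_011, 10)))  # -> 1210813
```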
Primary expenditure drivers in Migori were personnel (both years), travel (both years), commodities (baseline) and administrative costs (Y1). Primary expenditure drivers in Siaya were personnel, travel and commodities (both years). Primary Y1 expenditure drivers in the model areas with substantial expenditure increase were personnel (Siaya static) and commodities, personnel and travel (Migori mobile). Primary Y1 expenditure drivers in areas with substantial decrease were personnel and travel (Migori mixed) and personnel and commodities (Siaya mixed). Indirect personnel were a more important cost driver in Migori than in Siaya; in Migori these costs were paid mostly to implementing partner HQ staff, and increased in Y1 as additional HQ support was added, whereas in Siaya they were paid mostly to county health management team staff. Full expenditure category breakdowns are in Fig 2.

Percentages of total expenditure covered by the MoH for each county, by category (the secondary expenditure outcomes for the study), are shown in Table 4. MoH contributions dropped from baseline to Y1 in most categories and in total, except for small increases in facility expenditures (Migori) and assumption of responsibility for demand creation (Siaya). Total MoH expenditures also dropped. In Siaya they dropped from $147,700 (14.77 million KSh) at baseline to $95,000 (9.5 million KSh); at the same spend rate over a full 12 months, corrected total Y1 expenditure would have been $114,000. In Migori they dropped from $116,000 (11.6 million KSh) at baseline to $62,300 (6.23 million KSh); corrected 12-month Y1 expenditure would have been $66,600. The shortened Y1 period should not affect MoH percent contributions.

Key respondent perception outcome

Mean key informant response scores for baseline and Y1 are displayed in Table 5, covering the key respondent perception outcome. Mean baseline and Y1 scores were similar across models (maximum differences of 0.64 and 0.61 respectively). Higher scores signify stronger mean respondent agreement with a set of positive statements across aspects of the domain or subdomain. On the national level, the greatest strengths (highest mean individual question scores) for Y1 were in stakeholder understanding of the project goals, communication between the project team and county health teams, routine convening of all stakeholders by county health teams, county health team commitment to maintaining VMMC coverage, and alignment of staffing goals with VMMC targets. The greatest weaknesses were lack of public expenditure reporting, lack of sufficient resource allocation for VMMC in county health budgets, and lack of county health team engagement with civil society for VMMC program planning and feedback. Also notably, this was the only level on which scores decreased significantly in Y1, with respondent assessments of the "planning and coordination" and "civil society engagement" subdomains dropping substantially. On the site level, for all models, common greatest strengths for Y1 were in reliable regular provision of VMMC services, regular review of service quality against standards, conduct of VMMC without interfering with other health facility services, community acceptability of services, and staff training and certification in service provision and quality improvement. Key weaknesses included lack of diverse community groups committed to the success of VMMC sustainability (mixed and static models), and lack of appropriate resource allocation in county budgets (mobile and static models).
There was also a perceived lack of task-shifting to the lowest permitted cadre (static model), but this task-shifting was in fact in place. Nearly all subdomain scores increased significantly in Y1, and none decreased significantly. On the community level, for all models, strengths for Y1 were in ease of timely access to services, communication about services to the community, and incorporation of community feedback. Weaknesses were in community representation in meetings with the MoH and involvement in the service planning stage. A notable weakness identified at baseline but not in Y1 was in keeping service provision nondisruptive to other community services (e.g., school days). Both subdomain scores increased significantly in Y1. Detailed key informant responses are available in S7 Appendix.

Additional outcomes: The documentation review found multiple enablers already in place from baseline in both counties. These included, among others:

• documentation of annual targets and division of responsibilities between county and partner staff
• an enabling environment free of policies discriminating against potential clients outside the typical target population
• full task-shifting already in place: of surgical MC from doctors to nurses, and of infection prevention from nurses to high school graduates with 6 months of training in health and infection prevention
• institutionalized continuous quality improvement practices

Challenges, refinements and lessons learned

The most important early challenge, across multiple model areas, was aligning demand creation with the target population. Mobilization is conducted not only by implementing partner staff, but also by trained community health volunteers (CHVs) employed by the MoH to perform outreach on multiple topics. The CHVs initially continued their prior practices of mobilizing for VMMC across model area boundaries, including in traditionally circumcising communities. When this was noted after the first month, expansion trainings were held and coupled with field supervisory follow-up visits, to align mobilization work with the study models. This also became the key refinement identified by partners for continued use after the study period: ensuring that mobilizers prioritized adolescent clients from non-circumcising communities to maximize the contribution to increased coverage. Implementers also observed that all models required continued investment in demand creation work. This was true even in the static model, where it had been initially hoped that, since services there were at consistent locations and times, this predictability and the intrinsically high demand for VMMC in 10-14s might make continued substantial investment in demand creation unnecessary.

The models' age focus made close coordination with schools and local Ministry of Education (MoE) leadership crucial. On the leadership level, VMMC program staff engaged with MoE county and subcounty management teams, resulting in school heads adding VMMC promotion messages into existing packages of school-based preventive health talks. On the school level, a pool of interested teachers emerged who committed to be trained and mentored as VMMC champions and went on to lead regular VMMC health talks in their schools. However, stakeholders were unable to codify coordination in a formal Memorandum of Understanding between Ministries, which proved to be a complex process to navigate, requiring national-level engagement.
An ongoing challenge across all models was staff turnover, necessitating frequent trainings and sometimes creating temporary staffing gaps. Similarly, periodic local circumcision campaigns required temporary staff reassignment between or away from model areas, though eligible VMMCs generated by these campaigns were counted toward model achievements. In addition, goal staffing levels initially calculated as sufficient for maintenance-phase services were not always enough to achieve local targets. Model area programs had to actively reassess staffing levels and adjust them several times during the study period.

However, the staffing realm also underwent a crucial sustainability development during Y1. Both county MoHs deliberately made substantial shifts toward ownership of the human resources and management aspects of VMMC service delivery, working with the implementing partners to plan and execute these shifts. The Migori county MoH spent most of Y1 planning this transition, but in Siaya, service delivery staff began the move from IPs onto MoH payrolls and into MoH oversight structures, reporting to facility in-charges and discharging some additional health responsibilities along with their VMMC work. Partners there continued to provide the salary funding through a sub-agreement with the MoH, but salaries and benefits were aligned with MoH standards. In both counties, county/sub-county health management teams continued working with IP supervision teams to conduct regular supportive supervision activities. Another crucial advance emerging from the development of these transition plans was that both counties added VMMC to their annual operations plans in 2018. However, these plans are not supported by dedicated budgets, and the lack of domestic financial support for the program was seen by all as the primary obstacle to sustainability. The county health directors also began work with the county health assemblies' Committees for Health to legislate and allocate funds for VMMC in the county HIV budgets, to advance domestic financial commitment to the program through platforms such as the next 5-year County Integrated Development Plan. Another avenue identified for increasing domestic contributions was incorporating costs currently borne by partners for types of support services already funded by MoHs for other health care programs - e.g., autoclaving and waste management - into MoH budgets. A third was incorporating VMMC service delivery into provider (especially nurse) job descriptions with associated benchmarks, implicitly increasing domestic human resources investment.

Implementers and MoHs also identified and executed additional opportunities for more efficient service delivery. These included implementing a national policy replacing universal HIV testing of 10-14-year-old MC clients with risk-based testing; shifting surges in service delivery to times of year when 10-14s were free from school; and the national MoH restocking sites with reusable VMMC instrument kits where its mapping had identified stockouts as an obstacle to maintaining services.

Finally, some challenges were model- or area-specific. For the static models these included refitting three facilities previously used only as outreach sites with the dedicated space and equipment to support daily VMMC services, in their new role as static sites. Service volumes were constrained until these renovations were completed, after three months.
For mobile models, a key challenge was long travel times from base facilities to some service sites, causing service delivery to begin late in the day, with clients waiting longer than expected. Finally, in some model areas, medical resources were temporarily diverted due to a cholera outbreak, and in others facility availability was interrupted during doctors' and nurses' strikes.

Discussion

This manuscript compares the baseline and first implementation year for a nonrandomized implementation science study of three VMMC service delivery models intended to progress toward sustainability by multiple metrics, in a country with a mature VMMC program. Key findings included the adequacy and feasibility of all models for a startup year; the standout performance of the mixed model in both achievements and unit expenditure; notable progress toward MoH leadership, particularly in human resource areas; and the lack of progress so far in the crucial area of financial responsibility. Service sustainability is multidimensional and requires multiple types of quantitative and qualitative indicators to assess. This evaluation collected a uniquely comprehensive set of metrics in an initial effort to achieve this.

With respect to performance against targets, we consider all models to have performed adequately for a startup year, after correction to a 12-month period, including the lowest performer, the Migori static model at 54%. In our view, none should be ruled out as a potential sustainable approach for some regions of Kenya. The general upward monthly performance trends across study clusters and models were reassuring. Absolute numbers achieved among 10-14-year-olds rose from baseline to Y1 in the static and mixed clusters and dropped in the mobile areas, but the calculated targets for those clusters were substantially below their baseline year achievements, and the models performed adequately against those targets. The coming addition of Y2 data will provide further clarity.

The complexity of the relationships between prior achievements, overall model achievements and performance against targets results largely from an important clientele feature noted in Y1 that is likely to have also been true for many years previous. Nearly half of clients served, even in the 10-14 age band, were traveling in from neighboring areas, typically traditionally circumcising areas. All areas except the Migori static area would have overachieved their targets had their 10-14 achievements all been in cluster residents. In other programs bordering such areas, the extent to which this phenomenon is a problem depends on multiple factors. These include the HIV incidence disparity between the circumcising and non-circumcising areas, whether the VMMC resource envelope is sufficient to cover both simultaneously or requires prioritization, and whether local nonmedical MC practices are considered effective in reducing HIV risk, depending on how much of the foreskin is removed. In Kenya, HIV prevalence in the pre-ART, pre-VMMC era was much lower in circumcising than in non-circumcising areas [23], a finding largely attributed to effective nonmedical circumcision practices.

The most notable expenditure finding was the low UE of the mixed model, a promising sign for its sustainability. Though all outcomes are difficult to compare across models because of their purposive geographic area selection, the mixed model areas, with their combination of rural and urban characteristics, would not have been expected a priori to be cheaper settings to operate in.
In addition, the value difference is substantial, and is seen with both the high volume in Migori and the lower volume in Siaya. Mobile models are conventionally considered to have high total costs in VMMC programming due to multiple travel-associated costs, and our findings build on this by confirming high UE as well, at least at the modest volume achieved. By the same reasoning, static models are typically considered cheapest, but our findings are mixed. Siaya's static model notably had comparable unit expenditures to its mobile model, and higher unit expenditures than Migori's despite having higher volumes. An important suggestion arising from these findings, contrary to conventional expectations, is that static models (often also meant when the term 'integrated' is used) should not be assumed to be inherently more affordable or sustainable than others.

Unit expenditures dropped from baseline in both counties' mixed models and the Migori static model. The drops are probably attributable in part to the maturation of the partners' programs in both counties from their startup period in the baseline year, but their concentration in the mixed models is informative. A 2016 analysis performed by another Kenya implementing partner on its programmatic VMMC data found an average unit expenditure of $44.21 [24], comparable to the mixed model Y1 findings.

The other key expenditure finding, and the most concerning finding from the study overall, was the decrease in Y1 in MoH contribution to recurrent expenditures, from an already low baseline. The primary MoH contribution category in both counties, human resources for technical assistance and quality assurance, was mostly attributed to leadership staff time spent participating in study planning and oversight meetings. Its decrease in Y1 reflected lower time demands as compared to the intensive planning phase at baseline, but the lack of increase reflects a lack of committed budgeting. This remains the biggest challenge to program sustainability. The governance processes described above and now underway represent the best pathway at present to obtaining domestic financing commitments. These may include taking on waste management and autoclaving costs, and formally committing additional health care worker time to VMMC via job descriptions. Potential other approaches to funding the program, like cost-sharing with clients, have not been explored to date.

A key missing parameter in developing sustainable VMMC financing is an understanding of how long the program will remain necessary or efficient if HIV incidence continues to drop in implementing countries. The extent to which ongoing antiretroviral scaleup alone can suppress transmission under current test-and-treat approaches is controversial and depends in part on how much transmission results from recent infections, which are unlikely to be immediately treated. A final consensus set of criteria for achieving HIV epidemic control has not been defined [25], but recent large-scale intensive combination prevention studies have not achieved annual general population HIV incidences below 0.59% [26,27]. Thus, the future duration of need for VMMC in the general population is currently unclear, making it difficult to form expectations about whether it will outlast donor financing.

Key informant scores on other domains of program sustainability were generally acceptable-to-high and improved from baseline to Y1.
Findings here are chiefly useful for reassurance that leadership, implementing staff and community members do not identify critical unexpected gaps. The frequent identification of various aspects of financing and resource distribution is unsurprising given our expenditure findings. Improving civil society engagement was the other dominant theme and deserves concrete inclusion in county-level VMMC planning. In recent years it has also become a major area of focus for PEPFAR, as noted in its annual guidance publications, for similar reasons [28]. The final notable finding was worsening scores from baseline in this and the 'planning and coordination' subdomain among national leadership respondents. A possible explanation is the observed diversion of multiple county program leadership staff away from VMMC during Y1, in many cases toward other technical areas of the county HIV response.

The lessons learned discussed above include both partner operational refinements and concrete national-level policy changes. Some offer substantial hope that it may be possible to lower unit expenditures further (e.g. by drastically decreasing testing in young adolescents).

Several limitations to this study are notable. The programmatic setting made confounding by client crossover and unmeasured model area characteristics unavoidable. Target-setting was subject to multiple parameter uncertainties, most importantly baseline MC coverage in young adolescents. No recent survey data with sufficient power in this age group was available; the most recent representative data was the 2014 Demographic and Health Survey, which had found total circumcision coverage in the 15-49-year age band of 56% in Siaya and 73% in Migori [29]. Targets for the first two years were designed to complete the scaleup phase by reaching 90% coverage in the target population in each model area, but given the uncertainties in some parameters used to calculate them, some may have represented larger proportions of the true unmet need than others, making percent achievements not entirely comparable across models. A forthcoming regional MC coverage survey will provide clarity. In particular, the striking "overperformance" of the mixed model in Migori - where the number of circumcised clients exceeded not only the target but the underlying target population size estimate - is explicable only by underestimation of the target population size, large-scale incorrect designation of nonresident clients as residents, or possibly large-scale VMMC-seeking by underaged clients reporting their age as 10 years. As population sizes were projected forward from decade-old census data, and the 2019 Kenyan census did not provide sufficient geographic granularity to correct them, population estimation is a likely source of error. The authors most familiar with the study setting believe that overestimation of the baseline MC coverage among 10-14-year-olds in the Migori mixed area also contributed. But we are unable to confidently determine the relative contributions of these explanations to this outlier finding or, outside of chance, why only this model area's results were so discrepant. The unit expenditure analysis may underestimate MoH total contributions by excluding prior investments in facilities, though this is likely mitigated by the fact that donors also invested substantially in facilities in the early years of the VMMC program.
Conversely, while we aimed to capture only the expenditure incurred for project implementation, implementing partners and the MoH did have some expenditures - primarily in the human resource areas - for the evaluation itself, possibly causing overestimation of human resource expenditures, and particularly of the MoH contribution. Much of the MoH leadership time was spent on this purpose. However, most evaluation expenditures were made by the evaluating partner, and thus excluded. The key respondent surveys were purpose-built and not externally validated, though they were derived from widely used evaluation materials. Questions in any subdomain may be more relevant and comprehensive for certain settings than others. Another limitation is that implementer observations and refinements, though they represent valuable experience that was a key goal of this study, are necessarily subjective and need corroboration from experience elsewhere. Most importantly, findings here can only very tentatively be generalized elsewhere in Kenya, let alone to other VMMC countries.

A substantial change to VMMC programming which fell after the data collection period also impacts the use and interpretation of our findings. PEPFAR discontinued most support for VMMC in boys aged 10-14 in 2020 to ensure client safety based on emerging data, and PEPFAR-supported programs in Kenya now serve only clients aged 15 years and older. Programs that continue serving younger males will therefore find sustainability taking on greater urgency at the same time that close safety monitoring becomes even more clearly crucial. Programs that adopt the 15-year minimum age will eventually, if successful in scaling up, find that most of the uncircumcised population is boys rising into that age range, e.g. the 15-19-year age band, which would then be expected to become the key clientele group for sustainable long-term service. Our experience can perhaps be applied most intuitively to that same age band, particularly those in school who are reachable through campaign-style approaches. Areas with lower male secondary school attendance may need a wider mix of strategies to reach these young men where they are. The experience from this project suggests that it may be possible to bring down UE substantially compared to current rates when focusing on adolescents. These results also raise questions about whether fully static models are an ideal sustainable approach, as often assumed, and have clarified for participating stakeholders some key remaining challenges in making VMMC sustainable in Kenya.
Going Down Memory Lane in the Application of Ajzen's Theory of Planned Behaviour Model to Measure Entrepreneurial Intention: An SEM-PLS Approach

Undoubtedly, technical education is the backbone of every nation's growth and development. Understanding and predicting business creation initiatives demand empirical studies using theory-oriented models that appropriately mirror the multi-faceted perception-based processes underlying entrepreneurial intention and behaviour. Drawing on a model adapted from a study by Linan and Chen (2009), and based on the Theory of Planned Behaviour (TPB) by Ajzen, this article empirically investigates the influence of Perceived Behavioural Control, Subjective Norm and Attitude towards Entrepreneurship on Entrepreneurial Intention using a Structural Equation Modelling (SEM) - Smart Partial Least Squares (PLS) approach. In addition, several hypotheses (demographic-oriented variables) in relation to the TPB are investigated. Data were collected on 574 students from a public technical university in Ghana. The findings suggest that the TPB is an important tool for predicting entrepreneurial intentions. Thus, the findings support the TPB for EI in Ghana. Two motivational factors (Attitude towards Entrepreneurship and Perceived Behavioural Control) related to EI, but SN showed a non-significant association with EI. This study also found SN positively affecting attitude toward entrepreneurship and perceived behavioural control. However, only one (the PSE-SN relationship) of the demographic-based hypotheses was significant. This study, however, cautions against the generalizability of the findings, as the sample comprises students from a single institution. One of the theoretical implications of our study relates to evidence of the consistency of the theory of planned behavior in explaining entrepreneurial intention in the Ghanaian context. Future studies could replicate this research by sampling more technical universities in Ghana and other settings.
INTRODUCTION

Education is arguably an indispensable component of the knowledge-driven society (Schleicher, 2003). Quality education is also quintessential for technological advancement, creativity and innovation, and for the economic growth and development of any country. Ghana has reached a stage in its development where creativity and innovation have become imperative in propelling its industrialisation agenda for an accelerated economic turnaround. For instance, target 4 of Sustainable Development Goal (SDG) 4 seeks to substantially increase the number of youth and adults who have relevant skills, including technical and vocational skills, for employment, decent jobs and entrepreneurship. Technical universities are expected to play an important role in the support of knowledge creation and knowledge transfer via science and technology, which is critical for the development and wellbeing of any country. This is particularly exemplified by the conversion of public polytechnics into technical universities in Ghana. These technical universities are expected to provide high-level technical skills training in the area of Technical and Vocational Education and Training (TVET) as well as provide opportunities for technical and vocational students from second cycle institutions. According to Lewin (1997), there are five justifications for governments' fixation on and investment in TVET: increasing the importance of schooling by imparting individuals with the skills and knowledge required for making the individual an integral member of the community; curtailing the level of unemployment through the provision of employable skills to the youth and those who cannot excel academically; increasing economic development, because it enhances the quality and skill level of the working population; reducing poverty by virtue of accessibility to higher-income occupations; and changing the attitude of individuals to opt for occupations that have prospects for the future.

TVET, as an integral part of the technical university concept, would provide employment avenues for the teeming youth who are seeking non-existent white-collar jobs in the country after graduation. In the quest to address the challenges of unemployment, industrialisation and labour utilisation, policy makers in Ghana perceive technical education as the policy instrument to promote social progress using entrepreneurship education. Furthermore, technical universities in Ghana are mandated to excel at both basic and applied research by positioning themselves strategically in the delivery of services like professional training, marketing of new knowledge, consultancy, career guidance and counselling, etc. An important theme that runs through the vision and mission of the technical universities in Ghana is entrepreneurship. The immense contribution of entrepreneurship to the fortune of global economies in the area of employment creation opportunities and economic development has necessitated the promotion of entrepreneurship as a topmost agenda for most nations across continents, especially Ghana.
However, there seem to be limited studies on the promotion of entrepreneurship in developing countries, since the attention of previous research on the promotion of entrepreneurship has been on developed countries (Bruton et al., 2008). According to Nabi and Linan (2013), little is known about the factors affecting entrepreneurial intention in developing countries. The knowledge and information about entrepreneurship in the advanced countries may not necessarily be applicable in developing countries, due perhaps to diversity in cultural tendencies and other dynamics. This paper seeks to unravel the factors affecting the entrepreneurial intentions of students and offer some valuable insight into aspects of the technical education curriculum that empower students to be entrepreneurially oriented. Over the years, policy makers and researchers have explored the factors affecting entrepreneurial intentions, given its immense socio-economic importance (Carree and Thurik, 2006). The tremendous significance of entrepreneurship in any nation's development probably accounts for why the proponents of the technical education concept situate entrepreneurial education as a focal point in the curriculum. Despite the interest in entrepreneurial intentions, there is scant evidence about entrepreneurial intentions in different entrepreneurship contexts, especially in developing countries like Ghana. Individuals with the intention to pursue a business are highly likely to carry it out (Ajzen, 1991; Fishbein and Ajzen, 1975), and it is worth emphasising that examining entrepreneurial intention is an important strategy towards studying actual entrepreneurial behaviour. The most prevalently used theoretical framework in the area of entrepreneurial intention research is the Theory of Planned Behaviour (TPB), which conceptualises the strength of intention as an immediate antecedent of behaviour (Ajzen, 1991; 2002; 2019). Entrepreneurship education may nurture a student's attitudes and intentions, as well as the establishment of a new firm (Linan, 2008). According to the Global Entrepreneurship Monitor (2016), people who study entrepreneurship in school are more likely to be entrepreneurs compared to those without entrepreneurial knowledge.

The data obtained from 574 respondents are applied to test the robustness of Ajzen's (1991) TPB, using structural equation techniques to ascertain the existence of structural relationships. Prior studies on entrepreneurial intentions have used linear regression models (Chandler and Lyon, 2001) despite the limitation of biased results. Thus, this study will contribute to the illumination of a specific pattern of relationships among the intention antecedents in a developing country like Ghana, where there seems to be a paucity of research on the theory of planned behavior. To our knowledge, this is the first study in which the robustness of the TPB is being tested using a technical university sample in the Ghanaian context. The main objective of this paper is to test and apply the TPB (Ajzen, 1991) to examine entrepreneurial intention among Sunyani Technical University students. This will contextualize the contribution of the TPB and its applicability to the technical university system. Furthermore, an application of the TPB will help in a comparison with prior studies, the majority of which have taken place in developed countries. The findings of this study will go a long way in evaluating the technical university concept and its implications for Ghana's educational system.
The structure of this article is as follows. After this introduction, we present the research model and hypotheses. Then we present the research methodology, data analysis and results, followed by the discussion and conclusions. We conclude with the limitations of the study, theoretical and practical implications, and directions for future research.

Sunyani Technical University in Context

Following Perez-Esparrells and Orduna-Malea (2018), we consider Technical Universities to be all those universities that contain the words "technical," "technology" or "polytechnic" in their official institutional names. In the Ghanaian context, such institutions include those that focus on vocational training, engineering, business and other related courses. The mission of Sunyani Technical University is, "a public institution of higher learning that is committed to the provision of career-focused education in engineering, science and technology, technical and vocational, applied arts and related disciplines with hands-on experience and entrepreneurial development to meet the higher and middle-level manpower needs of the country" (Sunyani Technical University 5-year Strategic Plan).

The vision of STU is, "to become a top-notch Technical University for the provision of career-focused, practically-oriented and entrepreneurially-inclined higher and middle level manpower training for the socio-economic development of the Brong Ahafo region and Ghana as a whole." The vision and mission statements of the university show the relevance of entrepreneurship as the focal point of the institution. In fact, Act 922 requires all Technical Universities in Ghana to integrate the entrepreneurship curriculum.

THEORETICAL FRAMEWORK AND RESEARCH HYPOTHESES

Ajzen (2019) defines intention as "a person's readiness to perform a given behavior." Entrepreneurial intention can be defined as conscious awareness and conviction by an individual to establish a new business venture and plan to do so in the future (e.g. Bird, 1988; Thompson, 2009). The route to starting a new firm may be regarded as voluntary, with conscious intentionality. Arguably, intention has been perceived as the single most powerful predictor of entrepreneurial behavior (Autio et al., 2001; Krueger et al., 2000), and also as an important dependent variable in its own right (Thompson, 2009).

According to the TPB, entrepreneurial intention indicates the effort that the person will make to discharge that entrepreneurial behavior. The TPB depicts three motivational factors influencing behavior (Ajzen, 1991; Linan, 2004):

• Attitude toward start-up (personal attitude) refers to the extent to which one holds a positive or negative personal valuation about being an entrepreneur (Ajzen, 2001; Autio et al., 2001; Kolvereid, 1996b). Generally, the more favorable the attitude towards a behavior, the greater the intention to actualize that behavior.

• Subjective norms (SNs) refer to the perceived social pressure to carry out or not to carry out entrepreneurial behaviours: thus, the perception that "reference people" would approve of the decision to become an entrepreneur or not (Ajzen, 2001). SNs examine the sum of individuals' perceptions about how important people in their lives think about their engagement in a particular behavior (e.g.
starting an entrepreneurial venture). It has been found to be the weakest link of entrepreneurial intention in some studies (Almobaireek and Manolova, 2012; Krueger et al., 2000). However, a couple of other studies have professed that subjective norms influence entrepreneurial intention (Iakovleva et al., 2011; Kautonen et al., 2013; Siu and Lo, 2011). Other empirical studies have found support for SN positively affecting the antecedents of entrepreneurship intentions: attitude toward entrepreneurial behavior and perceived behavioural control (Linan and Santos, 2007; Linan et al., 2011a; 2011b; Santos et al., 2014). Consistent with studies by Linan (2004), Linan and Chen (2009) and Linan et al. (2011), the probability of indirect effects of subjective norms on entrepreneurial intention is analysed in this paper, considering the controversy on the relationship. In this sense, there may be reasons to consider the relation SN has to both PA and PBC. Figure 1 exemplifies this notion.

• Perceived behavioral control (PBC) is defined as the perception of the ease or difficulty of becoming an entrepreneur. In conceptual terms, there is no difference between perceived behavioural control and self-efficacy, but operationally, PBC and SE are normally assessed differently. Both refer to people's beliefs that they are capable of performing a given behavior (Ajzen, 2019).

Prior studies have empirically applied the TPB to students' Entrepreneurial Intentions and confirmed that Attitude towards Entrepreneurship, Subjective Norm and Perceived Behavioural Control all play significant roles (Iakovleva et al., 2011; Karimi et al., 2014; Krueger et al., 2000; Linan and Chen, 2009).

Of the three motivational antecedents of entrepreneurial intention in the model (Figure 1), ATE and PBC have been shown to relate most strongly not only to EI (e.g. Karimi et al., 2014; Linan and Chen, 2009) but also to both personality factors (Fini et al., 2012; Nabi and Linan, 2013; Obschonka et al., 2010; Zhao et al., 2005) and contextual factors (Fini et al., 2012; Goethner et al., 2012). Previous studies on entrepreneurship (Fini et al., 2012; Goethner et al., 2012; Nabi and Linan, 2013) perceive subjective norms as less relevant than ATE and PBC for entrepreneurial intention, because entrepreneurs can generally be characterized as more inward- rather than outward-directed, and thus less oriented towards social norms than non-entrepreneurs (Goethner et al., 2012). The TPB has been applied successfully in a range of entrepreneurial intention studies (Vozikis, 1994; Gird and Bagaim, 2008; Lee and Wong, 2004; Malebana, 2014). Others have incorporated into the original theoretical TPB framework some demographic variables which are likely to have a given effect on intention, such as family background (e.g. parents), gender, past business, entrepreneurship and social experiences, and entrepreneurship training and education (Davidsson, 1995; Fayolle and Gailly, 2015; Guerrero et al., 2008; Kolvereid, 1996b; Krueger et al., 2000; Ozyilmaz, 2011; Tkachev and Kolvereid, 1999). These variables were found to indirectly affect intentions through their effect on ATB, SN and PBC (Kolvereid, 1996b; Solesvik, 2013; Tkachev and Kolvereid, 1999). ATB, SN and PBC serve as mediating variables; hence information on them could be used to better assess the impact of demographic characteristics on entrepreneurial intention (Gird and Bagaim, 2008; Krueger et al., 2000; Tkachev and Kolvereid, 1999). Figure 1 depicts the model we will be using in our study, which is similar to the TPB by Ajzen (2019) as applied by Autio et al. (2001), Linan and Chen (2009), Fayolle et al.
(2006b), Kolvereid and Isaksen (2006), and Veciana et al. (2005). By virtue of past research's inability to show a consistent impact of social norms on intentions, and for consistency with respect to our hypotheses, we expect that social norms will mediate the effects of demographic factors on entrepreneurial intentions. For instance, studies by Carsrud and Brannback (2011), Kolvereid and Isaksen (2006) and Conner and Armitage (1998) have all produced mixed results about social norms.

Entrepreneurship Education (EE) and Entrepreneurial Antecedents (EA)

Entrepreneurship education consists of "any pedagogical programme or process of education for entrepreneurial attitudes and skills" (Fayolle et al., 2006b, p. 702). According to Ajzen (2002), a greater knowledge of differential entrepreneurial aspects will definitely contribute to more realistic perceptions about entrepreneurial activity, thus indirectly influencing intentions. The role of entrepreneurship education in the generation of entrepreneurial behavior is gaining popularity in academic circles (Bae et al., 2014; Entrialgo and Iglesias, 2016; Fayolle and Gailly, 2015). In Ghana, the products of technical universities are expected to display a positive entrepreneurial propensity and disposition because of their exposure to entrepreneurial education. However, studies on EE and entrepreneurial antecedents have produced inconsistent results. For instance, Rauch and Hulsink (2015) and Souitaris et al. (2007) found a direct correlation between EE and attitudes and PBC, while studies conducted by Auken Van (2013) did not. The foregoing observations are the basis of the core and demographic hypotheses of this paper, as depicted in Table 1.

METHODS

This study examines the application of Ajzen's TPB model to measure entrepreneurial intention among STU students using an SEM-PLS approach.

Sample and Procedure

Participants in the study consisted of students from all four faculties of the Sunyani Technical University, namely the Faculty of Applied Science and Technology, the Faculty of Built Environment and Applied Art, the Faculty of Business and Management Studies and the Faculty of Engineering. University students constitute a common sampling frame in entrepreneurship research (Almobaireek and Manolova, 2012; Autio et al., 2001; Fayolle et al., 2006b; Kolvereid, 1996b; Krueger et al., 2000; Linan and Chen, 2009; Moriano et al., 2012; Tkachev and Kolvereid, 1999; Kautonen et al., 2013; Siu and Lo, 2011; Veciana et al., 2005). According to Linan and Chen (2009), a sample of university students offers the advantage of similar age and qualifications, which promotes homogeneity. Reynolds et al. (2002) established that university graduates between the ages of 25 and 34 show the highest propensity toward starting a business. Data were collected via a paper-and-pencil close-ended questionnaire which was designed to measure those variables that have an impact on entrepreneurial intentions. Questionnaires were administered in class, with prior permission from the lecturer. Students were briefed on the purpose of the study by a member of the research team and then asked to voluntarily fill in the questionnaire. All questionnaires were completed anonymously to ensure confidentiality.
Measures

The survey is structured as a series of close-ended questions in which varied blocks of statements are subjectively valued on a Likert-type scale concerning entrepreneurial intention. The 5-point Likert-type scale on which the items were built is: 5 = strongly agree, 4 = agree, 3 = neither agree nor disagree, 2 = disagree, 1 = strongly disagree. Four core variables were used in this direction: Attitude towards Entrepreneurship was measured with an adapted questionnaire by Kolvereid (1996). The Cronbach Alpha value for Attitude towards Entrepreneurship is 0.680, as depicted in Table 2, compared to Kolvereid's (1996) values which ranged from 0.68 to 0.90, though he used a 7-point Likert-type scale.

Partial Least Squares (PLS)

According to Hair et al. (2010), a two-step process can be applied for structural equation modelling (SEM):

• assessment of the proposed measurement model, and
• assessment of the structural model.

This process ensures the constructs' measures are valid and reliable before attempting to draw conclusions regarding any relationships among constructs (Barclay et al., 1995).

The theoretical framework presented in Figure 1 was tested using Partial Least Squares (PLS), a multivariate analysis technique for testing structural models (Barroso et al., 2010). PLS also allows assessment of the reliability and validity of the measures of theoretical constructs and estimation of the relationships among these constructs (Barclay et al., 1995). According to Wold (1985), PLS is basically intended for causal-predictive analysis, where the problems explored are complex and prior theoretical knowledge is scarce. Concerning our study, little is known about the application of the TPB in the technical university context, hence PLS is a suitable technique to use in this research. PLS is robust for small to moderate sample sizes (Cassel et al., 1999), which makes it appropriate for this study. Lee and Tsang (2001) posit that this technique has been applied in numerous studies developed recently in the entrepreneurship discipline.

According to Rigdon (1998), SEM has taken centre stage within the academic literature of many disciplines. Currently, SEM is the preferred methodology among researchers in assessing the relationships between constructs such as intention, attitude, satisfaction and role ambiguity. Since SEM is intended for working with multiple related equations simultaneously, it has a number of advantages over more familiar methods, and provides a general framework for linear modeling (Monecke and Leisch, 2012). According to the framework for this study, demographic variables will exert a direct influence on entrepreneurial antecedents. Therefore some variables are captured as explaining ATE, SN and PBC. The demographic variables (Gender, Participation in Entrepreneurial Education and Parental Self-Employment) are dichotomous in nature. The statistical analysis was conducted using SMART PLS 3.0. The initial model to be tested is presented in Figure 1.
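For readers who prefer equations to path diagrams, the structural relations implied by Figure 1 can be written compactly as below. This is our reading of the hypothesized model, assuming standardized latent scores; the beta and gamma symbols are our own labels for the path coefficients reported later, and "Demo" stands in for the dichotomous demographic variables.

```latex
\begin{aligned}
\mathrm{EI}  &= \beta_1\,\mathrm{ATE} + \beta_2\,\mathrm{PBC} + \beta_3\,\mathrm{SN} + \varepsilon_1 \\
\mathrm{ATE} &= \gamma_1\,\mathrm{SN} + \gamma_2\,\mathrm{Demo} + \varepsilon_2 \\
\mathrm{PBC} &= \gamma_3\,\mathrm{SN} + \gamma_4\,\mathrm{Demo} + \varepsilon_3 \\
\mathrm{SN}  &= \gamma_5\,\mathrm{Demo} + \varepsilon_4
\end{aligned}
```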
Measurement Model

Assessing the measurement model for the reflective indicators in PLS is based on individual item reliability, construct reliability, average variance extracted analysis and discriminant validity. Individual item reliability is considered adequate when an item has a factor loading greater than 0.707 on its respective construct. This means more shared variance between the construct and its measures than error variance. In this study, the reflective indicators have loadings above or very near 0.7 (Table 2: Outer Loadings).

Construct reliability was assessed using a measure of internal consistency: Composite Reliability (rc). We interpreted this value using the rules offered by Nunnally (1978), who suggests 0.7 as a benchmark for a "modest" reliability applicable in the initial stages of research. In this study, both the constructs and reflective dimensions are reliable (Table 2).

The Average Variance Extracted quantifies the amount of variance that a construct captures from its manifest indicators relative to the amount due to measurement error (Chin, 1998). The Average Variance Extracted value should be greater than 0.50, meaning that 50% or more of the variance of the indicators is accounted for. Consistent with this rule, the Average Variance Extracted measures for the common latent variables in this study are greater than 0.580 (Table 2).

In order to assess discriminant validity, the Average Variance Extracted should be greater than the variance shared between the construct and the other constructs in the model (i.e. the squared correlation between two constructs). For adequate discriminant validity, the diagonal elements should be significantly greater than the off-diagonal elements in the corresponding rows and columns (Barclay et al., 1995). This condition is met, as depicted in Table 3.

Explanation of Target Endogenous Variable Variance

The coefficient of determination R2 is 0.442 for the EI endogenous latent variable. This implies that the three latent variables (ATE, SN and PBC) jointly explain 44.2% of the variance in EI.

Coefficient of Determination (R2)

A major part of structural model evaluation is the assessment of the coefficient of determination (R2). In this study, EI is the main construct of interest. From the PLS path model estimation diagram (Figure 2), the overall R2 is found to be relatively good. Threshold values of 0.25, 0.5 and 0.7 are often used to describe a weak, moderate and strong coefficient of determination (Hair et al., 2013). In our case, this suggests that the three constructs ATE, SN and PBC can jointly explain 44.2% of the variance of the endogenous construct EI.
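The reliability and validity figures above follow directly from the standardized outer loadings. A minimal sketch of the computations is given below; the loading values are illustrative placeholders, not the study's actual Table 2 values.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean squared standardized loading."""
    lam = np.asarray(loadings)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """Composite reliability (rho_c) for a reflective construct:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1 - lam ** 2))

# Hypothetical loadings for one construct (placeholders, not Table 2 values)
ei_loadings = [0.82, 0.85, 0.79, 0.88]
print(round(ave(ei_loadings), 3), round(composite_reliability(ei_loadings), 3))

# Fornell-Larcker check: the square root of each construct's AVE should
# exceed its correlations with every other construct (the Table 3 comparison).
print(round(np.sqrt(ave(ei_loadings)), 3))
```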
Indicator Reliability

After examining the outer loadings for all latent variables, one indicator that formed the ATE construct was removed because its outer loading was smaller than the 0.4 threshold level (Hair et al., 2013). Meanwhile, five indicators (ATE10, ATE11, ATE12, PBC18 and PBC19) were found to have loadings between 0.4 and 0.7. A loading relevance test was therefore performed for these 5 indicators to check if they should be retained in the model. In a loading relevance test, problematic indicators should be deleted only if their removal from the PLS model leads to an increase of the AVE and Composite Reliability of their constructs over the 0.5 threshold. As the elimination of these 5 indicators would result in an increase of the AVE and composite reliability of their respective latent constructs, they were removed from the PLS model. The remaining indicators are retained because their outer loadings are all 0.7 or higher. An indicator's outer loading should be 0.708 or above, since that number squared (0.708² = 0.50) means the latent variable is able to explain at least 50% of each indicator's variance. The PLS algorithm was re-run, and the resulting path model estimation is presented in Figure 2. The outer loadings of the various constructs are shown in Table 2.

Internal Consistency Reliability

The Composite Reliability values for the constructs ATE, SN, PBC and EI are 0.862, 0.831, 0.814 and 0.878 respectively (Table 2), indicating high levels of internal consistency reliability (Nunnally and Bernstein, 1994). Prior research suggests that a threshold level of 0.60 or higher is required to demonstrate satisfactory composite reliability in an exploratory study (Bagozzi and Yi, 1988), but not exceeding the 0.95 level (Hair et al., 2013).

Convergent Validity

To check convergent validity, each latent variable's AVE is evaluated. The AVE values of the constructs ATE, SN, PBC and EI are 0.758, 0.622, 0.687 and 0.706 respectively (Table 2). It is found that all of the AVE values are greater than the acceptable threshold of 0.5, so convergent validity is confirmed.

Discriminant Validity

Fornell and Larcker (1981) suggest that the square root of the AVE of each latent variable can be used to establish discriminant validity, assuming this value is larger than the correlation values among the latent variables.

Evaluation of the Structural Model in PLS-SEM: Collinearity Assessment

In addition to checking the measurement model, the structural model has to be appropriately evaluated before drawing any conclusions. Collinearity is a potential issue in the structural model, and a variance inflation factor (VIF) value of 5 or above typically indicates such a problem (Hair et al., 2011). The collinearity assessment results are summarized in Tables 4 and 5. It can be observed that all VIF values are lower than 5, signifying that there is no indicative collinearity between each set of predictor variables.
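A brief sketch of how such a VIF check might look is given below, assuming latent-variable scores exported from the PLS software into a table. The column names and data here are hypothetical placeholders, not the study's actual scores.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical latent-variable scores for the predictors of EI
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.standard_normal((574, 3)),
                      columns=["ATE", "SN", "PBC"])

X = sm.add_constant(scores)  # include an intercept before computing VIFs
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # values below 5 suggest no problematic collinearity
```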
Checking Structural Path Significance in Bootstrapping
Using a two-tailed t-test with a significance level of 5%, a path coefficient is significant if its t-statistic is larger than 1.96. Among the core hypotheses, only the SN-EI linkage (t = 1.462) is not significant, as depicted in Table 6. Figure 3 shows the variance explained (R2) in the dependent constructs and the path coefficients (b) for the model. Consistent with Chin (1998), bootstrapping (500 re-samples) was used to generate standard errors and t-statistics. Bootstrapping is a non-parametric approach for estimating the accuracy of PLS estimates; it helps in assessing the statistical significance of the path coefficients. Four out of our five core hypotheses were supported, since they exceed the minimum level prescribed by a Student's t-distribution with one tail and n-1 (n = number of re-samples) degrees of freedom (Table 7). H3 was not supported, which shows that SN is not a significant antecedent of EI. The model appears to have appropriate predictive power for the dependent variable (Figure 3); EI attains a moderate explained variance (0.442). Overall, the model is generally supported by this analysis, with the only exception being the subjective norm-intention relationship. Therefore, hypotheses 1 and 2 are confirmed, whereas hypothesis 3 is not.
It has been argued earlier that the main influence of SN would be exerted through its effects on ATE and PBC. Hypotheses 4 and 5 were intended to test this possibility. They are fully supported, since both paths are significant. Demographic variables have relatively small effects on the antecedents of entrepreneurial intention and are, in general, small in magnitude; only the effect of PSE on SN is significant. The model explains 44.2% of the variance in entrepreneurial intention based on SN, ATE and PBC. This result is satisfactory, since most previous research using linear models typically explains less than 40%.
The effect size is assessed with the F-square statistic, indicated in Table 6 and Figure 4. Following Cohen (1988), an F-square value above 0.35 is considered a large effect size; values from 0.15 to 0.35 indicate a medium effect; values between 0.02 and 0.15 a small effect; and values below 0.02 no effect. From Figure 4 it can be observed that the PBC-EI relationship has the largest effect size (0.245). For the other relationships (SN-ATE, SN-PBC, PSE-SN and ATE-EI), the p-values were significant, but by the F-square rule their effects are not substantial ones. Regardless, the model successfully explains more than 40% of the variance of entrepreneurial intention.
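The bootstrapping logic and the effect-size formula above can be illustrated with the following minimal sketch. Note one deliberate simplification: ordinary least squares is used as a stand-in for PLS path estimation so that the example stays self-contained, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_estimate(X, y):
    """OLS slopes as a stand-in for PLS path coefficients, to keep the
    sketch self-contained (the study itself uses PLS path estimation)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

def bootstrap_t(X, y, n_resamples=500):
    """t = original estimate / bootstrap standard error (Chin, 1998)."""
    b0 = path_estimate(X, y)
    n = len(y)
    boots = [path_estimate(X[idx], y[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_resamples))]
    return b0 / np.std(boots, axis=0, ddof=1)

def f_square(r2_full, r2_reduced):
    """Cohen's effect size for omitting one predictor:
    f^2 = (R2_full - R2_reduced) / (1 - R2_full)."""
    return (r2_full - r2_reduced) / (1.0 - r2_full)

# Toy scores standing in for ATE, SN and PBC as predictors of EI
X = rng.normal(size=(300, 3))
y = 0.40 * X[:, 0] + 0.05 * X[:, 1] + 0.50 * X[:, 2] + rng.normal(size=300)
print(bootstrap_t(X, y))      # |t| > 1.96 -> significant at the 5% level
print(f_square(0.442, 0.30))  # 0.02/0.15/0.35 = small/medium/large
```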
DISCUSSION
Based on the findings presented in this article, support for the entrepreneurial intention model can be professed. The applicability of the TPB to entrepreneurship has received wide empirical support over the years (Kolvereid and Isaksen, 2006). Generally, the results are satisfactory, since most of the core hypotheses were confirmed and the explained variance is moderately high (44.2%) compared to prior studies. In particular, four of the five core-model relationships were significant. SN exerts its influence on both ATE and PBC (which in turn explain intention), but is not a significant predictor of entrepreneurial intention itself.
According to Wyrwich (2015), socialization in a family of entrepreneurs enhances the development of positive values and attitudes towards entrepreneurship. Role models (e.g., parents) can be an influential force on PBC regarding the start-up of a business, because wards can learn certain skills and behaviors essential for an entrepreneurial venture by observing their role models or parents (Zellweger et al., 2011), which has the propensity to increase PBC. According to Lazear (2005), individuals with the balanced set of skills provided by entrepreneurial education should have a higher likelihood of being self-employed.
The existence of direct relationships between demographic variables and entrepreneurial intention was tested, with all but one showing a non-significant relationship.
The results reveal that SN is not only insignificant but also the weakest predictor of entrepreneurial intention, which is consistent with previous studies (Autio et al., 2001; Linan and Chen, 2009; Krueger et al., 2000). However, the results confirm previous empirical studies that found support for SN positively affecting the antecedents of entrepreneurial intention: attitude toward entrepreneurial behavior and perceived behavioural control (Linan and Chen, 2009; Mathews and Moser, 1996; Scherer et al., 1991).
It is relevant to note that hypotheses 1, 2, 4 and 5 are confirmed; hence, the robustness of the model seems to be confirmed. In fact, the research findings have shown that SN exerts influence on both ATE and PBC, which is consistent with previous studies (Linan, 2004; Linan and Chen, 2009). The findings are thus in line with previous studies concerning the application of the TPB as an important model in predicting the entrepreneurial intentions of students (Engle et al., 2010; Gird and Bagraim, 2008; Iakovleva et al., 2011; Luthje and Franke, 2003; Souitaris et al., 2007). Previous testing of the TPB in entrepreneurial research suggested that ATE, SN and PBC typically explain 30-45% of the variance in intentions (Linan and Chen, 2009; Sutton, 1998). Contrary to most studies portraying ATE as the strongest predictor of EI (Linan and Chen, 2009; Nabi and Linan, 2013), our study found PBC to be the strongest predictor of EI, which is consistent with a study by Karimi et al. (2017). In fact, the meta-analysis by Schlaegel and Koenig (2014) found strong SN-EI and ATE-EI relationships. These differences may be attributed to cultural differences. Besides, turbulent economic conditions, the political climate and self-efficacy can impact entrepreneurial intention and behavior.
CONCLUSION
Taking the TPB into consideration, the three variables that make up this model were analysed: ATE, PBC and SN. The findings suggest that the TPB is an important tool for predicting entrepreneurial intentions. However, the subjective norm predictor was not upheld as an antecedent of entrepreneurial intention. The importance of support from family, friends and other social groups remains in a state of limbo with respect to entrepreneurial intention. However, the other two antecedents of entrepreneurial intention (ATE and PBC) were validated; hence, stakeholders in the technical universities should take the lead in preparing graduates for the changing needs of the job market by inculcating in them 21st-century skills such as TVET, critical and creative thinking, and problem-solving skills.
Limitations of the Study
One limitation of this study was the use of structural equations, which assume linearity of the relationships between latent variables (Hair et al., 1998).
Secondly, as the study was carried out in a particular geographical context (Ghana), we must be cautious in generalizing the results to other jurisdictions. Besides, the generalizability of the findings may be constrained by the sample, which comprises students from a single technical university. The potential for bias remains inasmuch as the sample respondents may have had an intrinsically high orientation towards entrepreneurship. Therefore, there is a need to examine a more diverse population of students.
Moreover, the study is cross-sectional; hence, we cannot claim causality in any of the relationships. For this reason, we have emphasized that the results support our hypotheses, but we cannot assert that the causal relationships are as proposed until a longitudinal study is carried out.
Furthermore, the focus of this study is on intention rather than on actual start-up decisions. A caveat is that there could be a gap between students' entrepreneurial intention and actual action. Entrepreneurial intention is only assessed at the current point in time; hence, we cannot be certain whether students' entrepreneurial intention will be altered in the future, bearing in mind that a successful formulation of dreams or intentions may not necessarily lead to successful implementation.
Theoretical and Practical Implications
In spite of its limitations, this paper demonstrates some theoretical and practical implications. The theoretical implications of our study relate to evidence of the consistency of the theory of planned behavior in explaining entrepreneurial intention in the Ghanaian context. The robustness of the entrepreneurial antecedents of the TPB was shown by the STU students. One of the reasons for the conversion of some polytechnics to technical universities is to promote entrepreneurship among the students, where unemployment is relatively high. Our knowledge of the antecedents of entrepreneurial intention and the factors affecting these antecedents is critical in the promotion of entrepreneurship among technical university students. In view of this, technical and vocational training programmes can be designed to change the mentality and attitudes of the students. There should be pragmatic measures to pull the students from the conventional career mentality towards an entrepreneurial orientation, for example by exposing them to entrepreneurial role models, a strong entrepreneurial culture and an enabling environment, among others. Another key proposition of the technical university concept is university-industry collaboration. In the current dispensation, educational institutions of higher learning cannot afford to operate in isolation; hence, they should collaborate with industry, community and government. Fortunately, this is one of the key ingredients in the technical university model, in which the students, lecturers and other stakeholders are expected to liaise with industry. In fact, prominent among the aims of the technical universities in Ghana is to remain focused on the application of Competency-Based Training for all teaching staff.
Directions for Future Research
Taking into consideration both the conclusions and the limitations of this paper, we propose the following lines of future research.
This paper used cross-sectional data, although the variables under consideration shape a process that develops over time and whose impacts are only felt in the long run. Future studies might undertake a longitudinal design that implements measures at different times to test the relationships in the framework. Furthermore, future research is needed to test the generalizability of the findings by covering more technical universities in Ghana and, if possible, beyond the boundaries of Ghana.
Figure 1: Entrepreneurial Intention Model.
Some studies reported a negative association, while Diaz-Casero et al. (2012) and do Paço et al. (2015) did not find any significant link.
Table 1: Hypotheses (Core and Demographic).
A range of variables was measured, including age, gender, participation in entrepreneurial education and parental self-employment, alongside the constructs PBC, SN, ATE and EI. Entrepreneurial intention was measured with three items based on the proposals of Autio et al. (2001), Linan and Chen (2009) and Miranda et al. (2017); Miranda et al. (2017) reported a Cronbach's alpha of 0.891, and the Cronbach's alpha for Entrepreneurial Intention in this study is 0.791, as depicted in Table 2. Autio et al. (2001) reported a Cronbach's alpha of 0.819 for their scale and a value of 0.70 for subjective norm; the Cronbach's alpha for Subjective Norm in this study is 0.698 (Table 2). PBC was measured with four items based on the proposals of Autio et al. (2001); the Cronbach's alpha for PBC in this study is 0.553 (Table 2).
A total of 574 respondents completed the questionnaire and were subjected to analysis, of which 78.2% were males and 21.8% were females. In terms of the educational background of respondents' parents, 25.3% indicated no formal education, 16.9% secondary school, 25.8% university or higher education, 15.3% below high school and 10.3% technical and vocational education. With reference to age, 51.2% fall in the 20-24 age category and 37.3% in the 25-29 category. In terms of year of study, 48.3% were in the 1st year, 26.3% in the 2nd year and 25.4% in the 3rd year. Regarding parental self-employment, 65.9% responded yes, whereas 34.1% indicated no. On the subject of whether they have plans to be self-employed in the foreseeable future after graduation, …
Table 3: Discriminant validity. Table 3 shows that discriminant validity is met for this study, because the square roots of the AVE of ATE, SN, PBC and EI are much larger than the corresponding latent variable correlations (LVC). It should be noted that the AVE values are shown on the diagonal and printed in bold; non-diagonal elements are the latent variable correlations.
Intestinal morphology and growth performance of the Indonesian indigenous crossbred chickens supplemented with formic acid and Saccharomyces cerevisiae
The study investigated the gut ecology and morphology of the Indonesian indigenous crossbred chickens (IICC) supplemented with the combination of formic acid and Saccharomyces cerevisiae. Two hundred day-old IICC were distributed to T0 (control diet), T1 (T0 + 0.2% formic acid), T2 (T0 + 0.3% S. cerevisiae) and T3 (T0 + 0.2% formic acid and 0.3% S. cerevisiae). Excreta was collected at week 8, while intestinal ecology and morphology were determined at week 9. In the duodenum, T3 chicks showed higher and wider (P<0.05) villi. The T2 and T3 chicks showed deeper (P<0.05) crypts than T0. The jejunal villi were higher (P<0.05) in T3 than in T0. The T3 chicks had deeper (P<0.05) crypts compared to the other groups. In the ileum, villi height was lowest (P<0.05) in T0. The crypts were deeper (P<0.05) in T3 than in the other groups. The crude protein digestibility coefficient was highest (P<0.05), while fecal crude protein was lowest (P<0.05), in T1 compared with the other groups. Compared to T0, the treated IICC showed higher (P<0.05) weight gain and feed intake, with T3 having the highest gain and intake, but the gain:feed ratio was the lowest (P>0.05). In conclusion, the inclusion of formic acid and S. cerevisiae in diets improved intestinal ecology and morphology. The IICC fed with formic acid and S. cerevisiae exhibited improved growth performance and nutrient digestibility.
INTRODUCTION
The demand for Indonesian indigenous crossbred chicken (IICC) meat has been increasing in recent times. The IICC is a crossbred between the male Indonesian indigenous chicken and the commercial laying hen. Compared to the indigenous chickens, the IICC need a shorter period to reach harvest (Herlina and Ibrahim, 2019). Also, they are more adaptive to the surrounding environment when compared with modern broiler strains (Darmawan et al., 2017). For this reason, the IICC has great potential to be developed as a meat producer and to drive economic development. Antibiotic growth promoters (AGP) had formerly been incorporated into the feed to promote the growth and health condition of the IICC (Sugiharto et al., 2018). However, such practice is now globally prohibited for food safety reasons. Owing to the essential role of AGP in IICC growth and health, it is urgent to find alternatives to AGP for sustainable IICC production.
The intestine is a part of the digestive organs that plays crucial roles in feed digestion and nutrient absorption. In general, the utilization of nutrients from feed can be maximized when intestinal function and health are maintained optimally. Morphology and microbial populations in the intestine may in general indicate intestinal function and health in poultry. The height and width of the villus, as well as the depth of the crypt, are common indicators that nutritionists often use to evaluate the performance of the intestine in digesting and absorbing dietary nutrients (Harimurti and Rahayu, 2009). Intestinal villi are the site of nutrient absorption, so higher and wider villi may allow for greater nutrient absorption by the chickens. Moreover, the height and width of the villi may also reflect the health of the intestine (Awad et al., 2009).
With regard to the microbial population, the balance of microorganisms in the intestine plays a crucial role in preserving intestinal morphology and health. Indeed, increased counts of good or beneficial microbes and decreased pathogenic bacteria are highly correlated with improved health and production performance of chickens (Widodo et al., 2015).
Formic acid is an organic acid that can be administered as an alternative to AGP. As an acidifier, formic acid is very effective in reducing gut pH and in increasing intestinal villi height and body weight gain of poultry (Phatak et al., 2017). Formic acid can also protect chickens from intestinal infections caused by Salmonella (S. typhimurium, S. senftenberg and S. putten; Koyuncu et al., 2013) and Escherichia coli (Garcia et al., 2007). However, formic acid may irritate the intestine when given in high doses (Ramli et al., 2008).
Saccharomyces cerevisiae is a microorganism that has commonly been used as a probiotic to improve the gut functions of chickens (Kompiang, 2002). The cell wall of S. cerevisiae (β-1,3 and β-1,6 glucans) can repair damaged tissue as well as control intestinal inflammation due to infection or toxins (Ahmad, 2005). The nucleotide content of S. cerevisiae is also able to restore damaged intestinal mucosa and improve the intestinal flora (Li et al., 2007). To grow well, S. cerevisiae requires an acidic pH of between 4.0 and 4.5 (Ahmad, 2005). On this basis, creating an acidic condition in the intestine should promote the growth, and hence the probiotic function, of S. cerevisiae in chickens.
Sugiharto (2016) pointed out that the application of a mixture of probiotics with other active ingredients can improve the efficacy of probiotics in substituting AGP. In this study, S. cerevisiae was combined with formic acid in the hope that the role of S. cerevisiae would be more effective in improving the gut ecology and morphology of the IICC. Notably, the acidic condition in the gut caused by the action of formic acid was expected to improve the development and probiotic functions of S. cerevisiae in the intestine. Also, a synergistic effect was expected to occur between formic acid and S. cerevisiae in improving the intestinal conditions of the IICC. To the best of our knowledge, the combined effect of formic acid and S. cerevisiae on the intestinal ecology of the IICC has never been published. Hence, this study aimed to examine the gut ecology and morphology of the IICC supplemented with a mixture of formic acid and S. cerevisiae.
Preparations of feeds and supplements
The feed was prepared as a basal ration based on yellow corn and soybean meal (Table 1). The supplements were included at the end of the feed mixing process. The formic acid (Baymix Latibon® Plus ME) was provided by PT. Bayer Indonesia, while S. cerevisiae (Mauripan®) was obtained from PT. Jaya Fermex, Jakarta, Indonesia.
In vivo Experiment
For the in vivo trial, 200 day-old IICC were used. Initially, they were weighed (average body weight of 38.05 ± 0.35 g) and distributed randomly to four treatments with five replicates per treatment and 10 chicks per pen. Chicks were fed starter (weeks 1-4) and finisher (weeks 5-9) diets (Table 1). The dietary treatments were an unsupplemented diet (T0), or diets supplemented with 0.2% formic acid (T1), 0.3% S. cerevisiae (T2) or the mixture of 0.2% formic acid and 0.3% S. cerevisiae (T3).
Birds had free access to diet and drinking water. Vaccination was performed at 4 and 30 days of age using Newcastle disease vaccine. Feed consumption was determined daily, while the weight of the chickens was recorded weekly.
Sample Collections
At week 8, one bird was randomly taken from each experimental pen (20 birds in total across the four treatments) for total excreta collection. The total excreta collection was conducted for seven days, until week 9. Fe2O3 was mixed with each experimental diet and further used as an indicator during the total excreta collection (Sutrisno et al., 2013). The collected excreta was cleaned of feathers and debris, sprayed with 0.2 N HCl and weighed. The excreta was sun-dried and then homogenized. The homogenized excreta was analyzed for crude protein content using the Kjeldahl method. Nitrogen retention was calculated based on Tillman et al. (1998). At week 9, the IICC used for total excreta collection were slaughtered and dissected. The digestive tract of the chicks was taken, and 2 cm segments of gut (duodenum, jejunum and ileum) were removed and put in a sample bottle containing 10% neutral buffered formalin for histopathological analysis. Digesta was collected from the small intestine and cecum for pH measurement (using the Eco Testr pH 1 tester) and microbiological analysis. The number of bacteria was calculated based on Sugiharto et al. (2019): total lactic acid bacteria (LAB) were determined on de Man, Rogosa and Sharpe agar (Merck KGaA, Darmstadt, Germany) after anaerobic incubation at 38°C for 48 hours, while total coliform and lactose-negative enterobacteria were determined on MacConkey agar (Merck KGaA, Darmstadt, Germany) as red and white (colorless) colonies, respectively, after aerobic incubation at 38°C for 24 hours. Histopathological analysis of the small intestinal segments was carried out using haematoxylin-eosin (HE) staining. The measurements of gut morphology were carried out using an optical microscope fitted with a digital camera (Leica Microsystems GmbH, Wetzlar, Germany). The three best villi were selected for each gut segment on one slide and then measured.
Statistical Analysis
Data were subjected to analysis of variance (ANOVA) using the statistical software SPSS (IBM SPSS Statistics version 23), followed by Duncan's multiple range test at the 5% significance level. The data are presented as mean ± standard deviation.
Intestinal Ecology of the IICC
The data on pH values and selected bacterial populations in the gut of the IICC are listed in Table 2. In general, the pH values and bacterial counts in the intestine of the IICC were not substantially affected by the treatments.
Intestinal Morphology of the IICC
The intestinal morphology of the IICC is detailed in Table 3. In the duodenum, the chicks in T3 showed greater (P<0.05) villi height and width than the other chicken groups. The T2 and T3 chicks showed deeper (P<0.05) crypts than T0, while T1 did not differ significantly from T0 and T2. The jejunal villi were higher (P<0.05) in T3 than in T0, but did not differ from T1 and T2. The T3 chicks also had deeper (P<0.05) crypts than the other treatments. In the ileum, villi height was lowest (P<0.05) in T0 compared with the other treated groups. The crypts were deeper (P<0.05) in T3 than in the other chicken groups. The villi widths of the jejunum and ileum did not differ (P>0.05) among the IICC.
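Before turning to the digestibility results, the arithmetic implied by the total-collection method described above can be summarized in a short sketch; the function names and numerical values are illustrative assumptions, not the study's data, and nitrogen retention is taken here as the simple intake-excretion balance assumed from the Tillman et al. (1998) basis.

```python
def apparent_cp_digestibility(cp_intake_g, fecal_cp_g):
    """Apparent crude protein digestibility from total collection:
    digestible CP = intake - fecal output;
    coefficient (%) = digestible CP / intake * 100."""
    digestible = cp_intake_g - fecal_cp_g
    return digestible, 100.0 * digestible / cp_intake_g

def nitrogen_retention(n_intake_g, n_excreted_g):
    """Nitrogen retention (g) as the intake-excretion balance."""
    return n_intake_g - n_excreted_g

# Illustrative values per bird over the 7-day collection (not study data)
digestible_cp, coefficient = apparent_cp_digestibility(120.0, 30.0)
print(digestible_cp, coefficient)      # 90.0 g, 75.0 %
print(nitrogen_retention(19.2, 4.8))   # 14.4 g
```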
Protein Digestibility and Nitrogen Retention
The protein digestibility and nitrogen retention of the IICC are detailed in Table 4. The crude protein digestibility coefficient was highest (P<0.05), while fecal crude protein was lowest, in T1 compared with the other treatment groups. Crude protein intake, digestible crude protein and nitrogen retention did not differ significantly among the dietary treatments.
Performances of IICC
Compared to T0, the treated IICC showed higher (P<0.05) weight gain and feed consumption, with T3 having the highest gain and consumption. Diet did not affect the feed conversion ratio (Table 5).
DISCUSSION
In the current study, we documented that dietary supplementation with formic acid, S. cerevisiae or the combination of both did not affect the pH values of the gut segments of the IICC. Previously, Hernández et al. (2006) also reported no effect of formic acid on intestinal pH, whereas a pH-reducing effect of formic acid has been reported elsewhere (Ragaa and Korany, 2016). With regard to S. cerevisiae, the probiotic treatment had no substantial impact on the gut pH values of the IICC in the present study. This result is similar to that reported by Sacakli et al. (2011), who did not see any impact of S. cerevisiae on intestinal pH values. It was, however, in contrast to Elghandour et al. (2019), who confirmed that S. cerevisiae could reduce the pH values of the broiler gut. There is no definite explanation for these divergent data, but the different strains of chickens, the doses of formic acid and S. cerevisiae, and the rearing conditions may elicit different responses in terms of gut pH.
The supplementations with formic acid, S. cerevisiae or their blend had no effect on the bacterial populations in the gut of the IICC. It is widely known that a low pH or acidic condition may result in reduced pathogenic bacteria and increased populations of lactic acid bacteria in the gut of chickens (Ndelekwute et al., 2018). For this reason, the absence of differences in pH values seemed to be associated with the lack of differences in the populations of coliform, lactose-negative enterobacteria and lactic acid bacteria in the intestinal segments of the IICC.
In a former study, Ragaa and Korany (2016) showed the efficacy of formic acid in increasing the villi height of broiler chickens. In this study, formic acid increased the villi height of the ileum of the IICC, while the effect of this organic acid on the villi height of the duodenum and jejunum was moderate. It was very likely that the antibacterial activity of formic acid reduced the colonization of pathogenic bacteria and thus diminished the inflammatory process at the intestinal mucosa. The latter condition may consequently increase villus height (Ragaa and Korany, 2016). With regard to S. cerevisiae, the probiotic activity of the yeast seemed to increase the villus height of the ileum of the IICC. In agreement with our data, Padihari et al. (2014) reported that S. cerevisiae increased the villi height of the intestine in broiler chickens. They further confirmed that, in addition to the probiotic activity of S. cerevisiae in inhibiting the proliferation of pathogenic bacteria, S. cerevisiae may also stimulate the development of intestinal villi through improved mucosal cell proliferation. It is also shown in this study that S. cerevisiae increased the duodenal crypt depth of the IICC. In agreement with this result, Peralta et al. (2018) also showed that feeding S. cerevisiae moderately increased the crypt depth of the intestine of broilers.
According to the latter authors, the deeper crypt depth may be attributed to increased intestinal tissue turnover due to the rapid immune response of chicks against pathogens. Different from our results, another study by Sacakli et al. (2011) did not find any influence of S. cerevisiae on the crypt depth of the intestine of broilers. The combination of formic acid and S. cerevisiae resulted in greater villi height, wider villi width and deeper crypt depth. In this study, a synergistic effect of formic acid and S. cerevisiae seemed to occur.
Data in the current work showed that the crude protein digestibility coefficient was substantially higher in the IICC supplemented with formic acid. Similar to our finding, Ragaa and Korany (2016) documented that treatment with formic acid improved the crude protein digestibility coefficient in broilers. They also suggested that formic acid could increase the activity of pepsin and thus enhance gastric proteolysis, raising the digestibility of protein and amino acids. Our data also revealed that formic acid reduced fecal crude protein in the IICC. It was very likely that the increased gastric proteolysis contributed to the increased protein digestibility and utilization, resulting in a lower content of crude protein in the excreta of the IICC. In this experiment, dietary supplementation with S. cerevisiae had no effect on the protein digestibility and nitrogen retention of the IICC. This was in contrast to Elghandour et al. (2019), who documented that S. cerevisiae improved crude protein digestibility in broiler chickens. It seemed that the differences in chicken strains as well as experimental protocols may account for the conflicting data above.
It was apparent in this study that treatments with formic acid, probiotic S. cerevisiae or the blend of both were associated with improved weight gain of the IICC. The higher weight gain was associated with the increased feed intake in the treated IICC. Also, the improved intestinal morphology, and thus digestive and absorptive capacity, may have contributed to the increased weight gain of the treated chicks.
CONCLUSION
The inclusion of formic acid and S. cerevisiae in diets improved intestinal ecology and morphology. The IICC fed with formic acid and S. cerevisiae exhibited improved growth performance and nutrient digestibility.
Application of Entropy Generation to Improve Heat Transfer of Heat Sinks in Electric Machines
To intensify heat transfer within the complex three-dimensional flow field found in technical devices, all relevant transport phenomena have to be taken into account. In this work, a generic procedure based on a detailed analysis of entropy generation is developed to improve heat sinks found in electric machines. It enables a simultaneous consideration of temperature and velocity distributions, lumped into a single, scalar value, which can be used to directly identify regions with a high potential for heat transfer improvement. By analyzing the resulting entropy fields, it is demonstrated that the improved design obtained by this procedure is noticeably better, compared to those obtained with a classical analysis considering temperature and velocity distributions separately. This opens the door for an efficient, computer-based optimization of heat transfer in real applications.
Introduction
To analyze and improve heat transfer in real devices, e.g., electric machines, the complete flow field has to be analyzed, since it controls energy transport processes and thus heat exchange. In many cases, the local flow properties vary widely, perhaps even encountering transition from laminar to turbulent conditions, which results in a complex 3D flow field. Identifying opportunities to improve heat transfer requires the analysis of an exceptionally large quantity of data after carrying out the corresponding simulations by Computational Fluid Dynamics (CFD). As the trend is moving towards shorter time-to-market periods for products, the development process must be accelerated in order to reach the best design as early as possible. For this purpose, CFD-based Optimization (also called CFD-O) is a promising approach, as discussed in [1]. However, the high complexity of alternator systems leads to long computing times and involves a large number of design parameters. Therefore, a very large amount of numerical effort is required for CFD-O. To make this possible, efficient solutions must be designed at each step (automatic design generation, application-driven parameters, efficient optimization techniques, revealing objectives).
The alternator itself consists of a rotor and a stator domain, each of which possesses different components. During operation, the rotor domain is in motion and the fan blades create a pressure gradient which drives the flow. Air from the engine bay streams through the alternator, passes over the heat sink, and finally leaves the system in a radial direction. The stator domain includes, in particular, the electronic components, which are glued onto a heat sink to increase the wetted area and thus improve heat transfer. To enable first aerodynamic studies and to identify the parameters controlling heat transfer, a reduced alternator model (called the V-Channel in this work) has been proposed. It only covers the stator domain (the one important for the temperature of the electronic components), as shown in Figure 1. Inlet and outlet are assigned constant-pressure boundary conditions in the CFD setup. On the sides, periodic boundary conditions are employed in order to mimic the whole system, which is shown later in this article.
The rotor is not directly taken into account. The effect of the fans is represented by their characteristic curve, associated with the quadratic resistance coefficient C_R2 resulting from the porous volume artificially placed before the outlet. Based on the real characteristic curve of the full model, a correct (∆p, C_R2) combination is determined for the rotational speed of each fan and is implemented in the CFD setup to obtain the correct operating point for all process conditions. At this point, aerodynamic studies can be performed with a strongly reduced numerical effort, while still delivering results relevant for the full system. However, the V-Channel still involves too many design parameters for "brute force" CFD-O. In particular, a suitable indicator is needed to detect which regions within the immense data set describing the three-dimensional flow field are of central importance. Focusing only on these small target regions, analyzing design modifications to improve heat transfer then becomes possible. In this manner, the numerical effort can be suitably reduced, allowing CFD-O, as shown in this work.
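As a minimal illustration of how such a (∆p, C_R2) combination can be determined, the sketch below intersects a tabulated fan curve with a lumped quadratic system resistance ∆p = C_R2·Q². This is a simplified stand-in for the porous-volume formulation used in the CFD setup, and the fan-curve values are placeholders rather than measured data.

```python
import numpy as np

def operating_point(q_fan, dp_fan, c_r2):
    """Intersect a tabulated fan curve dp_fan(q_fan) with the lumped
    quadratic system resistance dp = c_r2 * q**2 (bisection on the
    residual, assuming a monotonically decreasing fan curve)."""
    residual = lambda q: np.interp(q, q_fan, dp_fan) - c_r2 * q ** 2
    lo, hi = q_fan[0], q_fan[-1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    q_op = 0.5 * (lo + hi)
    return q_op, c_r2 * q_op ** 2

# Placeholder fan curve for one rotational speed (not measured data)
q = np.linspace(0.0, 0.05, 20)             # volume flow, m^3/s
dp = 400.0 * (1.0 - (q / 0.05) ** 2)       # pressure rise, Pa
print(operating_point(q, dp, c_r2=2.0e5))  # -> (~0.033 m^3/s, ~222 Pa)
```

Repeating this intersection for each rotational speed yields the table of operating points that the reduced model has to reproduce.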
As discussed in previous publications [2,3], different transport phenomena influence heat transfer. They can, in principle, be quantified by considering classical indicators such as the local wall temperature T_W or the wall heat transfer coefficient h for each component. However, these values show only the final outcome, at the very end of all heat transfer processes; they offer no further information on the underlying three-dimensional transport phenomena, which would be required to determine the factors limiting heat transfer and where or why these occur. Regions with a high potential for optimization cannot be directly obtained in this manner. Thus, physically-based indicators must first be identified, building on top of our previous studies [4,5].
Entropy Generation
The seminal work of Bejan [6-8] explains how the concept of entropy generation minimization (EGM) can be used to intensify heat transfer for different applications. Poulikakos and Bejan [9] derived the theoretical framework for an optimal fin geometry in forced convection using EGM. Fowler and Bejan [10] obtained optimal sizes of bodies for external flows. Carrington and Sun [11] analyzed internal and external flows with the second law analysis (SLA). Ko and Ting [12] investigated the competition between the entropy generation due to dissipation and conduction for laminar forced convection in curved rectangular ducts with external heating. Şahin [13] varied duct geometries, gave an analytical formulation and estimated the best geometry based on EGM. EGM in combination with genetic algorithms for optimization is described in [14,15], for example.
Herwig et al. [16-18] extended the second law analysis and showed the potential of this approach for a variety of configurations. By additionally considering the fluctuating entropy generation terms, Kock and Herwig [19,20] opened the door for advanced numerical investigations. Giangaspero and Sciubba [21,22] used entropy generation to study thermal management for different electric machines. Several authors [23-26] have already successfully employed EGM to improve heat sink geometry. Note that a complete overview of the entropy generation minimization (EGM) method can be found in, e.g., [8] or [27]. The entropy generation is given by:

$\dot{S}_{Gen} = \overline{\dot{S}_C} + \dot{S}'_C + \overline{\dot{S}_D} + \dot{S}'_D$ (1)

with the time-mean (overbar) and fluctuating (prime) components of the entropy generation due to conduction ($\dot{S}_C$) and the entropy generation due to dissipation ($\dot{S}_D$), respectively. Thus, Equation (1) includes the temperature and velocity gradients, respectively, and represents the direct entropy generation in the mean and fluctuating flow field. A more detailed explanation of each term was given by Kock [28] and can also be found in Eger et al. [2]. The irreversibility ratio φ was first introduced by Bejan [29] and is defined as:

$\varphi = \dot{S}_D / \dot{S}_C$ (2)

Considering the total entropy generation $\dot{S}_{Gen}$ in the flow field enables a much deeper understanding of all processes controlling heat transfer. Compared to classical indicators such as T or h, entropy generation quantifies regions of dominating processes between near-wall regions (locally) or within the main flow (globally), as well as dominating dissipative or conductive effects within the flow field. This has already been demonstrated in [2]. It also lumps the huge quantity of data contained in u = (u_x, u_y, u_z)^T as well as the temperature T into one single, scalar value. Therefore, the amount of data that must be analyzed is strongly reduced, allowing systematic aerodynamic studies. Since it is defined as a second-law criterion, it is directly influenced by the properties and temperature levels of the fluid.
In a previous study [3], the generality of the irreversibility ratio φ in analyzing and improving heat transfer processes has been demonstrated for the simple canonical configuration first introduced in [4]. Based on φ, regions with a high potential for optimization have been identified directly, in particular back-flow areas [3]. The design optimization automatically modified the sleeve in order to prevent areas with high values of φ, leading to a considerable intensification of heat transfer. Regions with φ → ∞ are easily identified, even in a complex three-dimensional flow field, and are promising for heat transfer improvement. Based on this observation, the irreversibility ratio will also be considered in the present work in analyzing and optimizing a real application, which is far more complex than the canonical configuration considered in [3].
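To make the terms behind Equations (1) and (2) concrete, the sketch below evaluates the standard local entropy generation rates for a two-dimensional mean flow on a Cartesian grid. Only the mean-flow (direct) contributions are computed here; the fluctuating parts, which the present study models from the RANS solution, are omitted. The analytic test field and all property values are illustrative assumptions.

```python
import numpy as np

def entropy_generation_fields(T, u, v, dx, dy, lam, mu):
    """Local volumetric entropy generation rates of a 2-D mean flow:
    conduction:  S_C = lam / T^2 * |grad T|^2
    dissipation: S_D = mu / T * (2*(du/dx)^2 + 2*(dv/dy)^2
                                 + (du/dy + dv/dx)^2)"""
    dTdy, dTdx = np.gradient(T, dy, dx)
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    s_c = lam / T ** 2 * (dTdx ** 2 + dTdy ** 2)
    s_d = mu / T * (2 * dudx ** 2 + 2 * dvdy ** 2 + (dudy + dvdx) ** 2)
    return s_c, s_d

# Toy analytic field (placeholder, not CFD data): heated shear layer
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, y)
T = 300.0 + 20.0 * Y                    # temperature, K
u = 10.0 * Y * (1.0 - Y)                # streamwise velocity, m/s
v = np.zeros_like(u)
s_c, s_d = entropy_generation_fields(T, u, v, x[1] - x[0], y[1] - y[0],
                                     lam=0.026, mu=1.85e-5)
phi = s_d / np.maximum(s_c, 1e-30)      # irreversibility ratio, Eq. (2)
print((phi > 1.0).mean())               # fraction of cells with phi > 1
```

The same cell-wise evaluation, applied to exported CFD data, directly yields the φ > 1 masks used later to mark candidate regions for design modifications.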
Numerical Method
The flow is calculated as stationary, considering a single-phase, non-reacting turbulent flow. Power losses on the electronic component are set to a constant value (Q̇_W = const.). Due to the small temperature change in the flow field (approximately 20 K in this study), the working fluid is considered an ideal gas with constant thermo-physical properties. For the defined ambient pressure of p_∞ = 1 bar, the compressibility factor Z is 1.0000 [30], which ensures that this equation of state is fully appropriate for the current study. The equations of conservation of mass, momentum and energy are discretized in ANSYS CFX 16.2, relying on the Reynolds-averaged Navier-Stokes (RANS) approach. Menter's k-ω-SST model has been chosen due to its ability to account for the low-Reynolds regime near the wall without using damping corrections; in addition, it leads to an improved prediction of flow separation [31]. The selected advection model uses a second-order scheme wherever possible and blends to a first-order scheme, if needed, to maintain boundedness. The flow equations are solved sequentially with double precision.
Grid-Independence Test
To ensure that the discretization error is in an acceptable range, a grid-independence test was first performed. Varying the parameters controlling the maximum cell size (s_max) as well as the near-wall cell size (s_12) leads to four different grids of increasing dimension, with up to almost 9 million grid points. Figure 2 shows the grid selected for the upcoming studies, with approximately 4.8 × 10^6 nodes. The behavior of the dimensionless temperature difference Θ = (T_W − T_∞)/T_∞ is shown in Figure 3. Here, T_W is the area-averaged temperature of the cooling heat sink and T_∞ is the ambient temperature. Only the smallest grid, grid 1 with 1.3 × 10^6 nodes, leads to a small but noticeable difference in Θ compared to the three other grids. Even more important to this study is the irreversibility ratio. The right-hand side of Figure 3 shows the behavior of φ as a volume-integrated value over the whole V-Channel. Confirming the previous statement, the results of grid 1 differ strongly from those obtained for the three finer grids. Overall, grid 3 with 4.5 × 10^6 nodes (associated with the red lines in Figure 3) leads to a very good compromise between numerical effort and accuracy. Additionally, a dimensionless wall distance of y+ ≈ 1 is achieved for the heat sink wall with grid 3. The relative deviations between grid 3 and the finest grid 4, with 8.7 × 10^6 nodes, are only 2% for the dimensionless temperature and 6% for the irreversibility ratio. This level of accuracy is deemed sufficient for driving the optimization process, even more so when considering that optimization only requires correct predictions of trends, not of absolute values.
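Since the grid study targets y+ ≈ 1 at the heat sink wall, a rough pre-meshing estimate of the required first cell height can be helpful. The sketch below relies on the common flat-plate skin-friction correlation Cf = 0.026/Re^(1/7); both the correlation and the flow values are assumptions for illustration and are not taken from the study.

```python
import numpy as np

def first_cell_height(y_plus, u_inf, length, rho, mu):
    """Wall-normal height of the first cell for a target y+, using the
    flat-plate skin-friction estimate Cf = 0.026 / Re^(1/7)."""
    re = rho * u_inf * length / mu
    cf = 0.026 / re ** (1.0 / 7.0)
    tau_w = 0.5 * cf * rho * u_inf ** 2     # wall shear stress, Pa
    u_tau = np.sqrt(tau_w / rho)            # friction velocity, m/s
    return y_plus * mu / (rho * u_tau)      # first cell height, m

# Placeholder conditions: air near ambient state over a 5 cm heat sink
print(first_cell_height(y_plus=1.0, u_inf=10.0, length=0.05,
                        rho=1.18, mu=1.85e-5))  # ~3e-5 m
```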
Investigations of Heat Transfer
In what follows, two different design modifications will be tested to intensify heat transfer. Based on the first analysis in Section 5.1, CFD-O will be used to optimize the position and alignment angle of two NACA profiles in Section 5.2. In Section 5.3, additional profiles will be manually included to increase the wetted area and, hopefully, further enhance heat transfer. To quantify the improvement, two values are systematically considered: (1) the temperature difference ∆T = T − T_∞, with T the volume-averaged temperature of the heat sink; and (2) the wall heat transfer coefficient h = q̇/(T_W − T_∞) at the heat sink. Finally, Section 5.4 verifies the validity of the V-Channel results by comparison with a full 3D simulation of the alternator system.
Base Design Analysis
In this section, a classical analysis considering T_W and u is first performed with the objective of manually finding regions with a high potential for optimization. A further analysis based on SLA uses the irreversibility ratio to identify such regions within the flow field. The temperature and velocity scales are omitted in what follows for confidentiality reasons, since a real industrial design is considered. However, the employed ranges are kept constant, which enables a direct comparison between the following design changes.
Figure 4 shows the temperature distribution across the heat sink wall as well as the velocity vectors on the section plane introduced in Figure 1. The figure shows high temperatures appearing in the left part of the domain. The electronic component is glued to the back side of the heat sink in this region. Additionally, the velocity field shows a back-flow region (see zoom in Figure 4). Therefore, optimizing the position of the profiles, or including additional exchange surfaces in this region, seems to be a promising choice. Based on these first observations, obtained by considering only the temperature and velocity distributions, the NACA profiles at the farther end of the heat sink (third row from the right, see Figure 1) seem to have a high potential for optimization.
Figure 5 shows the irreversibility ratio for the same conditions. Here, high magnitudes of the ratio between Ṡ_D and Ṡ_C occur mainly at the beginning of the heat sink (first two rows from the right, see Figure 1). According to our previous studies [3], these regions show a high potential for optimization. As shown in [3], if the velocity gradients (quantified indirectly by Ṡ_D) are higher than the temperature gradients (quantified indirectly by Ṡ_C), heat transfer can be improved locally. In the first row, very high values of φ are found in the complete flow field, as shown in Figure 5 (right). Here, every finite volume with φ > 1 is marked in dark yellow. It can be seen that, even for a three-dimensional field, the indicator φ > 1 can easily be post-processed and visualized in a complex geometry, since it is a single, scalar value. Therefore, it appears to be an attractive indicator to guide optimization and is much easier to handle than a combination of temperature and velocity fields. Now, the central question remains: observations based on classical indicators would result in an optimization at the farther end of the heat sink (third row from the right), since high temperatures and a back-flow area are found there, while SLA indicates that the first two rows on the right side have a high potential for improving heat transfer. Which one ultimately leads to a higher heat transfer rate?
Position Optimization
The two central NACA profiles within the second heat sink row will now be optimized, as illustrated in Figure 6 (left). These two profiles are located between the two previously observed target regions shown in Figures 4 and 5, respectively. Both profiles can change their alignment angle in a range of −20° to 30°. Additionally, they can be moved along the chord, with an allowed displacement range of −2 mm to 6 mm. As a result, these NACA profiles are quite free to move within the region of high temperatures or high φ-values, in a process driven by the employed optimizer. The OPtimization Algorithm Library++ (called OPAL++), developed at the Otto von Guericke University Magdeburg, Germany, was used as the optimization tool. This software supports parallel execution and includes a variety of multi-objective as well as single-objective optimization algorithms. Further details can be found in [32,33]. For the present optimization, a simple single-objective genetic algorithm (called genetic1 in OPAL++) was applied, with the minimization of the volume-averaged heat sink temperature as the objective function. In this method,
• all variables have a real representation;
• each of the 50 generations contains 24 individuals;
• a tournament with two cycles is used to select parents;
• SBX is used for cross-over, with a distribution index of η_c = 20 and a probability of p_c = 0.8 [34];
• a mutation with a distribution index of η_m = 10 and a probability of p_m = 1/#variables is applied; and
• the new generation replaces the old one.
For the computation, 3840 cores of a high-performance cluster were used, with 3.5 GB per core. A single CFD simulation takes approximately 45 min to converge. During optimization, eight individuals are calculated in parallel, so that the total time needed for the optimization is approximately 900 h. Figure 7 shows the irreversibility ratio for the base design (left) and the optimized design (right), which was already found at the 35th generation of the genetic algorithm. It can be seen that one NACA profile has moved forward to the right, towards the area with high irreversibility ratios, whereas the other one maintained its position but changed its angle. Neither of the two profiles moved back towards the third row, where high temperatures occur. The optimization direction chosen by OPAL++ is clear from the point of view of the SLA analysis (Figure 7): the region with large values of φ has been considerably reduced in size.
To quantify the improvement obtained by the optimization, Table 1 summarizes the results regarding the temperature difference and the wall heat transfer coefficient. The optimized design shows an improvement for both objective functions. Together with a wall heat transfer coefficient significantly increased by 6.46%, the temperature difference from the heat sink to the fluid is reduced by 4.83%, without including any additional surfaces, simply by moving and re-orienting the selected NACA profiles towards the region with a high irreversibility ratio. Next, increasing the exchange surface could present itself as a promising alternative.
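Since OPAL++ is a separate library, the listing below is only an illustrative re-implementation of the stated operators (binary tournament selection, SBX with η_c = 20 and p_c = 0.8, polynomial mutation with η_m = 10 and p_m = 1/#variables, generational replacement) applied to the four design variables of this problem; the objective function is a cheap placeholder standing in for the CFD evaluation of the volume-averaged heat sink temperature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two profiles x (alignment angle [deg], chordwise shift [mm])
LOWER = np.array([-20.0, -2.0, -20.0, -2.0])
UPPER = np.array([ 30.0,  6.0,  30.0,  6.0])

def evaluate(x):
    """Placeholder objective. In the real workflow this would launch a
    CFD run and return the volume-averaged heat sink temperature."""
    return float(np.sum((x - 0.3 * (UPPER + LOWER)) ** 2))

def tournament(pop, fitness):
    """Binary tournament: the fitter of two random individuals wins."""
    i, j = rng.integers(0, len(pop), 2)
    return pop[i] if fitness[i] < fitness[j] else pop[j]

def sbx(p1, p2, eta_c=20.0, p_c=0.8):
    """Simulated binary crossover (Deb and Agrawal)."""
    if rng.random() > p_c:
        return p1.copy(), p2.copy()
    u = rng.random(p1.size)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return np.clip(c1, LOWER, UPPER), np.clip(c2, LOWER, UPPER)

def mutate(x, eta_m=10.0):
    """Polynomial mutation with p_m = 1/#variables."""
    y = x.copy()
    for k in range(y.size):
        if rng.random() < 1.0 / y.size:
            u = rng.random()
            delta = ((2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0 if u < 0.5
                     else 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0)))
            y[k] += delta * (UPPER[k] - LOWER[k])
    return np.clip(y, LOWER, UPPER)

pop = rng.uniform(LOWER, UPPER, size=(24, 4))
for generation in range(50):
    fitness = np.array([evaluate(x) for x in pop])
    children = []
    while len(children) < len(pop):
        c1, c2 = sbx(tournament(pop, fitness), tournament(pop, fitness))
        children += [mutate(c1), mutate(c2)]
    pop = np.array(children[:len(pop)])  # new generation replaces the old
print(min(pop, key=evaluate))
```

In the actual workflow, the only substantive change would be to replace evaluate() with a wrapper that writes the candidate geometry, launches the CFD run and parses the averaged heat sink temperature from the result file.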
Increasing the Wetted Area
Another possibility to improve heat exchange consists of introducing additional surfaces into the baseline design. Considering the two possibilities discussed at the end of Section 5.1, two additional NACA profiles with a scale ratio of 0.5 are manually implemented: either in the third row, where high temperatures initially occur, or in the first row, where high φ-values are initially found (Figure 8). Though this might initially appear counter-intuitive, it is clearly observed that the second solution (additional profiles in the first row, where the temperature is initially low) results in a lower temperature compared to the solution consisting of increasing the exchange surface in the hot region. The profiles in the first row are much cooler; thus, heat transfer is intensified in this region. The other two rows, however, are also noticeably cooler than in the other configuration. The averaged values are summarized in Table 2.
Table 2. Results of placing small additional NACA profiles in either the third or the first row (columns: Third Row, First Row).
Considering these results from a practical point of view, the increased pressure drop due to the additional NACA profiles in the first row seems to be negligible compared to the resulting, much better heat transfer. Both cases result in a lower cooling heat sink temperature, but the result is noticeably better for the modification obtained based on φ. Concerning the heat transfer coefficient, while introducing additional profiles in the first row increases this value, as is desirable for the application, the opposite is observed for the alternative solution. Considering the definition of h, the wall heat transfer coefficient directly combines the increasing surface area A with the decreasing temperature difference ∆T at the surface wall. For the present conditions (Q̇ = const. and T_∞ = const.), the unwanted decrease of h when adding profiles in the third row is the result of a lower benefit (decreasing T_W) compared to a higher cost (increasing A). Considering these results, it appears that it is not always beneficial to add supplementary exchange surfaces in hot-temperature regions, since they might have a noticeably lower effect on the heat transfer rate, even if a simpler analysis based on temperature could suggest such modifications. Analyzing the irreversibility ratio seems to be a far more promising solution. Based on such an analysis of φ, regions within the flow field that possess a high potential for heat transfer optimization can easily be located, leading to a tremendous reduction of the required number of design parameters and of the size of the parameter space. In this way, optimizing the alternator with CFD-O becomes possible. A last possible design modification is shown in Figure 9. Here, a third NACA profile (scale ratio = 0.25) has been manually placed in the first row, on the inflow side of the heat sink. As illustrated, all three profiles are placed directly in the region where values of φ > 1 initially appear. This modification further increases heat transfer and thus results in the lowest temperature difference obtained during this study. Table 3 summarizes the progress obtained compared to the original design. Compared to the results obtained by optimizing the position of the original NACA profiles with OPAL++, the enhancement of the heat transfer coefficient is smaller (1.22% in this case), yet leads to the lowest heat sink temperature (−6.72%).
Comparison with Full Model
All the results in the previous sections were obtained solely for the V-Channel. It is important to verify that they remain valid for the full alternator system, using the same workflow. In this case, the pressure drop resulting from the rotating fans is directly simulated; no model is needed to approximate the operating point. Figure 10 shows the irreversibility ratio along the section plane located at mid-height of the NACA profiles within the full model. To ensure the connection to the motor, the retention arm is made of a solid material; therefore, it prevents fluid from streaming through the nearby NACA profiles. A design modification in this region would have a much smaller effect on heat transfer compared to the other inflow sides. Nevertheless, in order to maintain the symmetry of the setup and to increase the wetted area, additional profiles are placed in this region as well. Figure 11 shows the temperature fields in the original and in the improved design. As in the previous V-Channel study, the temperature on the inflow side becomes noticeably lower compared to its original value. The original design shows a high temperature peak behind the retention arm, due to the limited flow rate leading to low cooling performance; this problem is much less pronounced in the improved design.
Thanks to the manual design modifications driven by the SLA, reduced temperatures are found across the complete heat sink, and a more homogeneous temperature distribution is obtained. Table 4 summarizes the enhancements obtained by adding 27 small NACA profiles. The positive trend observed for the V-Channel is confirmed by this full-model study. The observed improvements concerning ∆T are nearly identical in both cases. While the V-Channel indicated a reduction of 2.31 K (−6.72%), the full model leads to a reduction of 2.00 K (−6.43%), which is still considerable for practical purposes. Larger discrepancies between the V-Channel and the full-model simulation are observed for the wall heat transfer coefficient. The positive effect found in the V-Channel (increase of 0.74 W/m²K, or 1.22%) is reduced in the full-model simulation, to only 0.40 W/m²K (0.64%). It should be kept in mind that the two models are not identical, due to the periodic assumption employed for the V-Channel. The heat sink is considered a closed part in the V-Channel, but not in the full model (see Figure 11). In the full model, the position and the number of inserted NACA profiles differ from the configuration retained in the V-Channel. Therefore, the results cannot be directly compared with one another. However, both simulations (V-Channel and full model) show the same positive trend for both objective functions: minimizing the temperature difference ∆T and maximizing the heat transfer coefficient h.
Conclusions
Based on the fundamental analysis of the canonical configuration in [3], a powerful concept has been applied in this work to analyze heat transfer processes and ultimately optimize practical cooling systems. This has been demonstrated for a real alternator heat sink. Physical values such as the temperature or the wall heat transfer coefficient (classical indicators) can, in principle, be used as objective functions to quantify the intensity of heat transfer. However, they convey no further information on the details of the complex transport phenomena occurring within the flow field. To compare different transport processes and to find regions where a design modification could lead to intensified heat transfer, further indicators based on the second law of thermodynamics are more useful. General investigations of heat transfer based on the canonical configuration in [3] and the present application to a heat sink of an electric machine have shown that the irreversibility ratio is particularly promising for this purpose. Being a scalar quantity and containing information on both convection and diffusion, φ offers a very convenient method to analyze a complex three-dimensional flow field in an easy and efficient manner, much more so than by considering the velocity and temperature distributions separately. Based on this analysis, optimizing regions with φ > 1 leads to better designs than those obtained from a classical analysis derived from temperature and velocity. Based on the irreversibility ratio, CFD-based optimization becomes practical, since the associated numerical effort is highly reduced by concentrating only on key regions of the device.
This work confirms the preliminary findings obtained for the canonical configuration in [3], showing that this simple problem can be used for testing alternative approaches. Further investigations based on the irreversibility ratio will now consider other technical applications. Additionally, the same analysis can be applied globally to the complete alternator system, as exemplified in Figure 12.
Here, the volume integral of each entropy generation term (Ṡ_C and Ṡ_D) has been calculated within the complete spherical region shown in Figure 12. The examined alternator system is based on a different electrical concept and can therefore not be compared with the alternator designs investigated above. It can be seen that, in the range of 1000 rpm to 10,000 rpm, entropy generation due to conduction (Ṡ_C) is the dominating process and reaches its local maximum between 2000 rpm and 3000 rpm. The behavior of Ṡ_C is directly correlated with the volume-averaged temperature of electronic components such as cooling heat sinks or rectifiers, whose maximum temperature T_max lies in the same range of 2000 rpm to 3000 rpm (marked in red in Figure 12). Due to the high electrical current and low volume flow in this range, the maximum temperature as well as the entropy generation due to conduction occurs at approximately 3000 rpm. This confirms the correlation between the entropy generation due to conduction Ṡ_C and the component temperature T, as already investigated in [2]. Increasing the rotational speed above 10,000 rpm causes entropy generation due to dissipation to become the dominating factor; at this point, total entropy generation is dominated by flow convection. Based on such an analysis, the operation and regulation of the system could be analyzed and optimized. This also enables a comparison between different cooling designs on a global level.
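To make the post-processing behind Figure 12 concrete: Ṡ_C and Ṡ_D are volume integrals of local entropy generation rates over the computational domain. The following NumPy sketch uses the standard local formulations for a laminar, incompressible flow on a uniform grid; the field arrays, grid spacing and material properties are illustrative assumptions, not the exact pipeline used in this work (turbulent simulations would additionally require modeled contributions).

```python
import numpy as np

def local_entropy_generation(T, u, v, w, k, mu, dx):
    """Local volumetric entropy generation rates on a uniform grid:
    S_C (heat conduction) and S_D (viscous dissipation), using the
    standard laminar, incompressible-flow formulations."""
    dTdx, dTdy, dTdz = np.gradient(T, dx)
    S_C = k / T**2 * (dTdx**2 + dTdy**2 + dTdz**2)

    dudx, dudy, dudz = np.gradient(u, dx)
    dvdx, dvdy, dvdz = np.gradient(v, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx)
    # dissipation function for an incompressible Newtonian fluid
    diss = (2.0 * (dudx**2 + dvdy**2 + dwdz**2)
            + (dudy + dvdx)**2 + (dudz + dwdx)**2 + (dvdz + dwdy)**2)
    S_D = mu / T * diss
    return S_C, S_D

def integrated_terms(S_C, S_D, dx):
    """Volume integrals of the two terms over the analysis region,
    i.e. the quantities plotted against rotation speed in Figure 12."""
    dV = dx**3
    return S_C.sum() * dV, S_D.sum() * dV
```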
Figure and table captions:

Figure 1. V-Channel setup employed to calculate heat transfer of heat sinks in electric machines. The dark blue color denotes the section plane located at mid-height of the National Advisory Committee for Aeronautics (NACA) profiles, later used for the analysis of different indicators.
Figure 2. Section plane and selected grid later used for the analysis within the V-Channel.
Figure 3. Grid-independence test for the V-Channel based on the dimensionless temperature difference (left) and the irreversibility ratio (right).
Figure 4. Classical analysis of heat transfer based on the temperature and the velocity field. The third row of profiles, as counted from the right (zoom), appears as a region with high improvement potential.
Figure 5. Advanced analysis based on the irreversibility ratio in the flow field. The left figure shows the same representation as Figure 4. The right portion of the figure shows a representation of finite volumes with φ > 1 (dark yellow) from a different point of view, the third row now appearing at the bottom of this image. The blue area is part of the protection cap, shown in Figure 1.
Figure 6. (Left): sketch illustrating the position optimization of the two central NACA profiles in the second row; (right): result of the optimization.
Figure 7. Optimized design resulting from the position optimization.
Figure 8. Temperature impact of two small additional NACA profiles introduced into the initially hot region (third row, left), or into the region with initially high φ-values (first row, right). The color scale is identical for both figures.
Figure 9. Adding three small NACA profiles to increase heat transfer.
Figure 10. (Left): irreversibility ratio along a section plane in the full alternator model; (right): retention arm passing in the axial direction along the heat sink. The flow reaches the heat sink from the outer radial direction.
Figure 11. Enhancing heat transfer in an electric machine. Comparison of the local temperature distribution in the original design (left) and in the improved design after SLA (right).
Figure 12. (Left): explosion view of the alternator system and the modeled spherical ambient defined with an opening boundary condition (constant pressure and temperature). The diameter of the ambient sphere is more than six times greater than the alternator diameter; (right): entropy generation terms computed for a full alternator system as a function of rotation speed [35]. The red region corresponds to that with the highest temperature of the electronic components, between 2000 rpm and 3000 rpm.
Table 1. Results of the position optimization, as differences between the optimized design and the baseline case.
Table 3. Results of the design modification when adding three small NACA profiles.
Table 4. Results of the design improvement for the full model.
Two-dimensional solid state gaseous detector based on 10B layer for thermal and cold neutrons

A two-dimensional solid state gaseous detector for thermal and cold neutrons has been created. The detector has an active area of 128 x 128 mm², a ¹⁰B neutron converter, and a gas chamber with thin windows. Resistive charge-division readout is applied to determine the neutron position. The detector was tested using the W-Be photoneutron source at the Institute for Nuclear Research, Moscow. The detector efficiency is estimated as ~4% at neutron wavelength λ = 1.82 Å and 8% at λ = 8 Å. The efficiency of background detection was less than 10⁻⁵ of that for thermal neutrons. The resulting pulse height resolution and spatial resolution are estimated as ~15% and ~2.5 mm, respectively.

Introduction

Position-sensitive detectors of thermal and cold neutrons are key devices of small angle neutron scattering (SANS) setups for studying the structure and sizes of polycrystalline objects in nanotechnology and biology [1,2]. Such detectors are also used as neutron flux monitors [3]. Recently, SANS has been applied to an in situ investigation of the charging and discharging cycle of a Li-ion battery [4]. Detectors based on ³He gas [5] usually operate at high pressure, which requires a thick entrance window (~1 cm); leakage of this expensive gas leads to a decrease in efficiency. However, the reduction of the neutron flux in such a thick window does not permit the detector to be used at wavelengths above 8 Å. An alternative detector, which separates the functions of converting neutrons into charged particles in a ¹⁰B layer and of detecting these particles in an ion chamber, not only solves the problem of gas mixture stability but also permits the use of cheap gases under standard conditions. Localization of the neutron interaction point in the plane of the ¹⁰B layer makes it possible to obtain good spatial and time resolution. A detector with a thin entrance window, in contrast to a gas-filled one under extreme pressure, can be used in experiments with cold neutrons. Recently, a prototype of such a detector based on three 1.3 μm layers of ¹⁰B has been created [6], and a detector with six layers of 1.4 μm ¹⁰B has been proposed [7]; its efficiency is estimated as 21% at λ = 1.82 Å. An advantage of our detector is its thin entrance window (3 mm) and long-term operation, which is provided by the protective semiconducting polymer layer on the boron-aluminium cathode. To avoid mutual diffusion, the boron layer is separated from the aluminium by a polyimide layer.

Operating principle of detector

Neutron capture on ¹⁰B proceeds through two branches:

n + ¹⁰B → ¹¹B* → ⁴He (1472 keV) + ⁷Li (840 keV) + γ (478 keV), BR = 93.6%;
n + ¹⁰B → ¹¹B* → ⁴He (1776 keV) + ⁷Li (1013 keV), BR = 6.4%.

The sum of the cross sections of both reactions up to an energy of 1 keV follows the 1/v law and can be estimated by the formula

σ(E) = σ₀ √(E₀/E) = σ₀ λ/λ₀,

where σ₀ = 3837 b, λ₀ = 1.82 Å and E₀ = 0.025 eV. The ⁴He and ⁷Li nuclei are detected in a position-sensitive gaseous chamber. The chamber front cathode is formed on a glass disk of 2 mm thickness. Although boron is a semiconductor, its surface quality is not suitable for use as the chamber electrode, as was proposed in [6]. Therefore, the front cathode consists of a 3 μm ¹⁰B layer coated by a polyimide layer and a 0.1 μm aluminium layer coated by a protective semiconducting polymer, deposited on the glass disk.

Multiwire and multipad ion chamber

The design of the multiwire and multipad ion gas chamber is shown in figure 1. The housing of the detector consists of front and rear covers of 20 mm duraluminium and a cylindrical side wall of stainless steel. The housing is sealed hermetically with vacuum rubber.
Each cover has a 3 mm entrance window for neutrons. The following elements are placed inside the housing:

• a 2 mm glass disk with a 3 μm ¹⁰B layer and a 0.1 μm aluminium layer; the disk serves as the front cathode;
• an anode of 64 parallel 20 μm gold-coated tungsten-rhenium wires with 2 mm spacing;
• a rear cathode of 1 mm fiberglass with 63 insulated copper pads of 2 mm width.

The distance between the anode and each cathode is 2 mm. The wires and pads are placed perpendicular to each other. The electrode assembly is enclosed in a fluoroplastic housing. Figure 1 legend: 1 — front and rear housing covers; 2 — cylindrical side wall of the housing; 3 — windows; 4 — glass disk; 5 — boron layer; 6 — aluminium layer; 7 — wire anode of the X coordinate; 8 — pad rear cathode of the Y coordinate; 9 — fluoroplastic housing of the detection assembly.

Both the anode wires and the rear cathode pads are connected in series through 20 Ω resistors. The two ends of the anode resistor chain are connected through high-voltage capacitors to the X₁ and X₂ preamplifiers for measuring the X coordinate. The two ends of the rear cathode resistor chain are connected to the Y₁ and Y₂ preamplifiers for measuring the Y coordinate. As soon as the pulse height at the output of either the Y₁ or Y₂ preamplifier exceeds the threshold of the discriminator (CAEN C808), a trigger occurs. The trigger starts a conversion in the analog-to-digital converter (ORTEC AD811). The X and Y coordinates are determined from the X₁, X₂, Y₁ and Y₂ pulse heights. The accumulated ADC data are written to the computer disk. The front cathode is connected to a preamplifier and discriminator; the discriminator pulse can be used as a trigger and as a stop signal in a time-of-flight measurement. The positive anode voltage is set from 620 V to 920 V for the gas mixture Ar + 25%CO₂ + 0.3%CF₃Br at a pressure of 1.05 bar. The internal volume of the detector is 3.5 liters.

Experimental data analysis

The detector was tested using a neutron source: the tungsten-beryllium photoneutron source (IN-LUE), built on the basis of the industrial 8 MeV electron linac LUE-8 with a tungsten electron-gamma converter, a photoneutron beryllium target and a polyethylene moderator of fast neutrons. The maximal flux density of thermal neutrons in the center of the source is about 10⁷ cm⁻² s⁻¹. The detector was located at a distance of 6 m from the source, at an angle of 60° relative to the electron beam axis. The two-dimensional diagram of the pulse heights X₁ and X₂ at an anode voltage of 700 V is shown in figure 2. An event is registered if either pulse height Y₁ or Y₂ exceeds a preset threshold; this means that the secondary charged particle has produced ionization in both gaps of the chamber. The normalized pulse-height sum spectrum is shown in figure 3. The pulse height resolution, taken as the full width at half maximum (FWHM), equals 15% at 700 V. The coordinate spectrum is shown in figure 4. The periodic structure in the spectrum at 700 V can be explained by the variation of the electric field near and between the wires; the observation of this structure leads to an estimate of the spatial resolution of ~2.5 mm. One can see from figure 5 that the pulse-height distributions X₁ and X₂ become broader at 800 V. The pulse height corresponds to the ionization loss in the gas gap. The X₂ pulse-height spectrum and the simulated ionization loss spectrum are shown in figure 6. Comparing the measured and simulated spectra, we estimate the energy threshold as ~0.2 MeV and the position of the maximum as ~0.55 MeV.
Thus, the periodic structure in the coordinate spectrum in figure 7 has almost disappeared. Taking into account the geometry of the detector, its solid angle, and the neutron flux of the source, the detector efficiency is estimated as ~4%. The simulated efficiency for two energy thresholds is presented in table 1. Figure 7 shows the coordinate spectrum at 800 V. A cadmium shield of 2 mm thickness was then installed in front of the detector so that the left edge of the detector's sensitive area was open while the right edge was closed along the X axis; the open width of the detector area was 75 mm. The 2D diagram of pulse heights X₁ and X₂ at 650 V is shown in figure 8. It can be seen that the main part of the events (99.99%) is located inside the dashed ellipse. The X coordinate spectrum for events located within the dashed ellipse is shown in figure 9; the shadow of the cadmium shield is observed on the right side. In contrast, the coordinate spectrum for the 0.01% of events outside the dashed ellipse is shown in figure 10. These events are related to ⁴He and ⁷Li nuclei which produce a long track and move at a small angle to the anode wire plane. The counting rate without the Be target (only electrons and gammas) was less than 0.001% of that with the Be target (thermal neutrons).

Conclusions

A position-sensitive thermal and cold neutron detector with a 3 μm sensitive ¹⁰B layer and a 128 x 128 mm² gas chamber has been created and studied. Positions are determined by the charge-division method. The detector efficiency is estimated as 4% to 8%. The ratio of the background efficiency to the thermal neutron efficiency is less than 10⁻⁵. The pulse height resolution is about 15% and the X coordinate spatial resolution is estimated as 2.5 mm at 700 V for the gas mixture Ar + 25%CO₂ + 0.3%CF₃Br under standard conditions.
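To make the quoted efficiency and the charge-division readout concrete, here is a minimal back-of-the-envelope sketch. It is not the authors' analysis code: the ¹⁰B number density is an assumed round value, and pedestal-subtracted, gain-matched pulse heights are assumed for the position formula.

```python
import numpy as np

SIGMA_0 = 3837e-24    # cm^2, 10B capture cross section at lambda_0
LAMBDA_0 = 1.82       # Angstrom
N_B10 = 1.3e23        # cm^-3, approximate 10B number density (assumed)

def capture_probability(wavelength_A, thickness_um=3.0):
    """Probability that a neutron is captured in the 10B layer,
    using the 1/v law: sigma(lambda) = sigma_0 * lambda / lambda_0."""
    sigma = SIGMA_0 * wavelength_A / LAMBDA_0
    return 1.0 - np.exp(-N_B10 * sigma * thickness_um * 1e-4)

def charge_division_position(q1, q2, length_mm=128.0):
    """Coordinate from the pulse heights at the two ends of a resistive
    chain: the collected charge divides in proportion to the resistance
    between the injection point and each end, so the ratio is linear in
    position (0 at the q1 end, length_mm at the q2 end)."""
    return length_mm * q2 / (q1 + q2)

# ~14% capture in a 3 um layer at 1.82 A; the measured detection
# efficiency (~4%) is lower because the 4He/7Li fragment must also
# escape the layer into the gas with enough energy to trigger.
print(capture_probability(1.82))
```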
Antiangiogenic tyrosine kinase inhibition related gastrointestinal perforations: a case report and literature review

Anti-VEGF (vascular endothelial growth factor) therapy with the monoclonal antibody bevacizumab can cause gastrointestinal (GI) perforations. In recent years it has become apparent that GI perforations also occur during treatment with antiangiogenic tyrosine kinase inhibitors (TKIs). It is of clinical importance to consider (vague) abdominal complaints during antiangiogenic treatment as a possible sign of a GI perforation. To illustrate this serious complication, we report four cases of antiangiogenic treatment related GI perforations, three of which were due to antiangiogenic TKI treatment. Reported risk factors for GI perforations due to bevacizumab include the presence of a primary tumor in situ and a recent history of endoscopy or abdominal radiotherapy. Pathology assessments of surgically removed perforated intestinal parts reveal that perforations are predominantly seen at the tumor or anastomotic site, in cases of carcinomatosis or diverticulitis, or when GI obstruction or an intra-abdominal abscess is present. Whether the same risk factors are involved in antiangiogenic TKI related GI perforations is unknown. The underlying mechanism responsible for GI perforation during antiangiogenic treatment is unknown, but disturbance of the host cell homeostasis of immune cells as well as of platelet-endothelial cell interactions may play an important role. In conclusion, while clinical awareness that antiangiogenic treatment can cause GI perforations is critical for current medical practice, it is also very important to gain more insight into the underlying mechanisms so that this life-threatening complication may be prevented in the near future.

Introduction

Malignant tumors depend on the formation of new blood vessels from the pre-existing vasculature for their growth and dissemination [1]. This process, called angiogenesis, is regulated by pro- and antiangiogenic factors. One of the main angiogenic factors is vascular endothelial growth factor (VEGF), which exerts its function by activation of VEGF tyrosine kinase receptors [2]. Multiple agents that target these angiogenic growth factor signaling pathways have been developed. Since these agents only interfere with growth factor signaling pathways in proliferating endothelial cells, serious toxicities were not expected: normally, more than 99% of endothelial cells are quiescent in the absence of malignancy, and angiogenesis only occurs during wound healing or in the menstrual cycle [3]. However, in contrast to preclinical tumor models, occasional severe toxicities were observed during the clinical development of these agents. For example, incidences of 1.5-5.4% have been reported for GI perforations induced by treatment with the humanized monoclonal VEGF antibody bevacizumab [4,5]. Only a few cases of GI perforations have been reported for antiangiogenic tyrosine kinase inhibitors (TKIs) such as sunitinib or sorafenib. In this report we present four cases of antiangiogenic treatment related GI perforations, in three of which an antiangiogenic TKI was responsible for this complication. In addition, we discuss current views on the potential risk factors and mechanisms of antiangiogenic treatment related GI perforations.
Case reports

Bevacizumab

A 74-year-old man with a medical history of right hemicolectomy and hepatectomy for metastasized colon carcinoma was treated in the adjuvant setting with oxaliplatin, capecitabine and bevacizumab. Because of rectal blood loss during the second chemotherapy cycle, a colonoscopy with subsequent band ligation of observed hemorrhoids was performed. Three weeks later, the patient was admitted to the hospital with persistent diarrhea, severe anal pain and malaise. Body temperature and blood pressure were normal, but the pulse frequency was increased (105 bpm). Anal examination was very painful, but no abnormalities were palpable. Laboratory and faeces examinations as well as abdominal and chest X-rays revealed no abnormalities. At colonoscopy, multiple deep colonic and perianal ulcers were found and considered to be drug induced enterocolitis (Fig. 1). Therefore, capecitabine treatment was immediately terminated. Despite this treatment interruption, the patient's condition worsened and a laparotomy was subsequently performed. At the site of the band ligations placed 3 weeks before, peri-anal and peri-rectal necrotic cavities connected to the anal canal were found. The patient recovered within a few weeks after extended necrotectomies, a Hartmann procedure and antibiotic treatment. No further adjuvant chemotherapy was administered.

Sorafenib

A 68-year-old man with a medical history of metastasized renal cell carcinoma (RCC) started treatment with sorafenib upon disease progression, after previous nephrectomy and immunotherapy (interferon-alpha). Sorafenib treatment resulted in a rapid partial response. However, the patient developed fever and abdominal pain 5 months after the start of sorafenib treatment and his condition deteriorated within hours. The patient suffered from diarrhea and substantial rectal bleeding. Physical examination revealed fever, abdominal pain and hepatomegaly. Laboratory results showed anemia and signs of inflammation: hemoglobin 11.8 g/dl (normal range 13.5-16.5 g/dl), C-reactive protein (CRP) 57 mg/l (normal range 0-10 mg/l) and leukocyte count 9.5 × 10⁹/l (normal range 4.0-10 × 10⁹/l). Computed tomography (CT), performed because of progressive diarrhea together with substantial rectal bleeding and a decrease in hemoglobin to 8.5 g/dl, revealed a colonic perforation into a necrotic liver metastasis (Fig. 2). Based on these findings, sorafenib treatment was terminated and antibiotics were prescribed. Because surgical resection of these necrotic liver metastases was impossible, a terminal ileostomy with a mucous fistula was constructed. Within a few days the patient recovered rapidly and could be discharged from the hospital. Two months later, when the patient had fully recovered from this episode, an mTOR inhibitor was prescribed because of disease progression.

Bevacizumab plus an antiangiogenic TKI

After optimal interval debulking and extensive treatment with standard chemotherapy, a 67-year-old woman with advanced ovarian cancer and extended peritonitis carcinomatosa participated, upon progression, in a phase I trial of bevacizumab combined with an experimental antiangiogenic TKI whose targets include VEGFR-2 and PDGFR. One month after the start of treatment she was admitted to the hospital because of progressive pain in the groin and the lower left abdomen, accompanied by fever and elevated inflammation parameters (CRP 290 mg/l, leukocyte count 13.8 × 10⁹/l).
A CT scan revealed a necrotic tumor mass in the pelvis with, secondary to the tumor response, a retroperitoneal perforation. Surgical resection of this retroperitoneal complication was not feasible and optimal palliative care was initiated.

Sunitinib

A 62-year-old woman with a medical history of metastasized RCC, for which she had undergone surgery and radiotherapy, started sunitinib upon disease progression. During the second 6-week treatment cycle, she developed pain in the lower back and buttocks, accompanied by fever. At physical examination, a peri-anal fistula was found, and laboratory results indicated systemic inflammation (CRP > 500 mg/l). Magnetic resonance imaging revealed widespread perianal abscesses and fistulas. As extensive surgical resection was not a reasonable option, optimal palliative care was provided.

Discussion

The incidence of sunitinib or sorafenib related GI perforation is unknown, since only a few cases have been reported in trials [6][7][8][9][10][11][12][13] and case reports [14][15][16][17][18][19][20]. Because of the potentially serious outcome, it would be extremely helpful if we could identify patients at risk on the basis of risk factors and underlying biological mechanisms. In addition, more insight into these underlying mechanisms is important for the development of potential novel agents with an improved toxicity profile. Since the first observations of GI perforation during bevacizumab treatment, the risk factors of a primary tumor in situ and a recent history of endoscopy or abdominal radiotherapy [5,[21][22][23][24][25][26] have been described. Pathological findings frequently associated with observed perforations include perforation at the tumor or anastomotic site, abdominal carcinomatosis, diverticulitis, GI obstruction and intra-abdominal abscess [5,[22][23][24]. Different biological mechanisms of bevacizumab related perforations have been theorized, which we outline in the next part. Whether the same risk factors and mechanisms are involved in TKI related perforation is unknown, but it seems very likely, because both types of agents inhibit VEGF signaling. As summarized in Table 1, gastrointestinal perforations have been reported for both types of agents in diverse parts of the gastrointestinal tract. In the reported cases for sunitinib and sorafenib, tumor cells at the site of perforation and previous radiation treatment were frequently mentioned, similar to the bevacizumab reports [6,[13][14][15][16][17].

Possible mechanisms of GI perforations due to angiogenesis inhibition

Tol et al. [27] suggested a relationship between bevacizumab treatment and ulcer development, which may eventually cause a GI perforation. In a phase III study of 755 patients receiving chemotherapy with bevacizumab plus or minus cetuximab, twelve GI perforations were observed, four of which were located in an ulcer. The high incidence of ulcers in this study (1.3 vs. 0.1% in the general population), the occurrence of perforations early in treatment, the established role of VEGF in ulcer healing [28][29][30] and the inhibitory effect of bevacizumab on wound healing support their hypothesis. Since the majority of perforations were located at the primary tumor site, preexistent mucosal lesions were expected to be preferential localizations. In another report it was speculated that bevacizumab induced VEGF inhibition might result in the cholesterol emboli syndrome (CES), which may consequently give rise to GI perforations due to mesenteric ischemia [31]. Hypertension in combination with eosinophilia is a feature of CES.
Of twenty-two prospectively observed patients, all three who developed hypertension during bevacizumab treatment had atherosclerotic risk factors, an increased heart rate and eosinophilia at the onset of hypertension. In this report it was hypothesized that CES might cause all acute bevacizumab related complications in atherosclerotic patients, including GI perforations as a consequence of mesenteric ischemia. Alternatively, Saif et al. [22] postulated that GI perforation is caused directly by regression of normal blood vessels in the GI tract, induced by excessive VEGF inhibition. The authors extrapolated data from animal models in which VEGF inhibition has been shown to reduce vascular density in the small intestinal villi as well as in other organs [32]. In a recent editorial on the risk of bevacizumab associated GI perforation in ovarian cancer, it was speculated that bevacizumab induces necrosis of malignant ovarian cells that invade the bowel serosa, resulting in GI perforation [4]. In addition, in this editorial it was suggested that increased pressure due to abdominal carcinomatosis or adhesions from prior surgeries might lead to micro-perforations in vulnerable areas of the bowel, with subsequent delayed healing due to bevacizumab. Finally, loss of nitric oxide (NO) release due to VEGF inhibition, leading to decreased blood flow to the splanchnic vasculature, was proposed to result in bowel infarction and perforation in areas with marginal blood supply. On account of the early closure of the ORBIT trial, evaluating bevacizumab treatment in platinum resistant ovarian cancer, tumor involvement of the bowel was suggested as a risk factor [33]. Five out of 44 patients developed GI perforation and showed radiographic evidence of bowel involvement at study entry. A significant association of GI perforations with an increased number of prior chemotherapy regimens (three) and a non-significant relation with bowel wall thickening/obstruction were found. In contrast, in another study of twenty-five heavily pretreated (median of five prior chemotherapy regimens) patients with advanced ovarian cancer, treatment with bevacizumab did not cause any GI perforations [34]. We recently discussed the role of platelets in antiangiogenic treatment related toxicity [35,36]. Platelets contain VEGF in their α-granules, which they secrete upon activation; conversely, VEGF activation of the endothelium results in platelet binding and subsequent activation [37][38][39]. In addition, we found that bevacizumab is taken up by platelets, leading to VEGF neutralization [35]. Since VEGF is an endothelial cell survival factor [2,40,41], we postulated that the resulting disturbed platelet-endothelial cell interaction might be involved in GI perforation, disturbed wound healing and bleeding complications [36]. The platelet-endothelial cell homeostasis may be disturbed by antiangiogenic treatment; the resulting increased leakiness and extravasation of inflammatory cells may cause submucosal inflammation and subsequent ulcer formation. It is of clinical importance to study the underlying biological mechanisms of bevacizumab related GI perforation. In addition, it is expected that these underlying mechanisms and risk factors might apply to antiangiogenic TKI treatment as well. The risk factors of a tumor at the primary site and a recent history of endoscopy or abdominal radiotherapy should be taken into account before treatment initiation with angiogenesis inhibitors.
In ovarian cancer patients it is recommended to consider the number of prior chemotherapy regimens and abdominal surgeries and to exclude tumor involvement of the bowel by physical examination and CT scan before the start of treatment with angiogenesis inhibitors. Endoscopic evaluation is advised in patients with symptoms possibly related to GI ulcer during treatment [27]. In addition, based on this report, rubber band ligation should be avoided unless bevacizumab or TKI treatment is interrupted or terminated. The third case, of GI perforation during combined bevacizumab and TKI treatment, emphasizes a possibly increased perforation risk related to combination treatment with antiangiogenic agents with different biological mechanisms. Although most of the current preclinical and clinical knowledge on the potential underlying mechanisms of angiogenesis inhibitor induced gastrointestinal perforations concerns bevacizumab, the mechanisms described may, based on preclinical and clinical studies, hold true for TKI-induced perforations as well. In conclusion, we would like to advocate including GI perforation in the differential diagnosis when patients complain of (vague) abdominal pain during treatment with TKIs as well as with bevacizumab.
NCPCDA: network consistency projection for circRNA–disease association prediction

A growing body of evidence indicates that circular RNAs (circRNAs) play a pivotal role in various biological processes and have a close association with the initiation and progression of diseases. Moreover, circRNAs are considered as promising biomarkers for disease diagnosis owing to their characteristics of conservation, stability and universality. Inferring disease–circRNA relationships will contribute to the understanding of disease pathology. However, it is costly and laborious to discover novel disease–circRNA interactions by wet-lab experiments, and few computational methods have been devoted to predicting potential circRNAs for diseases. Here, we advance a computational method (NCPCDA) to identify novel circRNA–disease associations based on network consistency projection. For starters, we make use of multi-view similarity data, including circRNA functional similarity, disease semantic similarity, and association profile similarity, to construct the integrated circRNA similarity and disease similarity. Then, we project circRNA space and disease space on the circRNA–disease interaction network, respectively. Finally, we can obtain the predicted circRNA–disease association score matrix by combining the above two space projection scores. Simulation results show that NCPCDA can efficiently infer disease–circRNA relationships with high accuracy, obtaining AUCs of 0.9541 and 0.9201 in leave-one-out cross validation and five-fold cross validation, respectively. Furthermore, case studies also suggest that NCPCDA is promising for discovering new disease–circRNA interactions. The NCPCDA dataset and code, as well as the detailed readme file for our code, can be downloaded from Github (https://github.com/ghli16/NNCPCD).

Introduction

Circular RNAs (circRNAs), a new category of noncoding endogenous RNA molecules, are generated by back-splicing of a single pre-mRNA and have a closed loop structure. 1 For many years, circRNAs were initially thought to be splicing errors. 2 Nonetheless, as high-throughput sequencing technology has developed, circRNAs have been shown to be widespread in various living organisms and have garnered wide attention. [3][4][5][6] Previous studies showed that circRNAs play a part in regulating the expression of genes as they function as microRNA sponges. 7 For instance, Cdr1as has been experimentally verified to work as a miR-7a sponge and to be involved in regulating the expression of SP1 and PARP. 8 Importantly, the expression levels of circRNAs are generally tissue-specific and cell-type-specific. 9 Consequently, circRNA misexpression can lead to abnormal physiological processes and account for the initiation and progression of most diseases. 10 In recent years, an increasing number of circRNAs have been shown to function as tumor suppressors or oncogenes in various cancers. 11,12 For example, Han et al. found that hsa_circ_0007874 inhibits the progression of hepatocellular carcinoma and promotes p21 expression by sponging miR-9. 13 Likewise, hsa_circRNA_000479 serves as a sponge for miR-6809 and miR-4753 to modulate the expression of the oncogene BCL11A, which can promote the proliferation of triple-negative breast cancer cells. 14 CircCCDC66 is found to be correlated with poor prognosis of colorectal carcinoma and is up-regulated in various tumor tissues. 15
High expression of circPVT1 in gastric cancer is closely related to a longer survival rate, suggesting that it is a prognostic marker for the disease. 16 To summarise, both down-regulation and up-regulation of circRNAs in tumor cells shows that they may have the potential to be novel biomarkers and therapeutic targets. However, the current research on disease-circRNA relationships is highly dependent on biological experiments, such as qRT-PCR and circRNA chips, which are time-consuming and costly. In this case, only a limited number of relationships can be discovered. Encouragingly, several manually curated databases of disease-circRNA interactions have become available, such as circRNADisease 17 and CircR2Disease, 18 which both collect experimentally verified associations by reviewing published literature. The establishment of disease-circRNA association datasets could provide an important foundation for predicting potential disease-related circRNAs using computational models. Recently, a lot of effort has gone into mining latent disease-circRNA pairs under the hypothesis that similar circRNAs are likely to have similar association profiles with the same disease. Lei et al. 19 conducted a pioneer study in which they integrated a known disease-circRNA interaction network and multiple similarity networks for circRNAs and diseases into a heterogeneous network and presented a path-weighted method to excavate underlying disease-related circRNAs by counting the accumulative weights from paths with limited lengths in the constructed network. Likewise, Fan et al. 20 devised a KATZ-based model to quantify the association probability for each disease-circRNA pair by counting the number of walks with limited lengths between them on an established heterogeneous network, which was made up of a known disease-circRNA interaction matrix, a disease similarity matrix and a circRNA similarity matrix. Afterwards, Yan et al. 21 designed a semi-supervised model based on Kronecker regularized least squares, which made predictions on a single circRNA-disease space by Kronecker product and capitalized on a preprocessing step to improve predictions for new circRNA nodes and disease nodes. Xiao et al. 22 developed a novel model to recover missing disease-circRNA interactions based on a low-rank approximation algorithm, which effectively combined manifold regularized constraints and produced reliable predictions. Recently, Wei et al. 23 constructed a circRNA-disease association probability matrix based on the neighbor interaction profiles. Specifically, this method prioritized disease-associated circRNAs by applying matrix factorization to the reconstructed association probability matrix. Zhang et al. 24 used a linear neighborhood to reconstruct the disease and circRNA similarity data, and then employed label propagation to measure the relevance between disease nodes and circRNA nodes. In addition, advances in link prediction research in bioinformatics have also provided some valuable insights into the development of disease-circRNA interaction prediction (e.g., synergistic drug combinations, 25 disease-lncRNA, 26,27 disease-miRNA, 28,29 and drug-target interaction prediction). 30 However, because of the incompleteness of the current datasets, it is still a challenge to achieve sufficiently accurate results for the prediction task. In the present study, we advance a network consistency projection method (NCPCDA) for undiscovered circRNA-disease interaction prediction.
In particular, NCPCDA implements a network consistency projection on the integrated circRNA similarity and disease similarity network to score circRNA-disease pairs. Simulation results under leave-one-out cross validation and five-fold cross validation clearly demonstrate that NCPCDA performs better than previous models. Moreover, the case study carried out on lung cancer also suggests that our method is promising for identifying novel prognostic biomarkers.

Materials and methods

Human circRNA-disease associations

The known circRNA-disease association dataset was retrieved from the CircR2Disease database, 18 which contains 739 experimentally confirmed interactions for 100 diseases and 661 circRNAs. After removing redundant entries from different literature and the relationships associated with mice and rats, we finally obtained a dataset consisting of 88 diseases, 585 circRNAs and 650 associations for humans. Formally, let C = {c_1, c_2, ..., c_m} and D = {d_1, d_2, ..., d_n} be the sets of m circRNAs and n diseases in the dataset, respectively. Thus, the binary matrix Y ∈ R^{m×n} of circRNA-disease interactions can be constructed, where Y(i, j) = 1 if circRNA c_i is connected to disease d_j, and 0 otherwise.

Disease semantic similarity

Inspired by the successful application of disease semantic similarity in prioritizing reliable disease-associated ncRNAs, [31][32][33][34][35][36] we also capitalize on this similarity to enhance our predictions. As described in ref. 37, semantic similarities among diseases can be calculated according to their corresponding disease ontology, 38 which is organized as a directed acyclic graph. The disease ontology term for each disease in our analysis is retrieved from http://disease-ontology.org/. For two sets of disease ontology terms, we computed their similarity scores by using the "doSim" function in the DOSE software package. 39 For convenience, we use SS ∈ R^{n×n} to represent the semantic similarity matrix among the n diseases.

CircRNA functional similarity

To quantify the functional similarity between circRNAs, the previous methods used for calculating the functional similarity between lncRNAs or miRNAs are extended. 34,37 According to this previous work, evaluating the semantic similarity of the two disease sets linked with two circRNAs can be used to infer the functional similarity of those circRNAs. In particular, we assume that D_i and D_j are the disease groups associated with circRNA c_i and circRNA c_j, respectively. Denoting FS as the circRNA functional similarity matrix, the similarity between circRNA c_i and circRNA c_j can be computed by the following formulas:

S(d_p, D_j) = max_{d_q ∈ D_j} SS(d_p, d_q),

FS(c_i, c_j) = ( Σ_{d_p ∈ D_i} S(d_p, D_j) + Σ_{d_q ∈ D_j} S(d_q, D_i) ) / ( |D_i| + |D_j| ),

where S(d_p, D_j) is the similarity between disease d_p, related to circRNA c_i, and the disease set D_j related to circRNA c_j. As stated in the previous section, the disease semantic similarity can be calculated based on disease ontology terms. However, we cannot obtain a disease ontology term for every disease, which means we are unable to measure the semantic similarities for diseases without disease ontology terms. Therefore, the association profile similarity is further introduced.

Association profile similarity for circRNAs and diseases

Association profile similarity is an effective topological similarity for diseases and circRNAs. For a specific circRNA c_i, the association profile of c_i is a binary vector extracted from the i-th row of the circRNA-disease interaction matrix Y, i.e. Y(i, :).
Then, according to the Gaussian kernel function, we calculate the similarity between circRNA c_i and circRNA c_j as follows:

KC(c_i, c_j) = exp( −γ_c ‖Y(i, :) − Y(j, :)‖² ),

where γ_c, which is used to control the kernel bandwidth, is computed by normalizing the average number of diseases related to each circRNA:

γ_c = 1 / ( (1/m) Σ_{i=1}^{m} ‖Y(i, :)‖² ).

Similarly, we also define the disease association profile similarity as follows:

KD(d_i, d_j) = exp( −γ_d ‖Y(:, i) − Y(:, j)‖² ),

where Y(:, i) indicates the interaction profile of disease d_i and γ_d is computed similarly to γ_c.

Integrated similarity for circRNAs and diseases

Considering that we cannot obtain circRNA functional similarity for all circRNAs in our dataset, we integrate the functional similarity FS and the association profile similarity KC to construct the circRNA similarity matrix CS. In particular, for a given pair of circRNAs c_i and c_j, the value of CS(i, j) is taken from FS when the functional similarity is available and from KC otherwise:

CS(i, j) = FS(i, j) if c_i and c_j have functional similarity; KC(i, j) otherwise.

Similarly, for diseases, we combine the semantic similarity SS with the association profile similarity KD to obtain the disease similarity matrix DS:

DS(i, j) = SS(i, j) if d_i and d_j have semantic similarity; KD(i, j) otherwise.

NCPCDA method

In this work, we develop a novel computational method, NCPCDA, to identify undiscovered circRNA-disease interactions by using network consistency projection, 40,41 under the assumption that similar circRNAs (or diseases) tend to associate with the same disease (or circRNA). Fig. 1 illustrates the implementation framework of NCPCDA (Fig. 1: the overall workflow of the NCPCDA method), which is built on known circRNA-disease association information and the integrated circRNA and disease similarities. NCPCDA is composed of a disease space projection and a circRNA space projection, i.e. the projections of the disease similarity network and the circRNA similarity network on the circRNA-disease interaction network, respectively. In vector form, the circRNA space projection can be computed as

CSP(i, j) = CS(i, :) × Y(:, j) / |Y(:, j)|,

where CS(i, :), which indicates the similarities between circRNA c_i and all circRNAs, is the i-th row vector of matrix CS; Y(:, j), which encodes the correlations between disease d_j and all circRNAs, is the j-th column of matrix Y; and |Y(:, j)| denotes the norm of vector Y(:, j). The result is the vector projection of CS(i, :) on Y(:, j), denoted CSP(i, j); we use CSP ∈ R^{m×n} to denote the circRNA space projection matrix. According to vector space theory, the projection score CSP(i, j) is positively related to the similarities between circRNA c_i and all circRNAs and to the number of circRNAs associated with disease d_j, while it is negatively related to the angle between CS(i, :) and Y(:, j). In a similar manner, the disease space projection can be presented as

DSP(i, j) = Y(i, :) × DS(:, j) / |Y(i, :)|,

where DS(:, j) and Y(i, :) are the j-th column of the disease similarity matrix DS and the i-th row of the interaction matrix Y, respectively. The result is the vector projection of DS(:, j) on Y(i, :), denoted DSP(i, j); we use DSP ∈ R^{m×n} to represent the disease space projection matrix. Based on network consistency projection theory, the two projection scores CSP and DSP are integrated and normalized by

NCP(i, j) = ( CSP(i, j) + DSP(i, j) ) / ( |CS(i, :)| + |DS(:, j)| ),

where NCP(i, j) is the final predictive score for circRNA c_i and disease d_j. Since i and j range over all rows and columns of matrix NCP, we can simultaneously obtain the relevance of each circRNA-disease pair.
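These steps map directly onto a few lines of linear algebra. The following NumPy sketch is an illustrative reimplementation of the formulas above, not the authors' released code (which is available from their GitHub repository); the random matrix at the end merely shows the expected shapes.

```python
import numpy as np

def gip_kernel(Y, axis):
    """Gaussian interaction profile kernel over rows (axis=0, circRNAs)
    or columns (axis=1, diseases) of the binary association matrix Y."""
    P = Y if axis == 0 else Y.T                    # profiles as rows
    gamma = 1.0 / np.mean(np.sum(P**2, axis=1))    # bandwidth normalization
    sq = np.sum(P**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (P @ P.T)   # squared distances
    return np.exp(-gamma * d2)

def ncp_scores(CS, DS, Y, eps=1e-12):
    """Network consistency projection: combine the circRNA space
    projection CSP and the disease space projection DSP."""
    CSP = (CS @ Y) / (np.linalg.norm(Y, axis=0) + eps)            # / |Y(:, j)|
    DSP = (Y @ DS) / (np.linalg.norm(Y, axis=1)[:, None] + eps)   # / |Y(i, :)|
    denom = (np.linalg.norm(CS, axis=1)[:, None]
             + np.linalg.norm(DS, axis=0)[None, :] + eps)
    return (CSP + DSP) / denom

# Shape check with the dataset dimensions (585 circRNAs, 88 diseases);
# in practice CS and DS would mix FS/SS with the GIP kernels.
Y = (np.random.rand(585, 88) < 0.01).astype(float)
scores = ncp_scores(gip_kernel(Y, 0), gip_kernel(Y, 1), Y)
```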
Evaluation metrics

We used leave-one-out cross validation and five-fold cross validation to investigate the overall prediction performance of NCPCDA. In each leave-one-out cross validation trial, we select a known disease-circRNA association from our dataset in turn as the test sample and treat this selected pair as unknown in our training samples. All other labeled disease-circRNA pairs and the unobserved pairs are taken as the training set and candidate samples, respectively. For five-fold cross validation, all labeled disease-circRNA pairs are partitioned into five parts at random; one part is chosen as the test data and the other four parts as training data in turn. In order to eliminate sampling deviation, we performed ten repetitions of this process. The predictive performance is described by the receiver operating characteristic (ROC) curve, which plots the true positive rate (TPR) against the false positive rate (FPR) over different score thresholds. We then calculate the area under the curve (AUC) and use it as the main metric of prediction accuracy. Given that the association profile similarity and the circRNA functional similarity depend on known disease-circRNA relationships, they are recalculated in each fold.

Comparison with other methods

To comparatively illustrate the superiority of NCPCDA, we compare it with PWCDA, 19 KATZHCDA, 20 DWNN-RLS, 21 and CD-LNLP 24 as state-of-the-art disease-circRNA interaction prediction approaches. All five prediction methods are evaluated on the CircR2Disease dataset under leave-one-out cross validation and five-fold cross validation. Fig. 2 and Fig. 3 show the ROC curves of the different models under leave-one-out and five-fold cross validation, respectively; NCPCDA achieves the best performance, with AUCs of 0.9541 and 0.9201.

Case studies

In order to examine the ability of NCPCDA to prioritize novel circRNA biomarkers for some cancers, we mainly investigated the following two groups of case studies on lung neoplasms. In the first group, we build the NCPCDA model using all known disease-circRNA associated pairs from the CircR2Disease dataset and then verify our predictions against two other databases, circRNADisease and Circ2Disease. 42 Meanwhile, the experimental literature was searched using PubMed for evidence. The top 20 candidate circRNAs for lung cancer are detailed in Table 1, and we confirm four candidates contained in circRNADisease. These four candidate circRNAs, hsa_circ_0043256, hsa_circ_0013958, circHIPK3, and circRNA_100876, are all found to be up-regulated in lung cancer cells, [43][44][45][46] and three of them are also found in Circ2Disease. Besides, we found literature to support nine predicted circRNAs; see the predictions marked as 'PMID' in Table 1. As a result, 13 of the 20 predictions are validated to be associated with this disease. In the second group, we remove all known associated pairs of a given disease from the training samples, build the NCPCDA model, and make predictions for that disease. The top-ranked predictions for lung cancer are listed in Table 2. As the results show, 4 of the top 20 potential circRNAs are known associations in CircR2Disease. Note that there are only six known circRNAs associated with this cancer in our benchmark dataset; thus, the recall rate is 66.67% for the top 20 candidates.
Moreover, the circRNAs circHIPK3 and circZFR are supported by the two aforementioned databases (i.e., circRNADisease and Circ2Disease) or by the literature. In addition, we select all known associated pairs of each disease in turn as test samples and carry out predictions; NCPCDA obtains comparable results, with an AUC of 0.9147. These case studies further demonstrate the applicability of NCPCDA in predicting unobserved disease-circRNA relationships (Table 1 lists the top 20 newly discovered circRNAs for lung cancer predicted by NCPCDA). As shown in Fig. 4, among the 650 true positives, 539 (82.92%) interactions are successfully detected within the top 20 predicted pairs. Additionally, we evaluate the results on the circRNADisease dataset, which collects 332 human disease-circRNA interactions between 40 diseases and 313 circRNAs. As shown in Fig. 5, NCPCDA detects 260 (78.31%) true positives within the top 20 predicted pairs. To demonstrate the robustness of our model, five-fold cross validation was also performed on the circRNADisease dataset. The resulting average AUC of NCPCDA is 0.9367, which is superior to those of three state-of-the-art predictors (KATZHCDA: 20 0.8608; MRLDC: 22 0.8798; CD-LNLP: 24 0.9007). These findings illustrate that NCPCDA is effective in identifying true disease-circRNA associations with high rankings.

Complexity analysis of NCPCDA

The running time of the NCPCDA algorithm is dominated by the computation of the similarity matrices and of the network consistency projection scores. Constructing the circRNA similarity matrix and the disease similarity matrix requires O(m²n) and O(n²m) operations, respectively, where m is the size of the circRNA set and n is the size of the disease set in our dataset. Computing the circRNA space projection matrix and the disease space projection matrix likewise requires O(m²n) and O(n²m). Thus, the computational complexity of the NCPCDA algorithm is O(m²n + n²m).
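The cross-validation protocol described above (hiding the test positives and recomputing the profile-based similarities in each fold) can be sketched as follows, reusing `ncp_scores` from the earlier sketch; `build_similarities` is a hypothetical placeholder for the FS/SS-plus-kernel construction, not a function from the authors' code, and ranking all non-training pairs is one reasonable reading of the protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

def five_fold_auc(Y, build_similarities, seed=0):
    """Five-fold CV over the known positives: each fold's positives are
    masked out of Y before the similarities are recomputed, then all
    non-training pairs are ranked by the NCP score."""
    pos = np.argwhere(Y == 1)
    aucs = []
    for _, test_idx in KFold(n_splits=5, shuffle=True,
                             random_state=seed).split(pos):
        Y_train = Y.copy()
        Y_train[tuple(pos[test_idx].T)] = 0     # hide the test positives
        CS, DS = build_similarities(Y_train)    # recompute per fold
        scores = ncp_scores(CS, DS, Y_train)    # from the sketch above
        mask = Y_train == 0                     # test positives + unknowns
        aucs.append(roc_auc_score(Y[mask], scores[mask]))
    return float(np.mean(aucs))
```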
For starters, the nal integrated score is obtained by averaging the circRNA space projection and the disease space projection, which may result in suboptimal predictions. In addition, as the calculation of circRNA similarity is connected with known circRNA-disease links, NCPCDA fails to infer interactions for new circRNAs that do not have any relationship with diseases. Therefore, integrating different types of circRNA data sources, like circRNA sequence data and miRNA-circRNA association data, may aid in expanding our model to predict new circRNAs and improve prediction accuracy. Conflicts of interest There are no conicts to declare.
Determining the Number of Optimum Servers in The XYZ Restaurant Queue System with Queuing Theory

A queue is a natural phenomenon that occurs when the demand for a service at a certain time exceeds the service capacity at the same time. In this paper, the problem is solved using queuing theory, an analytical tool that is very helpful in solving queuing problems. This theory comprises mathematical studies that produce important information needed in decision making, with the help of forecasting various characteristics of the queue line. The queuing model of this restaurant queue system is (M / G / S). The current queuing system has a system utility level (ρ) of 52.23%, an average customer time in the queue (Wq) of 0.4634 minutes, an average customer time in the system (W) of 2.8074 minutes, an average number of visitors in the queue (Lq) of 0.2104 ≈ 1 person, and an average number of visitors in the system (Ls) of 1.2750 ≈ 1 person. The total waiting time with 2 servers is below the maximum waiting time of 15 minutes and has the greatest system utility level; thus the optimum number of servers is 2, which is the current configuration.

Introduction

A queue is a natural phenomenon that occurs when the demand for a service at a certain time exceeds the service capacity at the same time. Queuing problems are often found in daily life, both in industry and in trade, finance, social services and elsewhere. Situations of waiting for service form a waiting line; these waiting lines are often called queues, because service facilities (servers) are relatively expensive and very limited. For example, in purchasing tickets at a cinema, complaints are often encountered because the queue is too long and customers have to wait for a very long time, while payment counters that could be opened remain closed, and sometimes too many open counters cause employees to be idle. Extremely long queues and waiting too long for service are very annoying. On average, the waiting time depends on the average rate of service [1]. For this reason, the manager of the XYZ restaurant wants to know whether the queuing system at the restaurant is good or needs to be improved. The objective of this paper is to help the manager analyze and determine the number of servers that need to be opened in order to operate more optimally. In this paper we analyze and optimize the queuing system using queuing theory, an analytical tool that is very helpful in solving queuing problems; it comprises mathematical studies that produce important information needed in decision making, with the help of forecasting various characteristics of the queue line. An application of queuing theory was conducted by Dimas Dwi Prayogo, entitled "Analysis of queue systems and optimization of teller service At PT. Bank Sulutgo", which discusses the application of the M/M/S model to the queue system of the main branch of Bank SulutGo. The M/M/S method is used to obtain quantities such as the average number of customers in the system and the average time customers spend in the system. The queue discipline used by the Bank SulutGo main branch is First Come First Serve (FCFS), where customers who come first are served first.
The average number of customers in the queue occurred in the period 12.00-13.00, where the average number of customers waiting in the queue was 1.385 people, or ≈ 1. However, the performance results showed no waiting for the teller, as customers were served immediately; this caused one teller to be idle, while the average standard service level was 5 minutes, i.e., 12 people served per 60 minutes. The optimal number of tellers at the main Bank SulutGo branch is 5, and it can be concluded that the performance of the queuing system at the main branch is optimal [2]. Another application of queuing theory was conducted by Seigha Gumus, entitled "Application of Queuing Theory to a Fast Food Outfit: A Study of Blue Meadows Restaurant". This study evaluated the queuing system in the Blue Meadows restaurant to determine its operating characteristics and to reduce customers' waiting time with queuing theory. The study used an M/M/S model for the queue, with an evaluated arrival rate of 40 customers per hour and a service rate of about 22 customers per hour per server. Two servers were available, with a utilization of 0.909, well above average, which can be considered very effective. The results of this study can be used as a reference to analyze the current system and improve the next one [3]. The next application of queuing theory was conducted by Agustian Suseno, entitled "Analysis of Queue System to Optimize Teller Services in BRI Bank Cibadak Branch, Sukabumi". Due to the increase in the number of customers over time, responsiveness is needed to achieve customer satisfaction. Banking, as a financial service dealing with customers, is closely related to the speed of service provided; whether or not customers are served quickly is sometimes constrained by unpredictable queue times. The purpose of that study was to model the current queue system and compare it with a proposed improvement with the optimal queue size for the number of tellers at the BRI Cibadak branch, Sukabumi Regency, in order to improve service time effectiveness. Comparing installations of 5 tellers and 6 tellers, better results were obtained for the 6-teller system, with an average server utilization of 67%, an average of 0.57 customers in the queue, an average of 4.57 customers in the system, an average time in the queue of 0.01 hours and an average time in the system of 0.11 hours [4]. Another application of queuing theory to a bank was conducted by Cut I. Setiawati, entitled "Counting Teller Quantity For Better Queue In Financial Institution: Case of Bank Central Asia, Metro Indah Mall Branch Office-Bandung". This bank, which initially had 4 tellers, intended to improve its queuing system, because according to the company waiting customers are a disadvantage; by developing the system it was hoped that customer satisfaction could be achieved. The study showed that each teller could serve 10 persons per hour and the teller busy level was 97%. The average time spent by each customer in the system was 63 minutes and in the queue line 57 minutes. The results showed that there should be 7 tellers to obtain an optimal result, meaning that BCA Bank should provide three more tellers per day. Providing more tellers will bring better benefits for the customer [5].
Based on the studies described above, the queuing theory method can be used as a tool to analyze a queuing system, improve it, and determine the optimal number of servers. Therefore, in this study the author uses queuing theory to determine the optimum number of servers in the XYZ restaurant queuing system.

Method
The research approach used in this study is a quantitative approach, in which data are collected directly through observation. The data are arrival times, service times, and the times customers exit the queuing system, observed with a digital clock and recorded on a worksheet over 8 hours; the maximum acceptable customer waiting time was obtained by asking customers directly.

The queuing system consists of a queue line and a service station. Customers who need service come from a source called the calling population and enter the queue system from time to time. Customers arrive at the system and join the queue. At a certain time, one of the members of the queue is chosen to be served, with the selection based on rules called the service discipline. Service is provided to customers through a service mechanism, and after being served the customer leaves the queuing system. Losses experienced by companies can be caused by many things: errors due to humans, machines, raw materials, work methods, or the work environment. Therefore, a method is needed that supports quality improvement, with the aim of avoiding further losses and producing quality products [6].

There are four forms of service discipline commonly used in practice:
• First In First Out (FIFO): the first to arrive is the first to be served.
• Last In First Out (LIFO): the last to arrive is served first.
• Service In Random Order (SIRO): customers are called at random, regardless of who arrived first.
• Priority Service (PS): priority is given to those with higher priority [7].

There are four basic structures of the queuing model that commonly occur in a queuing system:
• Single Channel, Single Phase: there is only one entry line into the service system and only one service facility.
• Single Channel, Multi Phase: there is only one entry line and two or more service facilities in series along the line.
• Multi Channel, Single Phase: there are two or more entry lines into the service system and one service facility in each line.
• Multi Channel, Multi Phase: there are two or more entry lines and also two or more service facilities in series in each path [8].

The data processing steps in this study are as follows:
• Test the distributions of the arrival frequency data, the inter-arrival time data, and the service time data with the EasyFit software.
• Determine the queuing model using Kendall notation, a universal standard that combines the arrival and service processes in the format (a / b / c) : (d / e / f), where a is the arrival distribution, b the service-time distribution, c the number of servers, d the service discipline, e the maximum number of customers allowed in the system, and f the size of the calling population.

Results and Discussion
The type of distribution of the collected data was determined with the EasyFit software. First, the arrival frequency data were grouped into 6-minute intervals. The distribution test results for the arrival frequency can be seen in Table 1.
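EasyFit automates the goodness-of-fit testing. For readers without that software, the same kind of decision, namely whether a Poisson distribution plausibly describes the arrival counts, can be sketched with a chi-square goodness-of-fit test. The counts below are invented placeholders, not the observed data summarized in Table 1:

```python
import numpy as np
from scipy import stats

# Hypothetical arrival counts per 6-minute interval (illustrative only;
# the actual observations are the ones summarized in Table 1).
arrivals = np.array([0, 2, 1, 3, 0, 1, 2, 4, 1, 0, 2, 1, 3, 2, 1, 0, 2, 3, 1, 2])

lam_hat = arrivals.mean()                 # MLE of the Poisson mean
values, observed = np.unique(arrivals, return_counts=True)

# Expected frequencies under Poisson(lam_hat); fold the tail beyond the
# largest observed value into the last cell so expected counts sum to n.
expected = stats.poisson.pmf(values, lam_hat) * arrivals.size
expected[-1] += (1 - stats.poisson.cdf(values[-1], lam_hat)) * arrivals.size

# One parameter (the mean) was estimated from the data, hence ddof=1.
chi2, p = stats.chisquare(observed, expected, ddof=1)
print(f"lambda_hat={lam_hat:.2f}  chi2={chi2:.2f}  p={p:.3f}")
```

In practice, cells with small expected counts should be pooled before relying on the chi-square approximation.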
It can be seen from Table 1 that the closest distribution to the arrival frequency is the Poisson distribution; the corresponding graph is shown in Figure 1.

Figure 1. Poisson graph of the arrival frequency.

Second, the service time data were converted to minutes and sorted, and then their distribution type was tested. The distribution test results for the service time can be seen in Table 2. It can be seen from that table that the closest distribution to the service time is the Weibull distribution; the corresponding graph is shown in Figure 2.

The average time a customer spends in the system (W) is the time from the customer entering the waiting line until the service process is complete, and can be formulated as W = Wq + 1/μ, where Wq is the average time in the queue and 1/μ is the average service time.

The next step is to calculate and compare the current number of servers with alternative configurations of 1 and 3 servers. The results of the calculation can be seen in Table 3. As the table shows, the time a customer spends in the queue with 2 servers is 0.4634 minutes, which is smaller than the customers' maximum waiting time of 15 minutes, and the system utility is 0.5323, which indicates that the system is good enough. Thus the optimum number of servers is 2.

Conclusion
Based on the results and discussion above, it can be concluded that the distribution type of the arrival frequency is the Poisson distribution, the distribution type of the service time is the Weibull distribution, and the queuing model in Kendall notation is (M/G/2):(FCFS/4/∞). The current queuing system has a system utility level (ρ) of 52.23%, an average customer time in the queue (Wq) of 0.4634 minutes, an average customer time in the system (W) of 2.8074 minutes, an average number of visitors in the queue (Lq) of 0.2104 ≈ 1 person, and an average number of visitors in the system (Ls) of 1.2750 ≈ 1 person. The total waiting time with 2 servers is below the maximum waiting time of 15 minutes and gives the greatest system utility level; thus the optimum number of servers is 2.
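The paper does not spell out how the M/G/2 measures in Table 3 were computed. One common route, shown here as a hedged sketch rather than the authors' actual method, is the Allen-Cunneen approximation, which scales the M/M/s queue wait by the service-time variability. The rates below (λ ≈ 0.454/min, μ ≈ 1/2.344 per min) are back-solved from the reported W, Wq, and ρ; the squared coefficient of variation of the fitted Weibull is not reported, so cs2 = 0.2 is a placeholder and the outputs will not exactly match the paper's figures:

```python
import math

def erlang_c_wq(lam, mu, s):
    """Mean queue wait for an M/M/s system (Erlang C formula)."""
    a = lam / mu
    rho = a / s
    if rho >= 1:
        return math.inf               # unstable: the queue grows without bound
    p0 = 1.0 / (sum(a ** n / math.factorial(n) for n in range(s))
                + a ** s / (math.factorial(s) * (1 - rho)))
    lq = p0 * a ** s * rho / (math.factorial(s) * (1 - rho) ** 2)
    return lq / lam

def mgs_metrics(lam, mu, s, cs2):
    """Approximate M/G/s measures via the Allen-Cunneen correction:
    Wq(M/G/s) ~ Wq(M/M/s) * (Ca^2 + Cs^2) / 2, with Ca^2 = 1 for Poisson
    arrivals; cs2 is the squared coefficient of variation of service time."""
    wq = erlang_c_wq(lam, mu, s) * (1 + cs2) / 2
    w = wq + 1 / mu
    return {"rho": lam / (s * mu), "Wq": wq, "W": w,
            "Lq": lam * wq, "Ls": lam * w}

# 1/mu = W - Wq = 2.8074 - 0.4634 ~ 2.344 min; lam = rho * s * mu ~ 0.454/min.
for s in (1, 2, 3):
    print(s, mgs_metrics(lam=0.454, mu=1 / 2.344, s=s, cs2=0.2))
```

The single-server case comes out unstable (ρ > 1, infinite waits), which is consistent with the paper's conclusion that at least two servers are needed.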
Pattern and process in the evolution of the sole dioecious member of Brassicaceae

Background
Lepidium sisymbrioides, a polyploid New Zealand endemic, is the sole dioecious species in Brassicaceae and therefore the closest dioecious relative of the model plant Arabidopsis thaliana. The attractiveness of developing this system for future studies on the genetics of sex determination prompted us to investigate historical and developmental factors surrounding the evolution of its unisexual flowers. Our goal was to determine the evolutionary pattern of polyploidization of L. sisymbrioides and the timing and process of flower reproductive organ abortion. To that end, we used a combination of phylogenetics to place this species within the complex history of polyploidization events in Lepidium and histology to compare its floral ontogeny to that of its closest hermaphroditic relatives and to A. thaliana.

Results
Using a nuclear locus (PISTILLATA), we reconstructed the gene tree among Lepidium taxa and applied a phylogenetic network analysis to identify ancestral genomes that contributed to the evolution of L. sisymbrioides. Combining this phylogenetic framework with cytological and genome size data, we estimated L. sisymbrioides as an allo-octoploid resulting from three hybridization events. Our investigations of flower development showed that unisexual flowers appear to abort reproductive organs by programmed cell death in female flowers and by developmental arrest in male flowers. This selective abortion occurs at the same floral developmental stage in both males and females, corresponding to Arabidopsis stage nine.

Conclusions
Dioecy in Brassicaceae evolved once in L. sisymbrioides following several allopolyploidization events, by a process of selective abortion of reproductive organs at intermediate stages of flower development. Different developmental processes, but similar timing of abortions, affect male versus female flower development. An increased understanding of how and when reproductive organs abort in this species, combined with our estimates of ancestral genome contributions, ploidy and genome size, lays the foundation for future efforts to examine the genetic mechanisms involved in the evolution of unisexual flowers in the closest dioecious relative of the best studied model plant.

Electronic supplementary material
The online version of this article (doi:10.1186/2041-9139-5-42) contains supplementary material, which is available to authorized users.

Background
The family Brassicaceae contains approximately 338 genera and 3,709 species [1,2] and includes the model plant Arabidopsis thaliana, which has a floral morphology representative of the vast majority of the family. Only 5% of genera within Brassicaceae show deviations from the basic floral plan of four sepals, four petals, six stamens (four medial and two lateral), and two fused carpels [3]. One of these genera that diverge from the basic Brassicaceae floral morphology, Lepidium (230 species [1]), is widely distributed in temperate and subtropical areas [4]. Early-diverging lineages in the genus comprise outcrossing diploid species from the Old World that exhibit the basic floral plan of the family, whereas derived lineages tend to be selfing allopolyploids from the New World, Australia, and New Zealand with reduced flowers (that is, fewer stamens and/or reduced petals) [5]. Among the latter, the New Zealand endemic Lepidium sisymbrioides is the only dioecious species in the whole Brassicaceae [4,[6][7][8][9][10]].
Staminate flowers of L. sisymbrioides consist of four to six stamens and a reduced ovary [10,11], whereas carpellate flowers have three to seven staminodes and a functional pistil with style and stigma [10]. Nonfunctional reproductive organs of unisexual flowers of L. sisymbrioides have been loosely described as 'abortive' [8,11], but the exact timing and process of the abortions remain unknown. Incongruence between phylogenetic trees using nuclear versus chloroplast DNA regions suggests reticulate evolution within the genus [12]. It appears that hybridization, followed by whole genome duplication, resulted in predominantly allopolyploid Lepidium species in the Americas, Australia, and New Zealand [5]. These hybridization events may have contributed to the reduced stamens and petals observed in these species [5], in which case dioecy could represent another example of organ reduction. In angiosperms, dioecy often follows polyploidization, presumably due to chromosomal rearrangements that facilitate the evolution of sex chromosomes or the breakdown of gametophytic self-incompatibility, followed by inbreeding depression [13][14][15]. Correlations between island habitat and dioecy are also common, through selection for outcrossing in small, colonizing hermaphroditic populations [16]. In fact, New Zealand taxa in general show a higher incidence of gender dimorphism compared to their continental sister taxa [17,18]. Combined evidence for reticulate evolution and polyploidy in New Zealand Lepidium [5,12] suggests that L. sisymbrioides may also be an allopolyploid. We therefore hypothesize that this species represents another case of dioecy evolving in an island species, following hybridization and polyploidization.

The lineages containing Lepidium and Arabidopsis diverged from each other relatively recently, approximately 35 million years ago (mya) [19]. Lepidium sisymbrioides, therefore, offers the potential to uncover the genetic mechanisms involved in the evolution of unisexual flowers by being the closest dioecious relative to the most thoroughly investigated model plant. Determining the developmental stage and process of reproductive organ abortion should facilitate the identification of candidate genes involved in the evolution of unisexual flowers in this species, as genes involved in sporogenesis and gametogenesis have been identified in Arabidopsis [20,21]. Six developmental processes leading to reproductive organ abortion in unisexual flowers are recognized: cell death, programmed cell death, parenchymatization, arrest of development, change in timing of otherwise normal developmental events, and inviable pollen [22]. Identifying which of these processes contributes to the development of male and female flowers in L. sisymbrioides, as well as estimating this species' genome size and ploidy history, will facilitate future efforts to uncover the genetic mechanisms involved in the evolution of dioecy.

The overall goal of this study was to investigate the pattern of polyploidization and the developmental processes underlying the evolution of separate sexes in L. sisymbrioides, the sole dioecious member of Brassicaceae. To that end, we 1) identified ancestral genomes within L. sisymbrioides and close relatives, 2) estimated ploidy and genome size for L. sisymbrioides and close relatives, and 3) investigated the timing and process of organ abortion by comparing its floral development to that of its close hermaphroditic relatives and to Arabidopsis thaliana.
Sampling methods
Three subspecies of L. sisymbrioides were originally recognized: kawarau (Petrie) Thell., matau (Petrie) Thell., and sisymbrioides. All have been listed as nationally endangered because of a steep reduction in their distribution and abundance [23] and are difficult to sample. We sampled L. sisymbrioides subsp. sisymbrioides only because, despite reported habitat and morphological differences, all three subspecies are closely related [10]. In addition to published DNA sequences of Lepidium available from GenBank (Appendix 1), we sampled L. sisymbrioides subsp. sisymbrioides and three hermaphroditic close relatives endemic to New Zealand, L. kirkii, L. naufragorum and L. tenuicaule [9,10,12,24,25]. We sampled L. kirkii from an herbarium specimen (Appendix 1) and the remaining three species from cultivated accessions at the University of Washington (UW) greenhouse from wild-collected seed provided by P. Heenan (Landcare Research, Lincoln, New Zealand). Voucher specimens are listed in Appendix 1.

Molecular methods
Because we were primarily interested in the polyploidization history of L. sisymbrioides and its close relatives (L. kirkii, L. naufragorum, and L. tenuicaule), we investigated reticulation events using the single-copy nuclear gene PISTILLATA (PI). PI had been previously used to detect reticulation among other Lepidium taxa [5] and therefore sequences were readily available (Appendix 1). Genomic DNA was extracted from one to two accessions for each of our four study species using the FastDNA Kit (MP Biomedicals, Solon, OH, USA) for cultivated accessions or following the protocol of Hughey et al. [26] for the herbarium specimen. We amplified and sequenced the first intron of PI using PI-ITF and PI-ITR primers [5]. Polymerase chain reaction conditions were 95°C for 2 min, followed by 35 cycles of 94°C for 30 s, 60°C for 1 min, and 72°C for 1 min, with a final extension step at 72°C for 5 min. Amplified DNA was purified using ExoSAP-IT (USB Corporation, Cleveland, OH, USA). To distinguish among allelic variants, PCR products were cloned into the pCRII or pCR2.1 vector using the TA Cloning Kit (Invitrogen Corporation, Carlsbad, CA, USA). Plasmids were extracted using the FastPlasmid Mini Kit (5 Prime Inc., Gaithersburg, MD, USA). Three to 27 positive clones were sequenced per accession (UW Biochemistry DNA Sequencing Facility or GENEWIZ, Seattle, WA, USA) for a total of 20 to 35 clones per taxon.

Phylogenetic analyses
PISTILLATA sequences were edited in Sequencher 4.9 (Gene Codes Corporation, Ann Arbor, MI, USA). We incorporated the sequences we generated from our four study species, plus available sequences from other taxa in GenBank, into the entire alignment of the PI first intron provided by J. L. Bowman [5]. We then aligned all sequences manually using MacClade 4.08 [27]. Ambiguously aligned regions were excluded from subsequent analyses. Our data set is available through TreeBASE (http://purl.org/phylo/treebase/phylows/study/TB2:S11886). In order to detect whether PCR-mediated recombination had occurred among the PI copies within a species, we checked for recombination using RDP4 [28]. The PI alignment was analyzed by the automated exploratory recombination analysis, which employs eight different recombination methods: RDP [29], BootScan [30], GENECONV [31], MaxChi [32], Chimaera [33], SiScan [34], 3Seq [35], and LARD [36]. Analyses were run under the default general settings but as linear sequences and disentangling overlapping signals.
The default settings for each method were used except for the following models: Felsenstein 1984 for BootScan and reversible process for LARD. Five recombinant sequences were identified and removed from the PI alignment before subsequent phylogenetic analyses.

We reconstructed the phylogeny for the Lepidium PI data set using Bayesian and likelihood analyses. For taxa that had multiple clonal PI sequences, we chose one clonal sequence from each monophyletic group of sequences representing a given taxon that was recovered in a 50% majority rule consensus tree from preliminary Bayesian analyses of all clones and that was representative of the majority of clones from the group. All other clonal sequences not forming monophyletic groups with other clones from the same taxon were included in the final analyses (Appendix 1). For Bayesian and likelihood analyses, the model of evolution for the PI data set was determined by jModelTest 2.1 [37,38]. The model selected under the Akaike Information Criterion [39] was TVM + I + Γ. We specified L. phlebopetalum and L. perfoliatum as outgroups, as previously identified in various studies [5,9,25,40]. Bayesian analyses were conducted in MrBayes 3.2.2 [41,42] via the CIPRES Science Gateway 3.3 [43]. We used default priors of no prior knowledge for the parameters of the model. Bayesian analyses were conducted with three independent Markov Chain Monte Carlo [44] analyses of 10 million generations each. Metropolis coupling for each analysis was conducted under the default settings. Convergence was determined when the average standard deviation of split frequencies remained less than 0.01. The first 10% of trees was discarded as burn-in. The remaining trees from each run were pooled to construct a 50% majority rule consensus tree to obtain posterior probabilities (pp), visualized with FigTree 1.4 [45]. Likelihood analyses were conducted in GARLI 2.0 [46]. Analyses were run under the default settings and included five search replicates to determine the maximum likelihood tree. To assess the reliability of clades in the resulting likelihood tree, we conducted 1,000 nonparametric bootstrap (bs) replicates [47] in GARLI. Bootstrap replicates were conducted under the above settings, but included one search replicate and 10,000 generations as the first part of the termination condition. Bootstrap trees were summarized with SumTrees 3.3.1 [48] and visualized with FigTree.

Since multiple PI copies in polyploid Lepidium taxa had been previously ascribed to allopolyploidy [5], we wanted to identify potential hybridization events leading to the evolution of dioecy in L. sisymbrioides. To facilitate visualization of potential ancestral genomes that contributed to the evolution of L. sisymbrioides and its close relatives (L. kirkii, L. naufragorum, L. tenuicaule), we conducted network analyses using the 50% majority rule consensus tree from Bayesian analyses of the PI data set as a multilabeled tree (MUL tree). The MUL tree was imported into Dendroscope 3.2.10 [49] and transformed into a phylogenetic network using the Huber et al. [50] algorithm, which minimizes the number of hybridization nodes.

Cytology
Chromosome counts were obtained from pollen mother cells (PMCs) from freshly collected floral buds from one to two cultivated accessions of L. sisymbrioides and L. tenuicaule using a modified protocol of Kato [51]. Floral buds were treated according to the protocol of Matsushita et al. [52] and Wright et al.
[53], modified with an N2O treatment for 3 hours at 206 PSI and an enzyme digestion for 3 hours. PMCs were mounted in VECTASHIELD with DAPI (Vector Laboratories, Burlingame, CA, USA), observed and photographed using a Nikon Microphot-FX microscope (Nikon Instruments, Inc., Melville, NY, USA) and a Retiga 1300 monochrome camera (QImaging, Surrey, British Columbia, Canada).

Genome size estimation
Three accessions from three species cultivated at the UW greenhouse (L. naufragorum, L. sisymbrioides, and L. tenuicaule) were analyzed to obtain relative holoploid genome size, expressed as a 1C-value as defined in [54]. Nuclei were extracted from fresh leaf tissue and combined with chicken erythrocyte nuclei (CEN singlets, BioSure, Grass Valley, CA, USA) before staining with propidium iodide and analyzed with a flow cytometer, as outlined in Davison et al. [55]. CEN, with a 1C-value of 1223 Mbp or 1.25 pg [56], were used as an internal calibration standard. Animal standards have been discouraged by some authors for plant studies because (1) they cannot account for the huge range of plant genome sizes, (2) their nuclear structure may be different from plant nuclei, and (3) their precise genome size is unknown [57]. However, for our purposes, the genome size of CEN falls within the range of Lepidium genome size estimates. Additionally, our goal was to produce relative holoploid genome size estimates, since absolute estimates are not feasible due to the lack of complete genome coverage in most model taxa because of repetitive regions in the genome [55,57]. Samples were analyzed on a FACScan flow cytometer (Becton, Dickinson and Company, Franklin Lakes, NJ, USA) with FlowJo software (Tree Star, Ashland, OR, USA) at the UW Department of Immunology Cell Analysis Facility. The 2C median nuclear peak of propidium iodide fluorescence in Lepidium samples was compared to that of the CEN standard to estimate the 2C nuclear DNA content of Lepidium in Mbp, then converted to pg using the equation from Dolezel et al. [58].

Histology
The relative timing of floral organ initiation and growth in Lepidium taxa had been previously shown to be comparable to that of its model Brassicaceae relatives (that is, Arabidopsis thaliana and Brassica napus), with the exception of petal growth and stamen number [59,60]. Therefore, floral developmental stages for our three Lepidium study species were designated by the 13 characterized stages of the closely related model Arabidopsis thaliana [20,61,62]. For histological observations, inflorescences were fixed in FAA, then dehydrated through an ethanol series ending in CitriSolv (Fisher Scientific, Kent, WA, USA), embedded in Paraplast Plus (McCormick Scientific, LLC, St. Louis, MO, USA), and sectioned (5 or 8 μm) according to the protocol of Kramer [63]. Slides were deparaffinized with CitriSolv, hydrated through an ethanol series, stained in 1% Safranin O for 24 hr [64] and counterstained with 0.5% Fast Green FCF for 30 sec, or stained in 0.05% Toluidine Blue O in dH2O for 1 to 2 min, and dehydrated through an ethanol series ending in CitriSolv.
Histological sections were mounted in Cytoseal™ 60 (Richard-Allan Scientific, Kalamazoo, MI, USA) and observed using a Leica TCS SP5 II laser scanning confocal microscope (Leica Microsystems Inc., Buffalo Grove, IL, USA) with an excitation of 488 nm and an emission of 500 to 560 nm for Safranin O and an excitation of 561 nm and an emission of 625 to 690 nm for Fast Green FCF, or using a Leitz Orthoplan 2 microscope (Ernst Leitz, Midland, Ontario, Canada) and photographed with a MicroPublisher 3.3 Real-Time Viewing camera (QImaging, Surrey, British Columbia, Canada).

TUNEL assays
We conducted TUNEL assays to determine whether programmed cell death (PCD) was occurring in aborted stamens from L. sisymbrioides female flowers. Paraffin-embedded tissue sections were prepared as outlined above from female L. sisymbrioides and, for comparison, male L. sisymbrioides and hermaphroditic L. tenuicaule. We used the DeadEnd Fluorometric TUNEL System (Promega Corporation, Madison, WI, USA) according to the manufacturer's instructions, including positive controls, and washed slides in PBS containing 0.1% Triton® X-100 and 5 mg/ml of BSA after terminating reactions to reduce background, as recommended. Slides were mounted in VECTASHIELD with DAPI, except for negative control slides that were untreated and mounted in Cytoseal™ 60. Slides were observed using a Leica TCS SP5 II laser scanning confocal microscope using an excitation of 405 nm and an emission of 430 to 550 nm for DAPI and an excitation of 488 nm and an emission of 500 to 535 nm for fluorescein.

Results
PISTILLATA gene duplication history suggests allopolyploidy in Lepidium sisymbrioides and relatives
In order to identify hybridization events leading to the evolution of the dioecious species L. sisymbrioides and its closest hermaphroditic relatives, L. kirkii, L. naufragorum and L. tenuicaule, we amplified and sequenced the first intron of the single-copy nuclear gene PI from these species and aligned them to other Lepidium sequences available in GenBank or unpublished (provided by J. L. Bowman; Appendix 1). Phylogenetic analyses recovered five strongly supported clades (A1-D; pp ≥0.99, bs ≥94%; Figure 1) representing five major copies of the PI intron from American, Australian, and New Zealand (AANZ) taxa. Multiple copies of the PI intron within a taxon were previously suggested as representing multiple genomes from allopolyploid hybridization [5]. Clades A1 and A2 were denoted here because they had been previously recognized as a single clade 'A', but with low support [5]. In our analysis, these two clades are strongly supported as distinct (pp = 1.00, bs = 100%) and indicative of two separate genomes, as evidenced by sequences from our four study species in both clades (Figure 1, colored dots). Therefore, we found at least four distinct copies of the PI intron in L. kirkii, L. naufragorum, L. sisymbrioides, and L. tenuicaule (Figure 1). In addition, multiple sequences of L. kirkii, L. naufragorum, L. pseudohyssopifolium, and/or L. tenuicaule within clades A1, C, and D (Figure 1) suggest that hybridization, gene duplication and/or allelic divergence are at play. None of the New Zealand taxa studied fell into the fifth clade B, which consists entirely of American taxa. Our results therefore suggest that all four New Zealand species are allopolyploids (and potentially allo-octoploids at minimum), originating from at least four divergent genomes (represented by clades A1, A2, C, and D).
We further used the Bayesian majority rule consensus tree from the PI data set (Figure 1) to estimate a phylogenetic network to aid in the identification of hybridization nodes and potential ancestral genomes contributing to our study species (Figure 2).

Figure 1: Four PI clades previously identified from a number of taxa within the genus are denoted A-D (after [5]). Clade 'A' is not monophyletic in our study, and the new clades identified by our study are denoted A1 and A2. Sequences from Lepidium taxa that were generated in our study are in bold and indicated by a colored dot: L. kirkii (yellow), L. naufragorum (purple), L. sisymbrioides (red), L. tenuicaule (green). Four PI copies from L. sisymbrioides are indicated by arrows. Posterior probabilities ≥0.90 and maximum likelihood bootstrap values >50% are displayed above and below branches, respectively.

According to the network, our sampling includes 21 allopolyploid Lepidium taxa (Figure 2, tree branches originating from curved lines), which are confirmed polyploids from the literature [5] and this study (L. sisymbrioides, L. tenuicaule). The remaining 31 taxa in our analyses do not show evidence of reticulation, and this may be due to diploidy, autopolyploidy, or gene loss. The evidence suggests that Lepidium sisymbrioides is derived from four distinct ancestral genomes (Figure 2, red lines): (1) a hybrid between (a) a descendant from the common ancestor of the L. monoplocoides group and (b) the common ancestor of the group that includes L. pseudotasmanicum and L. hyssopifolium (strong support), (2) a descendant from the common ancestor of the L. vesicarium group (low support), and (3) a descendant from the common ancestor of the L. dictyotum and L. quitense group (strong support). Biogeographically, the contribution of these genomes to L. sisymbrioides implies hybridization among Australian and New Zealand (ANZ) taxa (1a and 1b, above), followed by hybridization with American (3) and potentially (with low support) Asian (2) species. The other three New Zealand study species, L. kirkii, L. naufragorum, and L. tenuicaule, show contributions from four, five, and four distinct ancestral genomes, respectively (Figure 2). Of these three close hermaphroditic relatives, L. sisymbrioides shares the most reticulation history with L. kirkii, followed by L. tenuicaule, then L. naufragorum (Figure 2).

Figure 2: Ancestral genomes contributing to our study species L. kirkii, L. naufragorum, L. sisymbrioides, and L. tenuicaule are indicated in colors matching Figure 1: yellow, purple, red, and green lines, respectively. Four distinct ancestral genomes (red) contributing to L. sisymbrioides are numbered on the right (1a, 1b, 2 and 3) with their respective biogeographical origin: Am, America; As, Asia; Au, Australia; ANZ, Australia/New Zealand.

Cytological observations reveal octoploidy in dioecious L. sisymbrioides and its hermaphroditic relative L. tenuicaule
In order to confirm our results from the PI data set, we conducted chromosome counts in PMCs. Seed of L. kirkii was not available, so its chromosome number remains unknown. Both L. sisymbrioides and L. tenuicaule had 2n = 64 chromosomes (Figure 3), corresponding to a ploidy of 8x (x = 8 is the base chromosome number for the genus [2,4]). Octoploidy in these two species is consistent with having four distinct copies of the PI intron, as shown by our phylogenetic and network analyses (A1, A2, C, D; Figures 1, 2).
These four copies would therefore represent four distinct diploid genomes in these species' history of hybridization and polyploidization events.

Genome size estimations confirm ploidy estimates in Lepidium sisymbrioides and relatives
We examined holoploid genome size by calculating 1C-values for three of our New Zealand Lepidium study species to confirm our estimates of ploidy and to inform future genomic sequencing plans. Lepidium sisymbrioides and L. tenuicaule had similar holoploid genome sizes of 0.63 and 0.66 pg, respectively, whereas L. naufragorum's holoploid genome size was slightly over double that of the other two species at 1.41 pg (Table 1). The material from which the holoploid genome size of L. naufragorum was obtained had published chromosome counts from the same population, indicating a ploidy of approximately 18x [65]. In conclusion, our holoploid genome size estimations are consistent with L. sisymbrioides and L. tenuicaule both being 8x and with L. naufragorum being 18x, more than double the ploidy of the former two species.

Female and male flowers of Lepidium sisymbrioides abort reproductive organs at comparable developmental stages but due to different processes
In order to assess the developmental stage and process of abortion of reproductive organs in L. sisymbrioides, we examined floral morphology and ontogeny of this species in comparison to the two closest hermaphroditic relatives available, L. naufragorum and L. tenuicaule. Since floral developmental stages of hermaphroditic Lepidium species are comparable to the A. thaliana ontogenetic staging [20,61,62], we cross-referenced to this system for convenience and reproducibility. Flower morphology of L. naufragorum and L. tenuicaule differed from A. thaliana in petal size, number and arrangement of stamens, and ovule number. Lepidium naufragorum flowers had petals approximately as long as sepals and two lateral and two medial stamens (Figure 4A), whereas L. tenuicaule flowers had highly reduced petals, unnoticeable to the naked eye, and four medial stamens (Figure 4B). All Lepidium taxa produced a single ovule per locule. In contrast to L. naufragorum and L. tenuicaule, both male and female flowers of L. sisymbrioides generally exhibited six stamens (two lateral and four medial; Figure 4C-F), with a few exceptions where only four medial (Figure 5O) or five stamens were found [see Additional file 1]. Petals were reduced (that is, shorter than sepals; Figure 4C-F, arrowheads), and four to six nectaries were present among the stamen filaments in both sexes (Figure 4D, F, asterisks; [see Additional file 1]). In staminate flowers, the gynoecium arrested its development at intermediate stages, after differentiation of the anther locules (Figure 4C), and remained as a pistillode while stamens expanded normally (Figure 4D), as in L. naufragorum and L. tenuicaule (Figure 4A-B). In young carpellate flowers, stamens and carpels looked normal (Figure 4E). In later stages, however, stamen development was visibly arrested, resulting in staminodia, whereas the gynoecium developed normally (Figure 4F), as in L. naufragorum and L. tenuicaule (Figure 4A-B). From these morphological observations, both carpels and stamens, of male and female flowers of L. sisymbrioides respectively, appeared to abort at intermediate stages of flower development.
SEM of flower development in all three species showed flower meristems that initiated from the inflorescence meristem in a similar fashion to Arabidopsis (Figure 5A-B, D, Arabidopsis stages 1 to 2). As expected, sepal primordia developed first (Figure 5A-D, stages 3 to 4), followed by petals (Figure 5B), then presumably stamen and gynoecium primordia. Stamen filaments and anther locules differentiated within the androecium, and the gynoecium developed as a tube through postgenital fusion of two carpels (Figure 5E-F, stages 7 to 8). Subsequently, L. naufragorum started to show more petal expansion than the other two species (compare Figure 5E-H). The gynoecial tube then closed at completion of postgenital fusion and began to differentiate a stigma with papillae (Figure 5I-J, L, stage 11). In staminate flowers of L. sisymbrioides, after filaments and anther locules of the androecium had differentiated from one another, the gynoecium was arrested in its development (Figure 5K). The carpels of functional gynoecia expanded laterally, elongating and reaching full maturity with a clearly differentiated style and stigma (Figure 5M-N, P, stage 12). In L. sisymbrioides staminate flowers, the gynoecium remained aborted at maturity (Figure 5O) in comparison to the functional gynoecia described above. Stamen filaments continued to elongate (Figure 5M-O, stage 12), except in carpellate flowers of L. sisymbrioides, where they remained much shorter than the gynoecium (Figure 5P). In L. naufragorum, the only species with noticeable petals when mature, petals continued to expand, reaching the length of stamens (Figure 5M). In the other two species, petal primordia were initiated (Figure 5F-H) but never expanded, remaining small throughout development (Figure 5J-L) and not visible at maturity (Figure 5N-P).

Histological sections were made to further investigate the anatomical development of stamens and carpels. Lepidium naufragorum [see Additional file 2] and L. tenuicaule revealed comparable reproductive organ development with no evidence of loss of organ function; therefore, only data from L. tenuicaule are compared here against L. sisymbrioides (Figure 6). In hermaphroditic flowers of L. tenuicaule, after stamen filaments and anther locules differentiated (Figure 6A), anthers consisted of PMCs, tapetum, and two outer anther wall layers (middle layer and endothecium; Figure 6D, J, stage 9), and ovules began to develop in gynoecia (Figure 6E, K, stage 9). After meiosis of PMCs, anther wall layers degenerated, microspores underwent mitosis, and integuments enclosed the ovule [see Additional file 2, stage 12]. Subsequently, the androecium and gynoecium matured (Figure 6P). At this stage, the stamen filaments elongated (Figure 6P) and pollen sacs were composed of a single endothecium layer with secondary wall thickenings (Figure 6S, Y, stage 13). Pollen grains could be visualized with evident exine, and the tapetum had degraded (Figure 6S, Y, stage 13). By this stage, the gynoecium had fused, elongated, and differentiated a style and stigma (Figure 6P, stage 13), and ovules had differentiated within each carpel (Figure 6T, Z1). Apical ovules consisted of an elongated funiculus and an embryo sac, surrounded by the nucellus and two integuments (Figure 6T, Z1, stage 13). In staminate flowers of L.
sisymbrioides, histological sections revealed that after initiation of the gynoecium (Figure 6B), sporogenous tissue (PMCs) was present in stamen locules (Figure 6F, L) and ovules had been initiated (Figure 6G, M, [see Additional file 2, stage 9]). However, at later stages (Figure 6Q, stage 12; [see Additional file 2, stage 11]), as microspores matured within the anthers and the tapetum degenerated (Figure 6Z2), the gynoecium failed to elongate and differentiate a style and stigma, and ovules did not grow nor differentiate (Figure 6U-V, Z3, [see Additional file 2]). Since the gynoecium arrest occurs before microsporogenesis (the production of tetrads from PMCs, stage 9), which normally precedes megasporogenesis (stage 11) in Arabidopsis, we conclude that the process for the loss of gynoecium function in male flowers is the arrest of development at a pre-meiotic, intermediate stage (stage 9).

In young carpellate flowers of L. sisymbrioides (Figure 6C), sporogenous tissue (PMCs) developed inside the stamen locules (Figure 6H, N), as in hermaphroditic flowers at the same stage (Figure 6A, D-E, J-K, stage 9). By the time the gynoecium closed and a stigma and ovule began to differentiate (Figure 6R, Z5, [see Additional file 2, stage 11]), vacuolated cells pervaded anthers and PMCs had degenerated (Figure 6W-X, Z4). Stamen filaments did not elongate, and neither tetrads, microspores, nor pollen were produced; pollen sacs appeared shrunken, filled with vacuolated cells, and no endothecium layer developed (Figure 6W-X, Z4). Based on the above observations, we propose that the developmental process for loss of androecium function in female flowers is likely cell death, as evidenced by vacuolated cells (absence of stained cytoplasm) and nuclear degradation (Figure 6W-X, Z4) following the development of PMCs (Figure 6H, N). In conclusion, androecium abortion in female flowers occurs at a pre-meiotic stage comparable to that of gynoecium abortion in male flowers (stage 9), but due to different processes, that is, cell death versus developmental arrest, respectively.

Programmed cell death in anther walls is involved in the degradation of pollen mother cells of female L. sisymbrioides flowers
Because we were finding evidence of cell death in stamens of female L. sisymbrioides, we wanted to determine whether PCD could be responsible for this abortion of stamens. To look for evidence of PCD, as characterized by DNA fragmentation, we conducted TUNEL assays on histological sections of carpellate L. sisymbrioides and, for comparison, staminate L. sisymbrioides and L. tenuicaule flowers. The TUNEL assay attaches fluorescein to fragmented DNA, eliciting a green fluorescent signal in nuclei undergoing DNA degradation. Lepidium tissue autofluoresced in the absence of staining under both DAPI and fluorescein excitation and emission ranges: cell walls, chloroplasts, nuclei, and pollen grains showed background signal (compare Figure 8A-B to C-D, G-H to I-J, and M-N to O-P). This autofluorescence contributed additional histological evidence that cell death was occurring in stamens of female L. sisymbrioides, as evidenced by the absence or degradation of cell walls, nuclei, and pollen in the center of anther locules, where sporogenous tissue leading to pollen normally develops (compare Figure 8I-J to C-F and O-R). In spite of this autofluorescence, the use of negative and positive controls allowed us to observe strong, above-background fluorescein signal in certain tissues at certain stages, indicating DNA degradation.
For example, all nuclei in the endothecium of mature, functional anther sacs of L. tenuicaule at stage 13 showed a strong, above-background fluorescein signal indicating PCD (compare Figure 8B to D; red arrow denotes one exemplary nucleus). More importantly, we observed strong, above-background fluorescein signal in all nuclei throughout all anther wall layers of mature anthers from female L. sisymbrioides (compare Figure 8H to J; red arrows denote exemplary nuclei from each layer). This was taken as evidence that these nuclei are undergoing DNA degradation, as observed in positive controls (treated with DNase), which showed even stronger signal (compare Figure 8B to L; red arrow denotes one exemplary nucleus). When comparing anthers from female L. sisymbrioides that abort at stage 9 to functional anthers from L. tenuicaule and male L. sisymbrioides at the same stage (compare Figure 8I-J to E-F and Q-R), it appeared that PCD in anther wall layers was contributing to the degradation of PMCs in carpellate L. sisymbrioides flowers, in which tapetum, endothecium, and tetrads do not develop as in functional anthers from L. tenuicaule and male L. sisymbrioides at the same stage. In summary, using the TUNEL assay as a proxy for PCD, we find evidence for PCD in the anther wall layers of carpellate L. sisymbrioides flowers, which likely contributes to the degradation of PMCs and abortion of anthers at stage 9.

Discussion
Lepidium sisymbrioides is the sole dioecious member of Brassicaceae, and our phylogenetic analyses show that it is closely related to three other New Zealand hermaphroditic species: L. kirkii, L. naufragorum and L. tenuicaule (Figures 1, 2). Increased phylogenetic sampling of the PI first intron among AANZ taxa allows us to identify reticulation events leading to L. sisymbrioides, which resulted from three past hybridization events (Figure 2). Of the three close relatives, the shared reticulation history with L. kirkii is a novel finding. Molecular, cytological, and genome size analyses provide evidence that L. sisymbrioides is an allo-octoploid (Figures 1, 2 and 3, Table 1), with 64 chromosomes and an average holoploid genome size (1C-value) of 621 Mbp. By comparing the floral ontogeny of unisexual flowers in L. sisymbrioides to that of its close relatives and to Arabidopsis thaliana, we show that unisexual flowers in this species arose from selective abortion of reproductive organs at a similar floral developmental stage (Figure 7, stage 9) but by different processes in males and females. Differential abortion of the gynoecium in males appears to result from developmental arrest, while in females anther sterility results from programmed cell death (Figures 6, 8).

Evolution of dioecy and unisexual flowers within Brassicaceae
The evolution of dioecy in Brassicaceae occurred only once, in the genus Lepidium. We infer that in L. sisymbrioides, dioecy evolved from hermaphroditism via selective abortion of reproductive organs, as in type I flowers [66], since all other members of Lepidium are hermaphroditic. Our floral ontogeny observations confirmed that reproductive organs are initiated and differentially aborted at the same floral developmental stage in male and female flowers of L. sisymbrioides. The timing of abortion corresponds to Arabidopsis stage 9 (Figures 4, 5, 6 and 7), which is broadly considered an 'intermediate' stage of floral development [22], after primordia initiation but before meiosis. Our ontogeny shows that L.
sisymbrioides is representative of the majority of angiosperms with type I unisexual flowers, which selectively abort reproductive organs at significantly correlated developmental stages between the two sexes [22]. This evidence suggests that similar regulatory switch points underlie male and female developmental pathways, as proposed by Diggle et al. [22], and that comparable selective forces are at play in the two sexes. However, the developmental stage and process of reproductive organ abortion in unisexual flowers across angiosperms vary widely, with different stages and processes occurring at equal frequencies [22]. With regard to the developmental process of organ abortion in L. sisymbrioides, while sporogenous tissue (PMCs) in stamen locules differentiates (Figure 6C, H, N), it quickly degenerates during development of carpellate flowers and becomes vacuolated with degraded nuclei (Figure 6R, W-X, Z4). Programmed cell death, which is involved throughout normal flower development [67], is primarily due to endogenous factors and is evidenced by cell death at a predictable time and location during tissue differentiation [68]. During normal flower development, the tapetum degenerates during microgametogenesis via PCD for proper microspore development and differentiation of pollen [67]. Other studies have shown that premature tapetal degeneration can lead to male sterility [reviewed in 67]. Therefore, because we observe PCD in anther wall layers before microgametogenesis, this premature tapetal degeneration is likely leading to male sterility in L. sisymbrioides females (Figures 6R, W-X, Z4, 8I-J, [see Additional file 2]).

Two types of PCD occur in plants: autolytic and non-autolytic. The former generally occurs during normal plant development, whereas the latter occurs during plant-pathogen interactions [69,70]. Moreover, since loss of cell walls and cytoplasm, nuclear condensation, and an increase in vacuolar volume are characteristic of autolytic PCD [70], this type of cell death is also likely involved in the degeneration of PMCs in L. sisymbrioides females (Figures 6R, 8I-J, [see Additional file 2]).

Figure 7: Model for flower development in Lepidium study species, with proposed timing of reproductive organ abortion in dioecious L. sisymbrioides. The model is informed by our SEM and histological observations (Figures 5, 6) and follows Arabidopsis thaliana stages, noted on the right [20,61,62]. The entire flower is shown in stages 1 to 6; only one stamen (left) and one carpel of the bicarpellate gynoecium (right) are shown in stages 7 to 13. A filament differentiates within the stamen in stage 8 and elongates in stage 9. ⊢ marks the stage of abortion for female (left) and male (right) flowers of L. sisymbrioides.

In male flowers of L. sisymbrioides, on the other hand, the development of ovules and gynoecia is arrested shortly after initiation of ovule primordia. We found no evidence of cell death, parenchymatization, or change in timing of otherwise normal developmental events in arrested gynoecia. Ovule primordia remain evident in mature male flowers (Figure 6Q, U-V, Z3, Additional file 2). Therefore, of the six developmental processes reviewed in Diggle et al. [22], arrest of development best characterizes the abortion of the gynoecium in L. sisymbrioides males.

Whole genome duplication events via hybridization in the evolution of dioecy in Lepidium
Two different copies of the PI first intron were previously identified among the ANZ taxa (clades A, C); only one copy (clade C) was strongly supported [5].
Our PI phylogeny recovered at least four divergent copies of the first intron in L. sisymbrioides and its close relatives (Figure 1), suggesting ancient allopolyploidization events followed by divergence of PI alleles. Based on our phylogenetic network analyses, L. sisymbrioides has a history of three allopolyploidization events: hybridization (1) between an Australian species and another Australian or a New Zealand species, (2) with an Asian species, and (3) with an American species (Figure 2). Our study provides new evidence of an additional genome within the 'A' clade of PI [5] and shows an additional PI copy in taxa from this clade (that is, in L. kirkii, L. naufragorum, L. sisymbrioides and L. tenuicaule; A1-A2, Figure 1), which would be expected of four divergent genomes contributing to several allopolyploidization events in our study species. Based on our cytological and genome size estimates, L. sisymbrioides is an octoploid, which would require several whole genome duplications. Together with our PI data, this evidence suggests that L. sisymbrioides is an allo-octoploid composed of four different genomes.

Australian Lepidium appear to have undergone a rapid radiation during the Pliocene and Pleistocene, when the arid and cooler regions of the southeastern temperate biomes were expanding [71]. Previous studies suggested at least one dispersal event each from California and South Africa to Australia or New Zealand, most likely colonizing Australia first, with at least two subsequent dispersal events to New Zealand [12]. The majority of Lepidium species produce mucilaginous seeds that adhere to birds [4], which may have facilitated long-distance dispersal among the Americas, Australia, New Zealand and the Old World [72][73][74][75]. Our results suggest that a hybridization event occurred either within Australia or between an Australian and a New Zealand ancestor, followed by hybridization with an Asian colonist and an American colonist, resulting in the evolution of L. sisymbrioides (Figures 1, 2). Colonization by an Asian ancestor is not well supported by our data and conflicts with previous studies indicating colonization by an African ancestor [12]; this contradictory evidence could be due to the use of different nuclear DNA regions. In spite of this, our results confirm at least two dispersal events to Australia or New Zealand from the New World and Old World that resulted in allopolyploidization, although the exact New and Old World ancestry is uncertain. Additionally, we infer a hybridization between ANZ taxa not previously suggested.

Our 1C-value estimates for L. sisymbrioides and L. tenuicaule fall within the reported range for the family (0.15 to 2.43 pg [76][77][78]). Lepidium naufragorum lies outside the high end of the range, consistent with it being highly polyploid (18x [65]). Even though the 1C-value of L. sisymbrioides (0.635 pg) is almost fourfold that of Arabidopsis thaliana (0.16 pg [79]), it is comparable to the size of other model plants such as rice (0.5 pg [80]), making it a likely candidate for whole genome sequencing. Genomic resources for this species would facilitate the investigation of sex determination and of the putative chromosomal rearrangements that contributed to the evolution of dioecy after polyploidization. As new technologies and approaches are being developed [81,82], sequencing this octoploid will become more feasible in the near future.
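The unit conversions behind these 1C-value comparisons are simple enough to check directly. The sketch below uses the conversion from the Dolezel et al. equation cited in the Methods (1 pg of DNA ≈ 978 Mbp); the sample_2c_pg helper and its peak arguments are illustrative, since the raw fluorescence peak values are not reported in the text:

```python
MBP_PER_PG = 978  # Dolezel et al. conversion: 1 pg of DNA ~ 978 Mbp

def pg_to_mbp(pg):
    """Convert a DNA amount in picograms to megabase pairs."""
    return pg * MBP_PER_PG

def sample_2c_pg(sample_peak, standard_peak, standard_2c_pg):
    """2C DNA content of a sample from the ratio of its median
    fluorescence peak to that of the internal standard (CEN here).
    Peak values are whatever the cytometer reports (illustrative)."""
    return (sample_peak / standard_peak) * standard_2c_pg

# Consistency checks against values quoted in the text:
print(pg_to_mbp(1.25))   # CEN 1C-value: ~1223 Mbp, as stated
print(pg_to_mbp(0.635))  # L. sisymbrioides 1C-value: ~621 Mbp, as stated

# For an 8x taxon, the gametic (1C) nucleus carries four monoploid genome
# copies, so the monoploid (1Cx) size is roughly 621 / 4 ~ 155 Mbp.
```

Both printed values agree with the figures quoted above, confirming that the reported 1C-values and Mbp estimates are internally consistent.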
Conclusions
The developmental process leading to the evolution of dioecy in Lepidium sisymbrioides was placed in the broader context of the historical patterns conditioning the evolution of separate sexes in this unique dioecious relative of Arabidopsis. We have characterized the developmental stage and process of abortion in its unisexual flowers, paving the way for future studies aimed at unraveling the genetic basis underlying reproductive organ abortion. Placing L. sisymbrioides in a phylogenetic context, determining its ploidy, hybridization history, and genome size, and comparing its flower development to that of Arabidopsis thaliana will facilitate the investigation of the role of polyploidy and of potential candidate genes in the evolution of dioecy in Brassicaceae.
Hermansky-Pudlak syndrome-associated pneumothorax with rapid progression of respiratory failure: a case report

Background
Hermansky-Pudlak syndrome (HPS) is an extremely rare disease comprising pulmonary fibrosis (PF), oculocutaneous albinism, platelet dysfunction, and granulomatous colitis. Although patients with HPS-associated PF (HPS-PF) often receive treatment with anti-fibrotic agents, including pirfenidone, many HPS-PF cases are progressive. The development of pneumothorax is known to be rare in HPS-PF. Pneumothorax development is generally important for prognosis in patients with interstitial pneumonia. However, there are few reports regarding the development of pneumothorax in patients with HPS-PF.

Case presentation
A 50-year-old Japanese man with chestnut hair, white skin, and light brown squint eyes visited our hospital for interstitial pneumonia examination. Chest high-resolution computed tomography (HRCT) demonstrated diffuse bilateral reticular opacities along the bronchovascular bundles and traction bronchiectasis predominantly in the upper lung fields. He was definitively diagnosed with HPS because genetic analysis showed that he had a homozygous mutation, c.398 + 5G > A, in the HPS-1 gene. After diagnosis with HPS-PF, he initiated home oxygen therapy due to gradually progressive hypoxemia. Three months after the HPS-PF diagnosis, the patient suddenly developed severe chest pain and dyspnea and was admitted to our hospital as an emergency. He was diagnosed with pneumothorax based on chest radiological findings. He immediately received chest drainage; however, his pneumothorax did not improve. Therefore, he underwent video-assisted surgery by thoracic surgeons. The leak point was not detected, but multiple bullae were found, mainly in the upper lung lobes. Thus, the surgeons did not perform bullectomy and only covered the apical areas. Fifteen days after the surgery, the patient developed high fever and dyspnea, with a new diffuse reticular shadow found on HRCT. We first initiated the patient on broad-spectrum antibiotics; however, the symptoms and radiological findings worsened. Therefore, we started treatment with pirfenidone to inhibit PF progression. The patient re-developed pneumothorax with severe respiratory failure. Although he again underwent chest drainage, he died of progressive respiratory failure.

Conclusions
We herein report the case of a rare HPS patient who developed pneumothorax with progressive PF. Pneumothorax may cause rapidly progressive respiratory failure and may be associated with PF progression in HPS-PF.

Background
Hermansky-Pudlak syndrome (HPS) [1] is an exceedingly rare disease with autosomal recessive inheritance and is complicated by oculocutaneous albinism, platelet dysfunction induced by secondary inhibition of platelet aggregation, granulomatous colitis, and pulmonary fibrosis (PF) [2,3]. The incidence of HPS is one in 500,000 to 1 million. HPS-related PF (HPS-PF) is characterized pathologically by usual interstitial pneumonia and clinically by slow progression of pulmonary fibrosis, with a poor prognosis [2]. The age of onset in HPS is younger than that in idiopathic pulmonary fibrosis (IPF). Steroid treatment is reported to be ineffective [4,5]; anti-fibrotic agents, including nintedanib and pirfenidone, are usually used to treat patients with IPF, but these drugs have not been approved for HPS-PF and are not commonly used for its treatment.
Although a few patients with HPS-PF have been reported to respond to pirfenidone [6,7], its overall effect has been limited. The development of pneumothorax is rare in patients with HPS-PF. In general, the development of pneumothorax decreases patient quality of life and worsens respiratory symptoms in patients with interstitial pneumonia. We herein report the development of severe refractory pneumothorax in a patient with HPS-PF and review the previously published case reports on the development of pneumothorax in patients with HPS-PF.

Case presentation
A 50-year-old Japanese man, with natural white skin, chestnut hair, and light brown squint eyes, visited our hospital with a suspicion of interstitial pneumonia based on a chest X-ray (Fig. 1a). He had no smoking history. Further, he had no family history associated with interstitial pneumonia. He worked as an office worker until this admission.

Fig. 1: (a) Chest X-ray at the patient's first visit to our hospital. (b, c) Chest high-resolution computed tomography (HRCT) findings in the upper (b) and lower (c) lung fields at the first visit. (d) Chest X-ray 2 years after the first visit. (e, f) Chest HRCT findings in the upper (e) and lower (f) lung fields 2 years after the first visit.

High-resolution computed tomography (HRCT) of the chest at the first visit to our hospital showed diffuse bilateral reticular opacities along the bronchovascular bundles and traction bronchiectasis predominantly in the upper lung fields (Fig. 1b and c). These findings on chest X-ray and HRCT gradually progressed over approximately 2 years (Fig. 1d, e, and f). Laboratory data demonstrated a prolonged bleeding time without a decreased serum platelet count or abnormal coagulation, and elevated serum levels of Krebs von den Lungen-6 (KL-6). A pulmonary function test revealed restrictive ventilatory impairment and decreased diffusion capacity for carbon monoxide. He was also diagnosed with oculocutaneous albinism by skin biopsy and ophthalmologic findings. Genetic analysis showed that the patient had a homozygous mutation (c.398 + 5G > A) in the HPS-1 gene; thus, he was definitively diagnosed with HPS. Further, his interstitial change on HRCT was considered to be associated with HPS-PF. After the diagnosis with HPS-PF, hypoxemia gradually progressed; thus, he started home oxygen therapy (HOT).

Three months after the diagnosis with HPS, the patient was admitted to our hospital as an emergency due to the sudden development of chest pain and acutely worsening dyspnea. Chest X-ray showed a collapsed right lung (Fig. 2a), and the patient was diagnosed with secondary pneumothorax caused by the progression of HPS-PF. Chest HRCT findings showed a worsening of the reticular shadow with traction bronchiectasis, right lung collapse, and multiple bullae at the lung apex (Fig. 2b and c). We performed chest drainage; however, the pneumothorax did not improve. Therefore, the patient underwent video-assisted surgery by thoracic surgeons; the leak point was not detected, but multiple bullae were found, mainly in the upper lung lobes (Fig. 3a). Thus, the surgeons did not perform bullectomy but only covered the apical areas with absorbable polyglycolic acid felt (Fig. 3b). Platelet concentrate was transfused before the operation because of the risk of bleeding associated with the patient's prolonged bleeding time; the total bleeding was only 20 mL of blood, without additional platelet transfusion.
After the surgery, the right lung almost fully expanded, and no specific operation-related problems were detected within 10 days post-operation. Fifteen days post-operation, the patient developed a high fever and complained of dyspnea. HRCT findings demonstrated the appearance of new ground-glass opacities in the right lower lung lobe. We first initiated broad-spectrum antibiotics on suspicion of bacterial pneumonia; however, the symptoms and hypoxemia worsened. The reticular opacities also spread gradually on the chest radiograph, along with elevation of the serum KL-6 level. There were no significant findings indicating bacterial infection during this admission. We considered this to be progression of HPS-PF triggered by the development of pneumothorax or by lung surgery. We initiated pirfenidone to inhibit HPS-PF progression; however, it was not effective in suppressing HPS-PF progression. On postoperative day 33, pneumothorax recurred in the right lung. (Fig. 2: a chest X-ray of the patient upon the development of pneumothorax; b axial and c coronal views of chest computed tomography at the development of pneumothorax.) Although we performed chest drainage again, the pneumothorax did not improve due to continuous air leakage. Because of severe respiratory failure, we could not re-operate. Forty-two days after the pneumothorax diagnosis, the patient died of respiratory failure.

Discussion and conclusions Our patient was diagnosed with HPS based on HPS-1 genetic analysis. Many gene subtypes are reported to be associated with HPS. Patients with HPS-1, HPS-2, and HPS-4 often show PF complications [8]. In particular, patients with HPS-1 have a high incidence of PF [8] and more progressive fibrosis [9]. Our patient had an HPS-1 mutation; thus, the findings from our patient are consistent with those from previous cases. Our patient showed complications of oculocutaneous albinism, platelet dysfunction, and a history of granulomatous colitis; hence, he had almost all features associated with HPS. This patient developed pneumothorax; however, this manifestation is rare in HPS. During the observation period, PF and the respiratory condition worsened rather rapidly. In general, survival time in HPS has been reported to be approximately 30 years [2]. In contrast, the survival time was 3 months from the diagnosis of HPS in our patient. The prognosis in our case was shorter than that in previous reports because of the rapid progression of PF and the development of pneumothorax. Only two cases have been reported on the development of pneumothorax in patients with HPS-PF ([10,11], Table 1). One patient who developed pneumothorax underwent bullectomy and died of respiratory failure 2 years after surgery [10]. Another patient underwent chest drainage and pleurodesis under local anesthesia because of a worsening respiratory condition and died of respiratory failure one year after surgery for pneumothorax [11]. In contrast, our patient first developed pneumothorax three months after the diagnosis of HPS-PF and died of respiratory failure 1 month after surgery for pneumothorax. Our patient underwent patch closure of the visceral pleural fistula. The survival time from surgery for pneumothorax was short compared with that in previously reported cases. In all these cases, the patients died of respiratory failure in their fifties.
However, lung biopsy could not be performed because of the rapid progression of chronic respiratory failure before surgery; therefore, the prognosis was poorer in our case compared with previous cases. Our patient had already started receiving HOT upon HPS-PF diagnosis. In contrast, the patients in the two published cases did not receive HOT upon the development of pneumothorax. (Fig. 3: a video-assisted thoracic surgery showed bullae at the lung apex in the right upper lobe of the patient; b the right lung apex was covered using absorbable polyglycolic acid felt.) The difference in respiratory conditions and severity of interstitial pneumonia between our case and those previously reported may be associated with the difference in the time from the development of pneumothorax to death. For the treatment of HPS-PF, our patient received pirfenidone due to HPS-PF progression. However, lung fibrosis worsened sub-acutely while the patient was receiving the anti-fibrotic agent. We judged that pirfenidone was not effective against PF in our patient. We also considered treatment with other anti-fibrotic agents such as nintedanib. However, the patient had a prolonged bleeding time and pneumothorax; thus, we did not choose this agent due to the possibility of worsening these conditions, as well as other adverse effects. Corticosteroids are used for treating various forms of interstitial pneumonia; however, corticosteroids are not administered for IPF [12]. HRCT scanning demonstrated several findings in our patient, including ground-glass opacity and consolidation, honeycombing, traction bronchiectasis, and reticular shadow. We considered these HRCT findings to be close to a usual interstitial pneumonia pattern. Moreover, our patient developed pneumothorax and underwent lung surgery; thus, we considered the possibility that steroid-induced delayed wound healing might have a negative effect. Therefore, we did not select corticosteroids; rather, we chose the anti-fibrotic agent pirfenidone. Lung transplantation is known as the only life-extending therapy for patients with HPS-PF [13,14]. In our case, the patient had already been registered for lung transplantation; however, he developed pneumothorax and died before his turn for transplantation. In conclusion, we report here a rare case of HPS with the development of pneumothorax and progressive PF. The development of pneumothorax may cause rapid progression of respiratory failure and PF. Clinicians should be aware that pneumothorax development indicates a poor prognosis in patients with HPS-PF.
Tracing "Fearbola": Psychological Predictors of Anxious Responding to the Threat of Ebola. Serious illnesses such as Ebola are often highly publicized in the mass media and can be associated with varying levels of anxiety and compensatory safety behavior (e.g., avoidance of air travel). The present study investigated psychological processes associated with Ebola-related anxiety and safety behaviors during the outbreak in late 2014. Between October 30 and December 3, 2014, which encompassed the peak of concerns and of the media's attention to this particular outbreak, 107 university students completed a battery of measures assessing fear of Ebola, performance of safety behaviors, factual knowledge of the virus, and psychological variables hypothesized to predict Ebola-related fear. We found that while our sample was generally not very fearful of contracting Ebola, fear of this disease was correlated with general distress, contamination cognitions, disgust sensitivity, body vigilance, and anxiety sensitivity-related physical concerns. Regression analyses further indicated that anxiety sensitivity related to physical concerns and the tendency to overestimate the severity of contamination were unique predictors of both Ebola fear and associated safety behaviors. Implications for how concerns over serious illness outbreaks can be conceptualized and clinically managed are discussed.

Introduction Health anxiety refers to inappropriate or excessive preoccupation and concern about one's health status relative to one's actual state of health (Abramowitz and Braddock 2010). People with health anxiety also engage in a number of behaviors that function to reduce their distress, such as frequently visiting doctors, excessively researching diseases and their symptoms on the internet, or seeking reassurance from loved ones. Although such health-related "safety behaviors" may reduce associated distress in the short term (Abramowitz and Moore 2007), research suggests that these behaviors maintain anxiety in the long term (see Helbig-Lang and Petermann 2010). Although a diagnostic entity in itself (i.e., Illness Anxiety Disorder [IAD]), health anxiety may be present in a number of psychological disorders, including obsessive-compulsive disorder (OCD), somatic symptom disorders, and other anxiety disorders (APA 2013). Clinically severe health anxiety and associated safety behaviors may result in significant distress and functional impairment (APA 2013). Researchers and clinicians have long observed increases in health anxiety referrals during times of mass media coverage of serious diseases, such as during the Ebola outbreak in late 2014. First discovered in 1976 in the former Zaire (now the Democratic Republic of the Congo), the Ebola virus is a rare yet deadly animal-borne disease transmitted through direct contact with contaminated objects (e.g., needles) or bodily fluids (e.g., vomit, feces). Although experimental vaccines and treatments are in development, there is no FDA-approved medicine available for Ebola to date. The United States (U.S.) Centers for Disease Control and Prevention (CDC) has chronicled 35 Ebola virus outbreaks between 1976 and 2014, varying in severity (CDC 2014). The 2014 multinational outbreak in West Africa (i.e., Guinea, Liberia, Sierra Leone) has been deemed the largest outbreak, with at least 21,000 affected human cases accounting for nearly 9000 deaths worldwide (CDC 2014).
Between September 30, 2014, and October 23, 2014, the CDC documented two imported cases, including one death, and two locally acquired cases in the U.S. (Dallas, TX, and New York, NY). Accordingly, the CDC activated its Emergency Operations Center, deployed public health experts to the affected regions, and issued an advisory against nonessential travel to West Africa (i.e., Level 3 travel notice). The 2014 global outbreak and the single confirmed Ebola-related death in the U.S. prompted media coverage that may have both improved and compromised the U.S. public's knowledge and perception of Ebola. Specifically, constant communication regarding the disease may have promoted desirable health behaviors (e.g., hand-washing) and simultaneously (or alternatively) incited panic akin to that observed during the SARS, avian flu, and swine flu epidemics over the last decade (Sandman 2009; Van den Bulck and Custers 2009). For example, during the height of the Ebola concern, certain U.S. shops, school districts, and state governments enacted substantial "precautionary measures" against a seemingly remote danger (Fox 2014). In Maine, one healthcare worker contested forced quarantine in her own home for 21 days after returning from Sierra Leone despite showing no symptoms and twice testing negative for Ebola. As of October 14, 2014, approximately two-thirds of U.S. residents surveyed reported fears about an Ebola outbreak in the U.S. (Dennis and Craighill 2014). There is empirical evidence that publicizing disease outbreaks can lead to mass hysteria and health anxiety even among the medically healthy (Taylor and Asmundson 2004). Anecdotally, in our clinics we assessed multiple patients whose presentation of OCD and IAD included concerns about contracting Ebola. Some criticized the media for exaggerating the risk of Ebola spreading to the U.S. and obfuscating the CDC's message that "Ebola poses no substantial risk to the U.S. general population" (CDC Health Alert Network 2014). Other groups in the popular press (e.g., Robbins 2014) even coined the term "Fearbola" to refer to the U.S. public's exaggerated collective response to the very low threat of a domestic Ebola outbreak. To this end, the American Psychological Association (APA 2014) disseminated tips for "managing your fear about Ebola" (i.e., keep things in perspective, get the facts); yet it is unclear whether, or how, such counsel was effective. Understanding the psychological factors that predict anxiety in response to the threat of a disease outbreak is vital, as it may inform treatment and prevention strategies for health-related anxiety (Bish and Michie 2010). Elevated levels of health anxiety may also be accompanied by safety behaviors performed to minimize the possibility or severity of illness (e.g., avoidance, excessive washing, or overutilization of medical resources), which may compound distress and functional impairment (e.g., Olatunji et al. 2011). Accordingly, we designed the present study during the height of U.S. concerns about Ebola to better understand the psychological factors associated with Fearbola and engagement in Ebola-related safety behaviors. Informed by the limited body of recent research on anxiety among students in response to pandemic illnesses such as SARS (e.g., Wong et al. 2007), avian flu (e.g., Lau et al. 2008), and H1N1 (e.g., Wheaton et al. 2012), we considered a variety of constructs that might predict the fear of Ebola, as we describe next.
One possible predictor is general distress (i.e., anxiety and depressive symptoms). Not only can general distress be associated with poor physical health (Scott et al. 2007; Niles et al. 2014), but both anxiety and depression involve negative interpretive biases that are often involved in anxiety related to health and illness (e.g., catastrophizing; Reif et al. 1998). Second, because Ebola is transmitted through bodily fluids, it is possible that those holding dysfunctional beliefs about contamination are more vulnerable to excessive fear of Ebola. In other words, Fearbola may reflect an overestimation of the likelihood and severity of contamination during a global outbreak. Third, disgust sensitivity, or one's propensity to experience disgust across multiple domains, has also been identified as a key feature of contamination fear (Cisler et al. 2010; Olatunji and Sawchuk 2005). Thus, we also considered heightened disgust responding as a potentially important process that may be related not only to contamination aversion broadly, but also to Ebola fear specifically. Body vigilance, the tendency to carefully monitor body sensations (Schmidt et al. 1997), is also a candidate predictor of Ebola fear. That is, frequent and intense body scanning may increase opportunities to notice otherwise benign changes in the body (as well as its byproducts) and misinterpret them catastrophically (Olatunji et al. 2007a). Indeed, Olatunji et al. (2007a) found that body vigilance was strongly correlated with health anxiety symptoms in both clinical and nonclinical adult samples. Relatedly, anxiety sensitivity, the tendency to misconstrue benign anxious arousal sensations as dangerous (Taylor et al. 2007), may also predict Ebola fear and safety behavior performance. Specifically, the degree to which someone (mis)interprets unexplained body sensations (e.g., nausea) as catastrophic may be associated with his or her proclivity to register anxious arousal as a symptom of Ebola, which might generate anxiety and urges to engage in a variety of safety behaviors. In light of the APA's suggestion that "getting the facts" would recalibrate U.S. residents' anxiety over Ebola (APA 2014), we also considered that factual knowledge of the disease (e.g., means of Ebola transmission) as well as the particulars of the 2014 outbreak (e.g., countries affected) might predict Ebola-related fear and engagement in excessive safety behaviors. Because cognitive models of pathological anxiety emphasize the therapeutic effects of corrective information (e.g., Abramowitz et al. 2011; Clark 1986), one might predict that those possessing greater understanding about Ebola would report less anxious responding to the possibility of a domestic outbreak. As summarized above, the extant literature provides clues to the factors that might predict Ebola-related fear and safety behaviors. Accordingly, we hypothesized that less factual knowledge about the virus, but greater levels of general distress, contamination cognitions, disgust sensitivity, body vigilance, and anxiety sensitivity, would predict greater Ebola-related fear and engagement in Ebola-related safety behaviors.

Method Participants One hundred and thirty-seven undergraduate psychology students at the University of North Carolina at Chapel Hill (UNC-CH) participated in this study for course credit. The study was open to all Introductory Psychology students and was advertised through the Psychology Department-monitored online participant pool.
Following data screening (described further below), 30 participants were excluded, bringing the final sample size to 107. The sample was mostly male (n = 60; 56.1%) with a mean age of 18.93 years (SD = 1.08, range 18-22). The majority of participants identified as white (n = 85; 79.4%), with 8.4% identifying as African American (n = 9), 5.6% identifying as Asian (n = 6), and 5.6% identifying with another racial/ethnic group (n = 6).

Procedure Data were collected from October 30th through December 3rd, 2014. Undergraduate psychology students who consented online to participate in this study were directed to a survey link hosted by Qualtrics, a secure online survey development tool. Participants completed the measures described below in randomized order, followed by a demographics questionnaire. Three distractor items (e.g., "please answer Always True for this item") were also included among the measures to increase the probability that only valid responses from attentive participants would be included in analyses (Meade and Craig 2012). This study was approved by the university's Institutional Review Board and informed consent was obtained from all individual participants included in the study.

Ebola Fear Inventory (EFI) The EFI is a nine-item measure designed for the present study to assess fear associated with the Ebola virus (psychometric and factor analytic properties are presented in the "Preliminary Analyses" section below; items listed in Table 2). Items are rated from 1 (not at all) to 5 (very much) and were inspired by those used by Wheaton et al. (2012) to assess H1N1 (swine flu) fears. The EFI demonstrated good internal consistency (α = .86) in the current sample.

Ebola Safety Behavior Checklist (ESBC) The ESBC is a nine-item checklist assessing respondents' utilization of safety behaviors designed to prevent contracting Ebola (e.g., washing hands, checking the internet for information about Ebola, avoiding people). This instrument was also inspired by a similar measure designed by Wheaton et al. (2012). Participants rated the extent to which they engaged in activities due to concerns about Ebola on a 0 (none) to 10 (extreme amount) scale. The ESBC demonstrated good internal consistency (α = .84) in the current sample.

Depression Anxiety Stress Scales-21 (DASS-21; Antony et al. 1998) The DASS-21 is a short-form version of the 42-item DASS (Lovibond and Lovibond 1995) that assesses subjective distress over the past week along three subscales: depression, anxiety, and stress. Participants rate how each of the 21 statements (e.g., "I found it hard to wind down") apply to them on a 0 (rarely) to 4 (very much, or most of the time) scale. The DASS-21 has demonstrated good reliability and construct validity in both clinical and non-clinical samples (Henry and Crawford 2005). The DASS-21 showed excellent internal consistency (α = .93) in the current sample.

Contamination Cognitions Scale (CCS; Deacon and Maack 2008) The CCS is a measure of respondents' tendency to overestimate the likelihood and severity of contamination from a variety of commonplace objects (e.g., stairway railings). Participants separately rate the likelihood and severity of contamination for each item on a 0 (not at all) to 100 (extremely) scale. Because Ebola is an objectively serious illness, yet its prevalence was extremely low in the United States, we treated the likelihood (CCS-L) and severity (CCS-S) scales as independent constructs.
Separate CCS-L and CCS-S subscale scores were formed by computing the average response for items falling on the CCS-L and CCS-S subscales, respectively. The internal consistency was excellent for the CCS-L (α = .96) and CCS-S (α = .97) in the current sample.

Disgust Scale-Revised (DS-R; Olatunji et al. 2007b) The DS-R, revised from the original DS (Haidt et al. 1994), is a 25-item measure of respondents' propensity to experience disgust across multiple domains. Participants rate the degree to which they might find a number of scenarios (e.g., "you see maggots on a piece of meat in an outdoor garbage pail") disgusting on a scale of 0 (strongly disagree) to 4 (strongly agree). The DS-R has demonstrated adequate internal consistency and convergent validity in previous work (Olatunji et al. 2007b) and showed good internal consistency (α = .81) in the current sample.

Body Vigilance Scale (BVS; Schmidt et al. 1997) The BVS is a four-item measure of one's tendency to attend to anxiety-related body sensations. The first three items assess attentional focus to, sensitivity to changes in, and amount of time devoted to monitoring body sensations on a 0 (not at all) to 10 (extremely) scale. The fourth item requires the respondent to separately rate the extent to which he or she pays attention to 15 body sensations (e.g., heart rate) on a 0 (none) to 10 (extreme) scale, which are averaged to yield a single item score. The BVS has demonstrated good internal consistency and test-retest reliability in previous research (Olatunji et al. 2007b; Schmidt et al. 1997). The BVS showed excellent internal consistency (α = .97) in the current sample.

Anxiety Sensitivity Index-3, Physical Concerns Subscale (ASI-3; Taylor et al. 2007) The ASI-3 (derived from the original ASI; Reiss et al. 2008) is an 18-item measure of beliefs regarding the dangerousness of anxiety along physical (e.g., "it scares me when my heart beats rapidly"), cognitive (e.g., "it scares me when I am unable to keep my mind on a task"), and social (e.g., "it scares me when I blush in front of other people") domains. Participants rate their agreement with these statements on a 0 (very little) to 4 (very much) scale. The ASI-3 has demonstrated a good three-factor structure with good internal consistency, convergent validity, discriminant validity, and criterion-related validity in previous research (Taylor et al. 2007). Because the social and cognitive subscales are not conceptually relevant to the Ebola concerns addressed in the current study, only the physical concerns subscale was used in the analyses below. The ASI-3 physical concerns subscale showed good internal consistency (α = .82) in the current sample.

Ebola Facts Quiz (EFQ; USA Today) The EFQ is an eight-item multiple-choice measure of knowledge about the Ebola virus and the 2014 global outbreak. Participant responses are scored on a 0 (incorrect) to 1 (correct) coding scheme (possible scores range 0-8, with higher scores indicating greater knowledge about the Ebola virus and the 2014 outbreak). The quiz was originally posted online on October 9, 2014, by USA Today (http://www.usatoday.com/story/news/nation-now/2014/10/09/ebola-virus-facts-quiz/16956413/).

Data Analytic Strategy An item analysis (i.e., corrected item-total correlations, internal consistency) of the EFI was first conducted to evaluate the measure's psychometric properties and suitability for further analyses.
We then correlated the EFI and ESBC with all other study variables to explore the relationships of both Ebola fear and Ebola safety behavior use with relevant psychological constructs. Finally, to determine which psychological variables were significant and meaningful predictors of Ebola fear and Ebola safety behavior use, we tested a simultaneous linear regression model separately for each outcome measure, including the assessed psychological constructs as statistical predictors.

Data Screening Of the 137 participants who completed the survey, 27 did not pass all three distractor items and were consequently excluded from further data analyses. Data were further screened to assess concordance with statistical assumptions. One case fell outside the possible range on a CCS-S item (participant reported 790; possible item range 0-100) and so was excluded from analyses. Distributions of scores on all of the study measures were free of significant skew (all values < 2) and kurtosis (all values < 4). No univariate outliers were detected, but two multivariate outliers were noted (Mahalanobis distances fell beyond the critical χ²(df = 8) value of 26.125). (One participant scored more than 3.29 standard deviations above the sample mean on the DASS; visual inspection of the data showed that this score was an extension of the sample distribution, so this observation was retained.) Multivariate outlier status was driven by unusual combinations of scores on the DASS, BVS, CCS-L, and CCS-S for both participants. These two multivariate cases were excluded from analyses due to the possible bias of regression point estimates and the sufficiently large sample. Score distributions of the remaining 107 participants were again tested after deleting the problematic cases; no significant skew, kurtosis, univariate outlier indices, or multivariate outlier indices were detected (see Table 1).
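To make these procedures concrete, the following is a minimal sketch in Python (not the authors' code; the function and variable names are our own illustrations) of the screening and reliability steps described above: per-measure skew and kurtosis checks, Mahalanobis-distance screening against the critical χ²(df = 8) value of 26.125 at p = .001, and the item-analysis statistics (Cronbach's α, corrected item-total correlations).

import numpy as np
import pandas as pd
from scipy import stats

def distribution_checks(measures: pd.DataFrame) -> pd.DataFrame:
    # Per-column skew and kurtosis, screened here against |skew| < 2 and kurtosis < 4.
    return pd.DataFrame({"skew": measures.apply(stats.skew),
                         "kurtosis": measures.apply(stats.kurtosis)})

def mahalanobis_outliers(measures: pd.DataFrame, p: float = 0.001) -> pd.Series:
    # Flag cases whose squared Mahalanobis distance exceeds the chi-square cutoff
    # (26.125 for df = 8 at p = .001, as reported above).
    X = measures.to_numpy(dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return pd.Series(d2 > stats.chi2.ppf(1 - p, df=X.shape[1]), index=measures.index)

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    # Correlate each item with the total score computed without that item.
    return pd.Series({c: items[c].corr(items.drop(columns=c).sum(axis=1))
                      for c in items.columns})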
Results

Descriptive Statistics Table 1 suggests that although participants were not highly fearful of the Ebola virus on average, the range in scores on the EFI indicated that some participants were at least moderately fearful. Our sample also varied in the degree to which they endorsed performing a number of safety behaviors due to the global Ebola outbreak, with the range in scores suggesting that, overall, our participants were performing a moderate amount of Ebola-related safety behaviors during the peak of U.S. Ebola concerns. Scores on our other measures fell within the typical range for nonclinical samples. Finally, Table 1 shows that participants had a variable degree of factual knowledge about the Ebola virus and the 2014 outbreak.

Preliminary Analyses Item analyses were conducted according to guidelines set forth by DeVellis (1991) and Nunnally and Bernstein (1994). Further, total scale reliability indices (Cronbach's α) were substantially improved following deletion of both items. These empirical findings, paired with the fact that a separate measure was administered to assess actual Ebola knowledge (i.e., the EFQ), justified exclusion of these three items from the EFI. The final 9-item EFI showed good reliability (α = .87). The distribution of scores on the final EFI was also free of significant skew (1.36) and kurtosis (1.39) (Table 2).

Zero-Order Correlations Two-tailed zero-order correlations were conducted to examine the relationship between Ebola virus concerns, Ebola virus safety behaviors, and other study variables. First, we found that the date of study completion was not significantly correlated with EFI scores, r(107) = -.12, p = .229. Next, as seen in Table 3, scores on the EFI were significantly associated with scores on the DASS, CCS-L, CCS-S, DS-R, BVS, and ASI-3 Physical, but not the EFQ. The ESBC was significantly related to all other variables except the EFQ, suggesting that there was not a statistically significant relationship between knowledge of the Ebola virus and either Ebola fear or Ebola safety behaviors. In fact, no significant relationship between Ebola knowledge and any study measure was detected.

Regression Analyses Predicting Ebola Fear A simultaneous linear regression was conducted to explore which psychological variables independently predicted Ebola fear (see Table 4). Indices of multicollinearity were acceptable (all tolerance values ≥ .57 and all VIFs ≤ 1.75), suggesting a lack of redundancy in model predictors. The overall regression model was significant and accounted for approximately 27% of the variance in EFI scores, F(7, 98) = 5.09, p < .001. Within the full model, only the CCS-S and ASI-3 Physical Concerns subscale scores uniquely and significantly (ps ≤ .05) predicted fear of the Ebola virus. Specifically, concerns regarding the severity of contamination uniquely accounted for 7.3% of the variability in EFI scores and anxiety sensitivity accounted for 3% of the variability in EFI scores. Neither the DASS, CCS-L, DS-R, BVS, nor EFQ were uniquely significant predictors of Ebola fear in the current sample (all ps ≥ .20).

Regression Analyses Predicting Ebola Safety Behaviors A simultaneous linear regression was conducted to explore which psychological variables independently predicted Ebola-related safety behaviors (see Table 5). Indices of multicollinearity were also acceptable (all tolerance values ≥ .57 and all VIFs ≤ 1.75). The overall regression model was also significant. Specifically, concerns regarding the severity of contamination uniquely accounted for 8.4% of the variability in ESBC scores, and disgust sensitivity uniquely accounted for 2.6% of ESBC score variance. Neither the DASS, CCS-L, BVS, ASI-3 Physical Concerns subscale, nor EFQ were significant individual predictors of Ebola safety behaviors in the current sample (all p values ≥ .060).
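For illustration, the simultaneous regressions and collinearity diagnostics reported above (with tolerance computed as 1/VIF) could be run as sketched below; this is not the authors' code, and the predictor column names are hypothetical stand-ins for the study's measures.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

PREDICTORS = ["DASS", "CCS_L", "CCS_S", "DSR", "BVS", "ASI3_PHYS", "EFQ"]  # assumed names

def simultaneous_regression(data: pd.DataFrame, outcome: str):
    # Enter all predictors at once (simultaneous entry) and fit by ordinary least squares.
    X = sm.add_constant(data[PREDICTORS])
    model = sm.OLS(data[outcome], X).fit()
    # VIF per predictor (skipping the constant at index 0); tolerance = 1 / VIF.
    vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
                    index=PREDICTORS, name="VIF")
    return model, vif, 1 / vif

# Example usage (assuming a DataFrame df with the columns above and an "EFI" outcome):
# model, vif, tolerance = simultaneous_regression(df, "EFI")
# print(model.summary())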
Discussion The present study was designed to identify psychological predictors of anxious responding to the 2014 Ebola outbreak. We hypothesized that in our unselected U.S. university student sample, greater anxiety sensitivity, body vigilance, disgust sensitivity, contamination concerns, and general psychological distress would predict greater Ebola fear and engagement in safety behaviors (e.g., avoiding airports) with the intention of preventing infection. We also hypothesized that having more factual knowledge about the Ebola virus and the details of the 2014 outbreak would predict less Ebola-related fear and safety behavior use. To address the study's aim, we developed brief measures of Ebola-related fear and safety behavior use, which both demonstrated acceptable psychometric properties. Consistent with our predictions, Ebola fear and safety behaviors were correlated with general distress, contamination cognitions, disgust sensitivity, body vigilance, and anxiety sensitivity related to physical concerns. Contrary to our predictions, fear of the disease was not associated with knowledge about the Ebola virus and the 2014 outbreak. When considered simultaneously in our regression model, the tendency to overestimate the severity of contamination emerged as the only significant predictor of both Ebola fear and associated safety behaviors. Physical anxiety sensitivity concerns significantly predicted Ebola fear but only marginally predicted Ebola safety behaviors, and disgust sensitivity only significantly predicted safety behavior use. Body vigilance only marginally significantly predicted engagement in Ebola-related safety behaviors. Overall, our findings provided partial support for our hypotheses. To date, little research has been conducted on anxious responding to the threat of a serious illness outbreak, such as that associated with the 2014 Ebola outbreak in West Africa. Given our access to a large unselected population of young adults exposed to media coverage of the Ebola outbreak, we were well positioned to identify the predictors of Ebola-related fear and safety behaviors. Clarifying the factors that might contribute to such anxiety is valuable in understanding how the public responds to large-scale illness threats more generally and identifying individuals who might be vulnerable to maladaptive responses (i.e., health anxiety). It may also be of service in developing prevention programs and clinical intervention strategies should the threat of another global panic surface. That is, our study's findings suggest that contamination concerns and physical concerns related to anxiety sensitivity may be especially important in the experience of Ebola-related anxiety and safety behaviors, regardless of accurate factual understanding of the disease. Anxiety sensitivity along physical domains significantly predicted Ebola fear, but only marginally predicted Ebola safety behavior use. Although our findings are cross-sectional, one way anxiety sensitivity might contribute to fearful responding to the Ebola virus is through the misperception of benign (and universal) body sensations as dangerous. Such a perception might especially lead to fear considering that many body sensations associated with anxiety mirror the symptoms of Ebola (e.g., nausea). It is also not surprising that concerns regarding the severity of contamination significantly predicted Ebola fear and safety behavior use. Ebola is indeed a severe illness with extremely unpleasant symptoms (e.g., fever and hemorrhaging), but it is possible that the frequent, widespread media coverage in the U.S. led residents to overestimate the severity of the disease. Similar possibilities were discussed in a study of undergraduates' fearful responding to the heavily publicized H1N1 pandemic in 2009 (Wheaton et al. 2012). Findings from our study cannot, however, provide causal evidence for the role of increased media coverage on increased Ebola fear and safety behavior use. It is interesting that factual knowledge about Ebola was unrelated to respondents' degree of Ebola fear and engagement in related safety behaviors. Our findings, however, can be seen as consistent with previous work suggesting that accurate information (e.g., illness incidence statistics) is unrelated to anxiety symptoms (e.g., Moritz and Pohl 2009). Our EFQ items were derived from a quiz posted to a popular online media source (www.usatoday.com). Although the psychometric properties of this measure are unknown, the distribution of scores in our sample was free of skew or kurtosis and approximated normality.
Ebola is a serious disease, and the 2014 outbreak was appropriately declared a "public health emergency of international concern" by the World Health Organization (WHO Ebola Response Team 2014). Yet as National Institute of Allergy and Infectious Diseases director Anthony Fauci noted, "what we're seeing is a catastrophic health crisis in West Africa, and an epidemic of fear here" (C-SPAN2 2014). In this sense, the U.S. Fearbola outbreak was arguably a greater threat to the wellbeing of U.S. residents than the actual Ebola virus itself. Our findings that knowledge did not predict Ebola-related fear or safety behaviors suggest that increasing awareness and understanding about a remote epidemic may not, in fact, alleviate exaggerated fear of its local outbreak. This possibility carries clinical relevance, for cognitive models posit that dysfunctional beliefs (e.g., threat overestimates) are meaningful factors in the etiology of pathological anxiety that should be targeted during treatment (e.g., Fergus 2014; Obsessive-Compulsive Cognitions Working Group 1997, 2003, 2005; Salkovskis and Warwick 2001; Taylor and Asmundson 2004). An alternative explanation for our finding is that some participants coped with their fear of Ebola by seeking out knowledge and information about the disease (akin to reassurance-seeking in health anxiety), thus washing out the hypothesized effect. Therefore, findings from our study may inform cognitive interventions, as providing accurate information regarding the nature and severity of focal disease outbreaks may be insufficient to adequately challenge dysfunctional health- and illness-related beliefs. Because our study included a nonclinical sample, however, future research utilizing healthy and health-anxious individuals would help determine whether providing accurate information about the risk of disease outbreaks in the U.S. is sufficient to mitigate such illness fears in clinical practice. Our study measures were largely inspired by those designed by Wheaton et al. (2012) in their investigation of anxious responding to the H1N1 influenza outbreak of 2009-2010. These authors reported that contamination cognitions (both likelihood and severity overestimates), disgust sensitivity, and health anxiety significantly predicted H1N1 fear, but that physical concern-related anxiety sensitivity, body vigilance, and general distress did not. Therefore, our findings are somewhat consistent with those of Wheaton and colleagues. One possible explanation for the discrepant findings relates to disgust-related elements (e.g., sympathetic magic) differentially associated with the H1N1 and Ebola viruses. The peak of the 2014 Fearbola panic saw only four total U.S. human cases (only two of which were contracted locally), yet 2009-2010 witnessed approximately 60.8 million H1N1 U.S. cases (Shrestha et al. 2011). Therefore, Ebola may have been perceived as a more distant and remote threat than the H1N1 virus was in 2009-2010. Another consideration is that H1N1 is a more communicable virus than Ebola, which would make disgust a more powerful predictor of anxious responding to H1N1 than to Ebola. In contrast, mechanisms such as the laws of contagion, similarity, or sympathetic magic (see Cisler et al. 2009) may have been more relevant for U.S. residents reporting greater Ebola fear. Unfortunately, this study did not assess participants' perceptions of contagion related to the Ebola outbreak specifically, so such explanations are purely speculative.
Future research investigating the degree to which certain outbreaks are specifically perceived as disgusting, dangerous, and controllable is warranted. This study's findings should be interpreted with some caution in light of the following limitations. First, participants were undergraduate students; as such, this sample was presumably healthy on average, both physically and psychologically. Therefore, our findings may not apply to individuals with clinical levels of health anxiety (e.g., hypochondriasis, OCD) or medical vulnerability (e.g., autoimmune diseases). However, this study's findings may nevertheless be useful in informing clinical interventions for disease outbreak-related fears among otherwise healthy individuals. A second limitation is that participants in our study were recruited from a single southeastern university. Individuals living in less populated areas (e.g., Wyoming) or in states with confirmed Ebola cases (e.g., Texas) may have experienced different levels of concern. Similar studies utilizing more geographically representative samples would be desirable. A third limitation is that all data were obtained via self-report, which might inflate associations among variables. Future studies utilizing multi-method assessment are warranted. Finally, the cross-sectional design of this study precludes causal or directional conclusions. It is possible that individuals with greater contamination concerns are more prone to respond fearfully to Ebola, that those with higher Ebola fear were more likely to develop contamination concerns, or that one or more other factors (e.g., observational learning, informational transmission) contributed to both high contamination concerns and Ebola fear. Future longitudinal studies are necessary to determine which constructs prospectively predict the onset of health anxiety in response to the threat of a serious disease. Similarly, the possibility that safety behaviors generate or exacerbate Ebola concerns is especially worthy of consideration in light of research showing that deliberately engaging in health-related safety behaviors (e.g., avoiding public contaminants) causes individuals to become more concerned with the risks of contamination (Deacon and Maack 2008; Olatunji et al. 2011). Although these limitations somewhat qualify the generalizability of our findings, the present study offers data relevant to understanding the psychological predictors of anxious responding to publicized epidemics. Conflict of Interest Shannon M. Blakey, Lillian Reuman, Ryan J. Jacoby, and Jonathan S. Abramowitz declare that they have no conflict of interest. Informed Consent All procedures performed in this study, which involved human participants, were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Institutional Review Board (IRB) approval was obtained and informed consent was obtained from all individual participants included in the study. Animal Rights This article does not contain any studies with animals performed by any of the authors.
Analysing the environmental websites of the world’s greatest polluters: a multimodal ecolinguistic approach

Abstract This paper develops a visual analysis of the environmental webpages of 20 global companies, considered to be the world’s greatest polluters in terms of their carbon emissions into the atmosphere. Our aim is to determine how these companies build a public reputation as environmentally concerned agents in relation to climate change. The analysis is based on the theoretical and methodological propositions put forward by Critical Discourse Analysis, ecolinguistics and multimodal analysis. More specifically, we take into account Kress and van Leeuwen’s grammar of visual design, which enables us to describe and classify the images on webpages and to determine how these images are used to enforce certain narratives and ideologies. The paper also develops a comparative study of promotional strategies and the level of development and communicative efficiency of the sustainability webpages of Western and non-Western companies, on the one hand, and of global companies and environmental NGOs on the other.

Introduction Over the last three decades, public opinion has become increasingly concerned with the preservation of the environment and the need to fight against climate change. A global study carried out by Ipsos in 28 countries between February and March 2019 found that climate change is the most important environmental issue for more than one-third (37%) of citizens around the world (Ipsos, 2019). As might be expected, corporate reputation is affected by this public concern. The 2019 UK Authenticity Gap Report found that two-thirds of consumers want companies to make a greater contribution to the welfare of society, with 53% mentioning climate change as one of the main issues that companies should address (FleishmanHillard, 2019). In fact, more than half of those who were surveyed declared that their attitude towards a brand as consumers was influenced more by the information that they received on how the management behaved, and on how the company engaged with social issues, than by the products and services themselves. These conclusions suggest that it is vital for companies to build a public discourse on citizens' social concerns, including environmental matters, if they want to preserve their reputational value as an essential part of the brand assets. On the other hand, we should not forget that large corporations are frequently considered to be the 'real culprits' of the environmental crisis (Byskov, 2019; Kvaric, 2019). This makes it even more necessary for them to build a persuasive and well-articulated corporate social responsibility (CSR) discourse on climate change and the environment, which can be disseminated to all customers and stakeholders. The creation of sustainability webpages is one of the most readily available means for this public relations purpose, as they constitute cost-effective systems to reach large audiences. Other instruments, such as corporate reports, may also be used for public relations, but they tend to focus on more specialised audiences. This paper develops a visual analysis of the environmental webpages of 20 global companies, considered to be the world's greatest polluters in terms of their carbon emissions into the atmosphere, according to the research published by the Climate Accountability Institute in the US (2019).
Our aim is to examine how these companies deal with climate change issues on their webpages, in order to disassociate themselves from their image as environmentally harmful businesses. More specifically, we intend to establish what types of visual narratives are used to convince stakeholders about the companies' engagement with environmentally responsible policies and how these corporate narratives differ from other social accounts. In essence, we try to answer the following research questions: What degree of attention do these global companies pay to the discussion of climate change on their webpages? Can we find any differences in the level of development and communicative efficiency of sustainability webpages from Western companies and from those companies whose major basis of operation lies outside the West? What visual attributes are used on these corporate websites to communicate the company's stance? How do the images selected contribute to reinforce underlying discourses and narratives on climate change? How do the visual narratives created by the biggest polluters differ from the visual narratives endorsed by non-profit organisations? What are the practical implications of these differences for global companies? Before we address these questions, however, we should clarify how we understand the notions of 'discourse' and 'narrative', as these concepts can be interpreted in different ways. Basically, in this study we consider discourses according to an ecolinguistic framework, as 'standardized ways [in which] particular groups in society use language, images and other forms of representation' (Stibbe, 2015, p. 22). The construction of a corporate discourse represents an attempt to communicate a certain ideological position (taken as a specific worldview) and to create a structured system of relationships both within and outside the domain of the company (Jaworska, 2020). As part of corporate environmental discourses, climate change narratives are to be understood, following Fløttum (2010), as 'verbal constructions or stories that present climate change as a certain type of problem, with implicit or explicit suggestions for action' and 'with a more or less clear evaluation component' (pp. 13-14). In corporate contexts, the creation of specific climate change narratives works to legitimise the actions undertaken by the corporation in relation to social welfare and to bid for a positive appraisal of the company's identity. The relevance of webpages for corporate communication, and more specifically for CSR discourses, has been mentioned in several academic studies (see, for example, Atli et al., 2018; Hetze & Winistörfer, 2016; Tenca, 2018). Websites are visual cultural expressions. Therefore, special attention should be given to those images that appear on the webpage and to the way in which these visual resources are used to communicate. The present study seeks to examine how global companies disseminate their views on the environment by using certain types of images, which promote specific discourses. The investigation is based on the analytical tools proposed by Kress and van Leeuwen (1996), who offer a systematic and comprehensive framework to determine how images communicate meaning. Our study examines all the images used on the webpages of the 20 greatest polluters and contrasts them to the visual discourse reproduced on NGO environmental websites.
We think that this contrast may be useful for two reasons: first, the comparison between these two types of visual narratives reveals ideological and conceptual differences in the description of the environment and the relationship between humans and the natural world. This helps us to identify the attitude that the greatest polluters take towards environmental matters. At the same time, the discursive and communication strategies used by NGOs can be useful for those companies that want to improve their environmental communications and reach the general public more effectively. The remainder of this paper is structured as follows. First, a brief review of the literature on climate change and environmental CSR communications is presented (section 2). Section 3 describes the theoretical framework, based on discourse analysis, ecolinguistics and multimodal discourse analysis, a branch of communication studies which extends the methodological proceedings of linguistics to other semiotic modes, such as images (Jewitt, 2014, 2016; Kress & van Leeuwen, 1996; O'Halloran, 2011). Section 4 explains the methodology that has been used, based on the classification of all the images found on the corporate webpages. Then, research findings are discussed (section 5), before the paper concludes with the academic and practical implications of the investigation and possible lines for future research (section 6).

Literature review Previous scholarship has acknowledged the importance of corporate discourses on climate change as part of the attempt to orientate the stakeholders' perception of a brand (Calabrese et al., 2019; Solomon et al., 2011). To have positive effects on stakeholders, however, climate change disclosure should be planned carefully according to the company's public relations strategy, for example by releasing this information gradually through the media (Lee et al., 2015) and by taking into account possible differences in how different stakeholders will receive environmental information (Radhouane et al., 2018). A number of factors have been found to influence a company's readiness to be publicly engaged with climate change. Among them are size (Córdova et al., 2018; Eleftheriadis & Anagnostopoulou, 2015; Nartey, 2018; Sánchez-Infante Hernández et al., 2020), internal organisation systems (Córdova et al., 2018; Nartey, 2018; Rankin et al., 2011), international presence (Halkos & Skouloudis, 2016), the sector in which the companies operate (Weder et al., 2019), and external pressure (Haque & Islam, 2015; Littlewood et al., 2018), including media exposure (Wang et al., 2013) and extraordinary natural disasters (Pollach, 2018). Research also suggests that those companies and countries which implement environmental issues in their everyday practices, through patents or industrial and economic activity, tend to be more successful in terms of competitive advantages, economic growth and innovation (Ferreira et al., 2020; Mohammadi et al., 2018; Singh et al., 2019; Skare & Golja, 2012). Several researchers have focused on how corporations use communication policies as 'greenwashing'. While the meaning of the term is disputed, greenwashing is generally understood as false advertising or partial disclosure of environmental data (Gatti et al., 2019).
One of the most frequently quoted definitions states that greenwashing is 'the practice of promoting environmentally friendly programs to deflect attention from an organization's environmentally unfriendly or less savoury activities' (Marquis & Toffel, 2011, p. 19). Greenwashing refers to information policies and it should be distinguished from the environmental performance achieved by a corporation. Investigations show that 'hard greenwashing' (developing environmental communications without implementing real CSR policies) may actually be detrimental to the reputation of a firm (Bazilier & Vauday, 2013). In this paper, we consider greenwashing strategies in relation to discursive formation, i.e. we do not evaluate the environmental practices that companies implement, or the degree of falsehood of the environmental information that they disclose. Rather, we draw on the philosophy of ecolinguistics and multimodal studies to reveal how companies use website images to promote certain assumptions and narratives on technology, consumerism and business activity which prevent or moderate ecological activism. Thus, as this investigation shows, the visual narratives reproduced on the environmental webpages of the greatest polluters induce social conformity in relation to environmental damage, downplay the urgency of the fight against climate change and suggest that actions to avoid ecological destruction should be subordinated to industrial and economic development. Some scholars have discussed the role that environmental CSR reporting plays on corporate websites, as we do in this study. Morhardt (2010), for example, carries out a comprehensive study of all Fortune 500 and Fortune 1000 companies in 25 industrial sectors and ranks them in terms of reporting quality according to the Pacific Sustainability Index. Jayanti (2018) analyses the webpages of the 100 most sustainable global companies to conclude that although there is significant variation in sustainability topics, environmental concerns are firmly anchored to corporate communications, with most companies dealing with topics associated with climate change, such as energy consumption (68%) or transport (88%). Weder et al. (2019), for their part, analyse the energy sector, where, as expected, most companies dedicate considerable communication efforts to CSR reporting. These previous investigations focus on the criteria that influence CSR reporting or, alternatively, on the major topics and types of actions that companies include on their websites. Our paper contributes to the scientific literature in a different way. Its purpose is to disclose the visual narratives on climate change which 'the biggest polluters' circulate among the general public and to compare them with the narratives created by non-profit organisations. We innovate, therefore, by adopting a multimodal approach, which deals with ecological visual discourse, an area which is under-researched. Gong, for example, recently referred to the need to undertake more 'multimodal ecological discourse analyses' (2019, p. 50). We also innovate by adopting a comparative method which contrasts visual narrative patterns in business and NGO contexts. Generally, the relationship between NGOs and corporations in terms of sustainability practices has been studied as an example of partnership (Bitzer & Glasbergen, 2015; Idemudia, 2017; Moosmayer et al., 2019; Shumate et al., 2018) or social proximity (Joensuu et al., 2015).
However, with some exceptions (Ferguson et al., 2016; Fernández-Vázquez & Sancho-Rodríguez, 2020a), scholars have not yet compared the use of communication strategies in corporate and NGO websites. As we have said, we believe that this comparison can be useful to identify examples of good practice in NGO environmental communications that can be transferred to the corporate world. To identify the visual narrative patterns used on corporate websites, we should consider rhetorical strategies and the types of discourses that have been found to be prominent in environmental CSR reports. Often, references to profit and financial return occupy a central position in corporate discussions on climate change (Ferguson et al., 2016; Laine, 2010). This suggests that companies are more interested in preserving their business and reputation (by pretending to be environmentally concerned agents) than in undertaking real efforts to face the climate crisis. The prevalence of certain rhetorical strategies in corporate discourses on climate change seems to confirm this hypothesis of 'greenwashing'. Tregidga et al. (2013), for example, show how corporate communications on sustainability resort to the metaphor of a journey without destination, which at first sight suggests a commitment to change (the journey), but actually reinforces 'the business as usual', thus deflecting 'attention away from the destination, that is, sustainability' (Tregidga et al., 2013, p. 121). Interestingly, Ihlen and Roper (2014) perceive an evolution in sustainability corporate discourses, which have started to discard the 'journey' metaphor, implying that companies have already arrived at the desired destination of being 'sustainable', which allows them to 'balance' their business with environmental care. Lischinsky (2015) explores the use of the natural environment as a stakeholder in CSR reporting and concludes that the environment is represented as an entity without agency, unlike other stakeholders, in an effort, once again, to obscure company responsibilities. Wright and Nyberg (2017), for their part, argue that corporations use framing (orienting the debate on an issue in a certain way), localising (making the conceptual frames relevant for local contexts) and normalising (realigning practices and activities with dominant organisational discourses) to prevent climate change concerns from altering 'business as usual'. Similarly, Ferns et al. (2019) and Jaworska (2018) conclude that companies resort to myth-making, hedging and distancing strategies to 'simulate' commitment to climate change, while refusing to adopt specific solutions to solve the problem. Nik Ahmad and Hossain (2019) also find that recent efforts on the part of Malaysian companies to include allusions to climate change in their CSR reports are mostly rhetorical exercises and do not correspond to a real concern. Regarding discourse typology, Nik Ahmad and Hossain (2015) follow Itanen (2011) to identify three kinds of CSR discourses in relation to climate change: business discourses, related to financial issues and strategic management; caring discourses, i.e., statements suggesting that the company is concerned with social problems; and sharing discourses, showing the company's efforts to establish alliances with other social agents. Shrivastava and Guimarães-Costa (2017) also perceive the tendency of corporations to include cooperation with other social agents as part of their corporate discourse in an attempt to gain legitimacy.
In addition, their investigation shows a development in the corporate discourse on sustainability, which used to be based on 'eco-efficiency' and has now evolved to more blended or 'hybridised' forms, as firms try to make their interests converge with those of other social stakeholders. For example, O'Connor and Gronewold (2013) examine the sustainability reports of the main global oil companies and point out that these corporations combine institutional and competitive advantage language in their discourses. Dahl and Fløttum (2019), for their part, focus on energy companies, where they also find a variety of discourses, with differing emphases on the topics of risk, responsibility and opportunity. In a more critical vein, Ferguson et al. (2016) showed that corporate website communication tends to rely on vague claims and vague visual and linguistic content, in comparison with NGO website messages, which are more factual and verifiable. Gong (2019) finds that environmental corporate reports frequently contain 'destructive' discourses, which discourage actions that would benefit the environment but could damage the economic interests of the company. Fernández-Vázquez and Sancho-Rodríguez (2020a), for their part, show how Spanish Ibex 35 corporations build climate change discourses which emphasise passivity on the part of the addressee and which reinforce technocentric attitudes, as a way to prevent drastic actions against climate change. Our investigation draws on this last study from the methodological point of view, but it expands the scope to a larger international context by focusing on companies which are based in different countries and which have a global influence in terms of economic activity. Multimodal analysis Contemporary linguistics and social theory consider discourse as a social construction (Fairclough, 1989). In the 1980s, a group of scholars developed a branch of linguistics which they called Critical Discourse Analysis (CDA) to determine how language contributes to upholding and challenging dominant ideologies. CDA concentrates on linguistic elements with the intention of disclosing 'their generally hidden determinants in the system of social relationships, as well as hidden effects they may have upon that system' (Fairclough, 1989, p. 5). To reveal these 'hidden effects', CDA focuses the lens on the linguistic choices that appear in a text and discusses how these choices, be they lexical, grammatical or structural, contribute to the persuasive effect that the author intends to achieve. More recently, researchers have pushed the limits of CDA beyond the linguistic sign. As a development of CDA, Multimodal Analysis argues that, in order to understand the meaning of an act of communication, we should look at different semiotic modes, including visual resources. Working within a multimodal perspective, Kress and van Leeuwen (1996) claim that the conception and presentation of images (the visual choices that we make) influence the way in which we perceive reality. To study how visual structures affect our perceptions, Kress and van Leeuwen put forward a theory of visual grammar, a multimodal approach to communication which distinguishes three types of meaning: representational, interactional and compositional. Representational meaning covers narrative representations and conceptual representations. In narrative representations the elements reproduced in the image (which Kress and van Leeuwen call 'participants') show some sort of interaction.
They are 'doing something to or for each other' (1996, p. 56). They are connected by some sort of transactional relation or 'vector'. By contrast, in conceptual representations the participants are static. They are presented 'in terms of their generalized and more or less stable timeless essence' (1996, p. 56). Interactive meaning, for its part, accounts for the contact between the producer and the viewer of the image. It can be explained by three dimensions: gaze, size and perspective. In terms of gaze, we should distinguish between those images which appeal to the viewer, for example by looking directly at the recipient of the image, and those in which the participant being represented is not interacting with the viewer (absence of gaze). In the first case, we speak of a 'demand' (by interacting with the viewer, the participant symbolically demands something of the recipient). Conversely, when there is no interaction with the viewer, we speak of an 'offer' (the participant is an object for contemplation). Compositional meaning attends to the way in which the participants are arranged according to certain patterns in order to form a meaningful whole. This implies looking at information value (the placement of elements), salience (elements in the foreground are given more importance) and framing (imaginary lines which connect the elements in the composition). Table 1 summarises some of the major functions of Kress and van Leeuwen's grammar of visual design. Ecolinguistics Ecolinguistics is a branch of CDA that links the study of discourse with ecology. Ecolinguistics generally uses the same tools as CDA, but it understands ideology and power relations as notions that cover both human and non-human subjects (Dash, 2019). The words used to describe animals or plants, for example, may contribute to presenting them as instrumental entities, according to an anthropocentric philosophy, or may endow these non-human beings with agency, as autonomous subjects. A crucial area of research for ecolinguistics is how language contributes to creating specific stories or narratives: what Stibbe calls 'the stories-we-live-by' (2015, p. 6). These are cognitive structures which influence the way in which we perceive the relationship between humans and nature, economic growth and technological progress, and which, consequently, determine how we will act towards the ecosphere. In their ecolinguistic study of UK national newspaper editorials, for example, Norton and Hulme (2019) identify several narratives that are relevant for the analysis of climate change corporate discourse, among them: the 'smart growth reformer story', which holds that capitalism and market solutions can stop climate change from becoming a catastrophe; the 'ecomodernist' story, which suggests that technological innovations can mitigate climate change; and the 'ecoactivist story', which contends that humans are reaching the limit of natural resources and destroying the natural world upon which they depend. Ecocriticism claims that the hegemonic narratives to which we are exposed appear between the lines of a text (Stibbe, 2015). They are not necessarily self-evident. These narratives can be identified, however, by analysing the language used in a certain discourse. The ecolinguistic framework can also be extended to other semiotic codes, following the path of multimodal analysis.
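To make this framework concrete before it is applied below, the following sketch shows how a single image annotation combining the representational and interactional dimensions might be recorded in code. This is a minimal illustration under our own assumptions: the field names, the Enum values and the example image are invented for exposition and are not part of Kress and van Leeuwen's grammar or of any published coding instrument.

```python
# Illustrative annotation record for one image, following the visual
# grammar dimensions described above. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Representation(Enum):
    NARRATIVE = "narrative"      # participants interact via a vector
    CONCEPTUAL = "conceptual"    # participants shown as static essences

class Interaction(Enum):
    DEMAND = "demand"            # participant appeals to the viewer (e.g. direct gaze)
    OFFER = "offer"              # participant is an object of contemplation

@dataclass
class ImageAnnotation:
    image_id: str
    main_topic: str              # e.g. "nature", "technology", "people"
    representation: Representation
    interaction: Interaction
    negative_connotation: bool   # e.g. drought, floods, pollution
    salient_element: str         # most foregrounded participant

# A hypothetical annotation for one corporate website image.
example = ImageAnnotation(
    image_id="corp_017",
    main_topic="technology",
    representation=Representation.CONCEPTUAL,
    interaction=Interaction.OFFER,
    negative_connotation=False,
    salient_element="offshore drilling platform",
)
print(example)
```

Recording each image as a structured entry of this kind makes the narrative patterns countable and comparable across corpora, which is the logic behind the analysis that follows.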
Thus, in this investigation we apply the ecolinguistic method to the study of website images in order to identify the underlying narratives (or stories) that environmental corporate discourses promote. Data collection This study was conducted in two phases: data collection and visual analysis. In the first stage (data collection), Google Chrome was used to track down the corporate websites of the world's 20 greatest polluters, according to the research published by the Climate Accountability Institute in the US (2019). These global companies are Saudi Aramco, Chevron, Gazprom, ExxonMobil, National Iranian Oil Company, BP, Royal Dutch Shell, Coal India, Pemex, Petróleos de Venezuela (PDVSA), PetroChina, Peabody Energy, ConocoPhillips, Abu Dhabi National Oil Company, Kuwait Petroleum Corporation, Iraq National Oil Company, Total SA, Sonatrach, BHP Billiton and Petrobras. The researchers browsed through the webpages to locate the information that they contained on climate change. This information was downloaded. Data were collected in January 2020. When a company had several websites, only the main corporate global site in English was taken into account. All website images were analysed. We discarded only those images that showed charts, diagrams or abstract realities of some sort, as they have an 'objective' nature which makes them unsuitable for analysis according to visual grammar (Kress & van Leeuwen, 1996). A total of 130 images were identified on the company websites. Since one of our aims was to compare corporate discourses on climate change with the discourses enacted by other social actors, we followed the same procedure for the websites of 12 global non-profit organisations (NGOs) which take the protection of the environment as one of their main objectives. These NGOs were selected on the basis of a sample compiled by the University of California Berkeley Library, excluding those organisations which do not have a specific section on their websites to discuss the effects of climate change (https://guides.lib.berkeley.edu/NGOs). The NGOs selected were Greenpeace, Earth Island Institute, Earth Justice, Environmental Defense Fund, Fauna and Flora International, Nature Friends International, Global Footprint Network, International Union for Conservation of Nature, Nature Conservancy, Natural Resources Defense Council, World Agroforestry Center and World Wildlife Fund (WWF). The websites on climate change from these NGOs contained 311 images to be classified. Again, all images were analysed according to the theories of visual grammar. Data analysis After collecting the data that we needed for our research, we tried to determine the importance that the world's greatest polluters give to climate change in their corporate discourses, as manifested through their public webpages. To do so, we distinguished three levels of involvement. Those webpages which contained no specific reference to climate change, or which at best reproduced a formal report that had to be downloaded by the Internet user, were considered to be at the first level. In this sense, we understood that formal and legal reports, deprived of additional explanations, are not appropriate tools to address non-specialised audiences. The second level of involvement corresponds to those companies which make reference to climate change within a general section on environmental sustainability, designed so that it can be understood by a general audience.
Those webpages which contain a specific section for climate change were considered to be at the third level. All pictures were analysed according to the parameters established by visual grammar (see Table 1). As the first step of the investigation, we identified the participants in each picture, which enabled us to determine the main topic being represented. Following the methodology laid out by Fernández-Vázquez and Sancho-Rodríguez (2020a), we distinguished three main topics, with mixed combinations: nature, technology and people. When nature appeared as a participant, we tried to determine if there were negative connotations which could be inferred from the visual representation, as is the case with some of the extreme consequences produced by climate change (drought, floods, pollution, etc.). The representational and interactional functions were also analysed. Thus, we established if there was any kind of interaction among the participants (narrative representation) or if they were static (conceptual representation), and if there was an appeal to the viewer (demand). An intercoder reliability test was performed by an independent observer (a colleague who had not participated in the research) on a 10% sample (13 pictures for the global companies and 32 pictures for NGOs). Cohen's kappa measures were calculated for each variable. There was substantial agreement concerning topic (kappa = 0.950), the representational function (kappa = 0.932) and the interactional function (kappa = 0.920). As expected, given its more subjective nature, connotations created the greatest disagreement, but divergences were still very limited (kappa = 0.896). Results and discussion The analysis of the references to climate change on the websites from the world's greatest polluters showed that a great number of these companies do not appear to be aware of the need to address this issue in their public communications. Only 50% of the companies have specific websites on climate change and one-third of the companies do not include any allusion to this matter on their corporate websites, despite the environmental impact of their economic activity. This is surprising if we take into account the relevance that citizens give to climate change and to corporate stances on this issue, as we explained in the introduction. If we examine the results in terms of the nationality of the parent company, we see that there is a much greater awareness of the importance of climate change as a public relations issue in the West and in developed countries in general. All the Western companies (from the United States, Western Europe and Australia) have specific websites on climate change. This seems to correspond to the situation in other Western nations, such as Canada, Japan and Spain (Freedman & Jaggi, 2010). In Latin America, Africa and the Middle East, only the Saudi Arabian company Saudi Aramco and the Brazilian company Petrobras follow this public relations lead, while more than half of the non-Western companies lack allusions to climate change on their websites (58.33%). The results are summarised in Table 2. It is also relevant to mention that many of the environmental websites appear to be poorly developed, with a very limited number of images, which makes them unattractive for the non-specialised reader. This supports the idea that global companies need to develop greater efforts in their public communications concerning climate change in order to identify themselves as environmentally responsible agents.
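Returning briefly to the intercoder reliability check described in the data analysis above, the following is a minimal sketch of how Cohen's kappa can be computed for one coded variable, assuming the two coders' labels for the sampled images are stored as parallel lists. The labels shown are invented for illustration; in the study itself, one such comparison would be run for each variable (topic, representational function, interactional function and connotations).

```python
# Minimal sketch of the Cohen's kappa computation for intercoder agreement.
# The two label lists below are hypothetical stand-ins for the real coding.
from sklearn.metrics import cohen_kappa_score

primary_coder =     ["technology", "nature", "people", "technology", "nature"]
independent_coder = ["technology", "nature", "people", "technology", "technology"]

kappa = cohen_kappa_score(primary_coder, independent_coder)
print(f"Cohen's kappa for main topic: {kappa:.3f}")
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is why it is preferred over a simple match rate for categorical coding of this kind.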
Focusing on the visual analysis, the most remarkable aspect is the central role given to technology, which appears as the main participant in almost half of the images (46.92%) and is one of the relevant actors in almost three out of four pictures (72.3%). This contrasts strongly with the scarce presence of nature, which is the main participant in just 6.92% of the images, and with the limited role given to people (16.15% only). A deeper analysis reveals that most pictures which have people as main participants are pictures of the CEO or the managing team, with only 10 pictures referring to other persons (7.69%). The prevalence given to technological actors in the website images is consistent with the importance that technology has for the definition of corporate identity (Cheng, 2011). Still, it is surprising to find so few images with nature as a major participant on environmental webpages. This striking absence emerges even more clearly if we look at the situation on the NGO webpages, where nature is the main participant in 29.9% of the images and one of the main actors in 40.83% of the visuals. By contrast, technology appears as the main actor only in 13.5% of the instances (see Tables 3 and 4). The way in which nature is described also differs in the two subsets of images. The websites of the greatest polluters do not contain a single image which portrays the adverse consequences of climate change upon nature. In contrast, the NGO websites make frequent use of negative natural depictions, with 61 images creating negative connotations (19.61% of the total, see Table 4). A typical image on these websites is the grey smoke coming from a factory or the destructive effects of natural phenomena which derive from climate change, like deforestation. Interestingly, the NGO images do not give maximum prevalence only to nature; this prevalence is shared with a third actor, people, who occupy a marginal role in the global companies' visual discourse. On NGO websites, people appear as the main participant in more than one-third of the images (36.23%) and they are a main actor in almost half of them (47.27%). These results are consistent with previous research on Spanish IBEX 35 companies (Fernández-Vázquez & Sancho-Rodríguez, 2020a), but they show a greater tendency to ignore nature and people as visual participants on corporate websites, and a clearer orientation to highlight the importance of technology. Our findings suggest that the narrative on climate change that the global companies are constructing as part of their corporate identity differs significantly from the narratives enacted by other social agents who are committed to fighting against this problem, like the NGOs. The abundance of technological images on the global company websites transmits the idea that climate change may be mitigated by the technological solutions and products which are associated with the economic activity of these corporations. The implicit belief in all-powerful technology which the images transmit conforms to 'the ecomodernist story' (Norton & Hulme, 2019), one of the possible narratives that can be enacted to account for climate change. The ecomodernist story admits that humans need to control their impact on nature, but it rejects imposing limits to economic growth or taking nature as an autonomous entity to which humans are subjected.
Instead, it argues that technological advances guarantee that climate change will be controlled without altering the current social and economic framework and without slowing down modernity. From a practical point of view, this narrative is useful for the economic interests of the global corporations, as it reduces the public's anxiety about environmental damage and induces people to keep their consuming habits intact. If climate change can be controlled by technology, then no radical, immediate actions should be taken, particularly if these actions come at an economic cost. At the same time, by association with a technocentric discourse, through the proliferation of technological images on their websites, these companies suggest that the products that they commercialise are environmentally harmless and can even be beneficial for nature. Consuming their 'technological' products is, to put it another way, a way of helping to mitigate the worst effects of climate change. There is, in this sense, an indirect appeal to the viewer to accept that corporations are the ones who need to take the lead in the fight against environmental problems by means of technology. The shocking scarcity of natural images on what are, after all, environmental websites confirms that the internet viewers' attention is subtly displaced from environmental care as an end in itself to an alternative mental framework in which nature is a secondary actor. This strategy of greenwashing (claiming to defend nature but actually subordinating environmental care to other goals) is corroborated by the limited role given to people, who are, once more, largely absent from the visual narrative (less than 10% of the images make reference to people other than the managing team). Again, this absence reinforces the value of technology, metonymically associated with the company's commercial products, and induces a certain passivity on the part of consumers. The solution to climate change rests in the hands of companies and their technological solutions. Hence, customers do not need to take any individual action, let alone change their consuming habits. In this sense, by inducing passivity on the part of the consumers, corporate visual narrative prevents or moderates ecological activism, which could be an obstacle for the companies' commercial interest ('business as usual'). A very different narrative emerges from the visual analysis of the NGO websites, where there are frequent references to nature, particularly to denounce the damage that the environment is suffering from industrial and technological activity. The relevant role which people assume on these websites indicates that the solution to environmental problems does not lie in the 'technological' products commercialised by companies but in the ecological actions undertaken by the consumers themselves. The potential contribution of technology to preserving the environment is acknowledged, as can be seen in the images in which technology is associated with renewable energies. But the negative effects of industrial activity are also shown very clearly, as seen in the great number of images with negative connotations, which show the devastating effects of climate change (see Table 5). These negative connotations transmit the urgency of the climate crisis and the need to take radical, immediate actions.
We find, therefore, a much less optimistic narrative than the 'ecomodernist' story, one that demands individual responses from consumers, diminishes the positive effects of technology, and situates natural protection as a higher value. To see the extent to which these two narratives are promoted on corporate and NGO websites, we should also pay attention to the analysis of representational and interactional functions. As Table 6 shows, the proportion of conceptual images on corporate websites is greater than on the NGOs', where there are more narrative representations (24.43% against 6.15%). The greater dynamism which NGO images exhibit is coherent with the idea of moving the viewer to action, something which is confirmed more clearly by the prevalence of images which contain an imaginary appeal to the viewer (see Table 7). On NGO websites, one out of four images 'demands that the viewer enter into some kind of imaginary relation with him or her' (Kress & van Leeuwen, 1996, p. 122). This symbolic appeal is generally realised through direct gaze, although the demand was also found to manifest itself by reproducing textual messages which address the viewer ('Raise your voice for people and the planet' or 'Create a climate-resilient and zero-carbon world', for example). In contrast to the NGO sites, global company webpage images contain a more limited number of appeals to the viewer (10%) and give priority to an interaction based on offer, which represents the participants as objects of contemplation. This difference in the use of the interactional function confirms the discrepancy between the two narratives in terms of agency: defence of human agency and ecological activism on NGO websites and promotion of corporate technological agency on company websites. Conclusions The aim of this study was to determine the extent to which 20 global companies considered the world's greatest polluters address climate change in the construction of their reputational identity and to examine the visual narratives that these companies promote on their websites. The visual analysis we carried out shows that a great number of these companies give a very limited role to the discussion of climate change on their corporate webpages. This is particularly true for non-Western companies, perhaps because the legal concerns and pressure from the media are not as strong as in the West. The interest that consumers and citizens show in the environmental crisis, and the demand for corporations to take a clear stance on this matter, makes it advisable to remedy this absence. The fact that some corporations include information on climate change on their webpages, unlike others working in the same sector of activity, gives the former a competitive advantage in terms of reputation and brand management. It would be unwise, in this sense, to disregard the need to pay attention to environmental digital communications, particularly given that updating information on corporate webpages is relatively simple and inexpensive. On a different note, many of the websites that we analysed contain a very limited number of images, as we can see very easily if we contrast them with NGO websites (an average of 6.5 images for the greatest polluters' websites against 25.9 on the NGO websites). Again, it would be advisable for companies to improve their visual discourse, including a greater number of pictures or videos on their environmental webpages, in order to make them more attractive and interesting for the general public.
This strategy would complement the release of CSR reports, extending their PR policies to those stakeholders who are not seduced by technical or specialised information. For those companies that have chosen to speak about climate change on their webpages, we can identify some common features in terms of visual messages. As the multimodal visual analysis has shown, corporate discourse on climate change links environmental care with technological advances, suggesting that technology is the best solution to mitigate the effects of the environmental crisis (cf. Fernández-Vázquez & Sancho-Rodríguez, 2020a). This association works to protect the economic interests of the companies, convincing the readers that technological and industrial activities should be promoted. The visual discourse reproduced on corporate webpages ignores the environmental problems caused by economic development and industrial activity. Conversely, the link between the companies' products and technology creates the paradox of suggesting that consuming this merchandise (mostly oil derivatives) actually helps the environment. It is also worth noting that global company websites give a very limited role to people in visual representations, as we can see if we compare these websites with NGO environmental webpages in terms of the presence of human participants, narrative representations and interactional functions. Corporate climate discourse assigns a passive role to citizens and it places agency on the actions taken by the companies themselves, particularly through the use of technological solutions. This is in direct contrast to the call for action that we find in NGO climate discourses, where nature and people occupy a much more prominent position, as seen in their presence as outstanding participants in most website images. We find, therefore, two competing visual narratives, one which subordinates environmental care to economic development and which induces citizens to keep their consuming habits intact, and an alternative narrative that calls for explicit actions on the part of individuals and which situates the protection of nature at the centre. Of course, it is perfectly understandable that global companies try to protect their business. Still, we believe that these companies could follow the example of some of the visual communication strategies that NGOs deploy on their websites. Giving a greater relevance to natural images, including the negative effects of climate change, would show stakeholders that the company really cares about the environment and that it shares citizens' concerns about climate change. Likewise, including more images with people, in which narrative and interactive functions are present, would be a way of inviting customers to join the company in a common effort to create a better world, even if this process is complex and far-reaching. Rejecting greenwashing strategies and making people feel more involved in the actions undertaken by the company to protect the environment can only result in a better reputation and a higher social standing, which ultimately will also protect business. We believe that the results of our study can help global companies to improve their environmental digital communications, realigning their PR strategies with social expectations.
Our study also contributes to the existing literature on CSR discourse by applying an ecolinguistic and multimodal visual framework, an area of analysis which is still under-researched. This study is not, however, without limitations. The main limitation is the size of the corpus, which is too narrow and mostly focuses on companies from the oil sector. It would be desirable to expand this study with a larger selection of companies which belong to different sectors of activity, to see if the visual narrative strategies that we have identified correspond to the general situation of multinational companies. The results of previous investigations on a more local level (Fernández-Vázquez & Sancho-Rodríguez, 2020a), while still limited, support this possibility. On the other hand, the results of our study on visual discourse could also be extended to investigate verbal language, to see how visual and written narratives interact.
2020-11-12T09:08:37.006Z
2020-11-10T00:00:00.000
{ "year": 2021, "sha1": "c0ca7501af7ebd1ae6ef60df07b2a81e0c62c29a", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1331677X.2020.1836993?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "03829356864128e5fb312a2ffeb82d40cae9f9c5", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Business" ] }
268211457
pes2o/s2orc
v3-fos-license
A Rare Case of Acquired Factor VIII Inhibition in an Elderly Female Acquired hemophilia A is a rare condition characterized by the development of autoantibodies against coagulation factor VIII. It often initially presents as serious bleeding in the absence of risk factors and carries high morbidity and mortality if not diagnosed early. Due to its rare nature, data is limited, and guidelines are primarily based on expert opinion. Here we present a case of an elderly patient with severe gastrointestinal bleeding found to have activated partial thromboplastin times, plasma mixing studies, and coagulation factor activity levels consistent with acquired hemophilia A. We hope to bring awareness of this rare disease and promote its consideration in the differential of unexpected bleeding to improve safety outcomes. Introduction Acquired hemophilia A (AHA) is a very rare coagulation disorder, occurring at approximately 1–1.5 cases per million people per year.1 While other acquired coagulation disorders can occur, AHA develops from autoantibodies directed against coagulation factor VIII. Coagulation factor VIII appears to be the most commonly affected coagulation factor in acquired coagulation factor deficiency disorders.1 Though acquired hemophilia A is idiopathic in up to 50% of cases, it is associated with pregnancy, autoimmune conditions, malignancy, and medications.2 AHA is a disorder of the elderly, with a median age of 73.9–78 years.3 Acquired hemophilia A carries an elevated risk of clinically significant bleeding and high rates of morbidity and mortality. A high index of suspicion of AHA is warranted in cases of unexpected bleeding for prompt diagnosis and treatment to improve survival outcomes. Case report/presentation An 87-year-old female presented to the emergency department, referred by her primary care physician for acute anemia identified on routine laboratory evaluation. Her medical history was only remarkable for normocytic anemia, worked up to be due to iron deficiency, being treated with weekly intravenous iron infusions for one year. She had no active complaints and denied any active bleeding, lightheadedness, dizziness, changes in bowel habits, or skin changes. She abstained from alcohol and recreational drug use. On initial evaluation, her vital signs revealed a heart rate of 87 beats per minute and blood pressure of 101/47 mmHg. Physical exam was significant for scleral icterus and bruising on her lower extremities. Initial laboratory workup revealed hemoglobin 7.6 g/dL, MCV 78 fL, platelets 364 k/uL, aPTT 191 s, PT 11.9 s, INR 1.08, total bilirubin 3.3 mg/dL, and indirect bilirubin 2.7 mg/dL. ANA titers were within normal limits. Stool guaiac was found to be positive. A mixing study revealed persistently elevated aPTT (90.2 s). CT imaging of the chest, abdomen, and pelvis did not identify any clear source of bleed but did reveal a mass in the rectum, as seen in Fig. 1. Initially, one unit of packed red blood cells was administered along with intravenous fluids and iron supplementation. Subsequent hemoglobin levels improved to 8.2 g/dL.
Her hospital course was complicated by progressively worsening melanotic stools. Surgery and gastroenterology were consulted and a colonoscopy and endoscopy were recommended but were deferred due to coagulopathy. On hospital day 3, the patient was found to have acute lethargy and confusion, for which stat labs were ordered, revealing Hgb 7.6 g/dL and aPTT >400 s. The patient was administered three units of packed red blood cells, two of fresh frozen plasma, and one unit of cryoprecipitate. She was transferred to the ICU for closer management, where subsequent aPTT improved to 142.6 s. The trend of aPTT and Hgb is seen in Table 1 (hemoglobin stable except at day 5, when the gastrointestinal bleeding worsened; PT within normal limits throughout the hospital course; aPTT remaining high despite treatment with aPCC, with a peak >400 s on day 3; platelet values within normal limits throughout). Her hospital course was further complicated by worsening gastrointestinal bleeding and a loss of pulse on hospital day 4. Advanced cardiopulmonary life support was initiated, and spontaneous circulation returned after 22 min. Her mental status deteriorated and she was intubated to protect her airway. Ultimately the patient's family decided that the patient would have wished for comfort measures. Subsequently, the patient was placed into hospice care and ultimately passed away within one day. Posthumous laboratory evaluation revealed markedly decreased factor VIII activity (<1%). Reduced factor VIII levels in the setting of inappropriate correction of aPTT on mixing study suggested acquired hemophilia A. Discussion Acquired hemophilia A (AHA) is a rare variant of hemophilia A that commonly occurs in the elderly as compared to the congenital form. The annual incidence of AHA for children younger than 16 years of age is 0.045 per million per year, whereas for adults greater than 85 years of age the yearly incidence is as high as 14.7 per million per year.4 Acquired hemophilia develops by the formation of autoantibodies, also known as inhibitors, to coagulation factors. Coagulation factor VIII is the most common coagulation factor to be affected, as seen in AHA.1 Though AHA is often idiopathic, it can occur in association with pregnancy, autoimmune conditions, and malignancy.2 Medications associated with AHA include antibiotics, NSAIDs, amiodarone, rivastigmine, sunitinib, heparin, phenytoin, chloramphenicol, and methyldopa.3 Due to the rare nature of this disorder, data is limited; however, some studies have shown mortality rates of up to 23% within one year of diagnosis.5 The most obvious risk factor in our patient was the colonic mass seen on CT imaging of the abdomen. Unfortunately, the mass was unable to be evaluated for malignancy due to the critical condition of our patient and ultimate medical course. However, upon review of the literature, it appears that acquired factor VIII inhibitors have been seen with colorectal cancers.6 Acquired hemophilia A usually presents with clinically significant bleeding in the elderly or in the postpartum period. New onset of bleeding without any previous history of bleeding should raise suspicion for acquired hemophilias.7 Classically, patients will present with subcutaneous bleeding in the form of purpura or soft tissue hematomas. However, unlike the congenital form of hemophilia A, hemarthrosis is uncommon.8
More severe conditions such as gross gastrointestinal bleeding can develop, as was the case in our patient. Workup is initiated by evaluating a complete blood count (CBC) and a coagulation panel. CBC will reveal a normal platelet count, and the coagulation panel an isolated prolonged activated partial thromboplastin time (aPTT) at least two to three times the normal limit. Prolonged aPTT can indicate a deficiency of intrinsic coagulation factors VIII, IX, XI, or XII, or an inhibitor to these factors. The coagulation pathway is highlighted in Fig. 2. A mixing study may then be performed to distinguish between a factor deficiency and a factor inhibitor. This is done by mixing a 1-to-1 ratio of normal plasma with the patient's plasma. Correction of aPTT after a mixing study indicates a deficiency of coagulation factors, whereas a lack of aPTT correction suggests a coagulation factor autoantibody or inhibitor. Quantitative assays of factor activity and factor inhibitors are diagnostic of acquired hemophilia.9 In this patient, the mixing study did not appropriately correct the aPTT, and the coagulation factor VIII activity levels were low, suggesting an acquired factor VIII (FVIII) inhibitor or autoantibody. Treatment for acquired hemophilia A is primarily directed towards hemostasis management and eliminating factor inhibitors. For patients with minor bleeding and less than 5 Bethesda units (BU) of inhibitor, observation is sufficient, as titers of the factor inhibitor have sometimes disappeared on their own.10 In patients with coagulation FVIII activity levels greater than 5% and inhibitor titers less than 2 BU, desmopressin is effective for minor bleeding.10 For major bleeding with inhibitor levels less than 5 BU, FVIII replacement and desmopressin are considered first-line therapy.4 However, bypass agents such as activated prothrombin complex concentrate (aPCC) or recombinant activated factor VII (rFVIIa) are used for higher levels of inhibitor concentration.4 These agents work by bypassing the coagulation cascade to produce thrombin, which would otherwise be limited by FVIII. Our patient was treated with aPCC instead of desmopressin or direct FVIII replacement, partially improving aPTT levels and bleeding to some extent. Quantitative evaluation of FVIII activity levels unfortunately returned posthumously. The development of acquired coagulation factor inhibitors usually involves an antibody which binds to and inactivates coagulation factors. Subsequently, the coagulation cascade, as shown in Fig. 2, is compromised and clots are not able to be effectively formed, leading to uncontrolled bleeding. The underlying mechanism is poorly understood; however, it is likely that the acquired inhibitors are antibodies binding to domains of the coagulation factors, preventing their effective function. Specifically, for acquired inhibitors to coagulation factor VIII, it is hypothesized that autoantibodies bind most commonly to the A2 and A3 domains of coagulation factor VIII.3 Coagulation factor VIII is then unable to effectively serve as a cofactor to coagulation factor IXa to convert factor X into factor Xa.
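The diagnostic pathway just described amounts to a small decision tree, which the sketch below encodes for illustration only. The thresholds and messages are simplifications for exposition, the function name and wording are our own, and nothing here is clinical guidance.

```python
# Illustrative decision logic for an isolated prolonged aPTT, mirroring the
# workup described above (coagulation panel -> mixing study -> factor assays).
def interpret_coagulation_workup(aptt_prolonged: bool,
                                 pt_prolonged: bool,
                                 mixing_study_corrects: bool,
                                 fviii_activity_pct: float) -> str:
    if not aptt_prolonged:
        return "No intrinsic-pathway abnormality suggested by aPTT."
    if pt_prolonged:
        return "Combined PT/aPTT prolongation: consider common-pathway causes."
    if mixing_study_corrects:
        return ("aPTT corrects on 1:1 mix: deficiency of an intrinsic factor "
                "(VIII, IX, XI or XII).")
    if fviii_activity_pct < 1:
        return ("aPTT fails to correct and FVIII activity is markedly low: "
                "consistent with an acquired FVIII inhibitor (AHA); "
                "quantify the inhibitor titer with a Bethesda assay.")
    return "aPTT fails to correct: suspect an inhibitor; pursue factor assays."

# The presented case: isolated aPTT prolongation, non-correcting mixing
# study, posthumous FVIII activity <1%.
print(interpret_coagulation_workup(True, False, False, 0.5))
```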
Malignancy is one of the most common causes of acquired inhibitor production, and it is likely related to the immunogenic properties of the malignancy. The underlying mechanism is poorly understood and requires further investigation. Decreasing the levels of factor inhibitors is an essential mainstay of chronic therapy, as these patients remain at high risk for future bleeds. First-line treatment includes prednisone (1 mg/kg/day) by itself or with the addition of cyclophosphamide (50–100 mg/day).11 Some evidence suggests better patient outcomes with a combination of prednisone and cyclophosphamide.12 Conclusion This is a case of acquired hemophilia A in a patient, likely secondary to lower GI malignancy. Acquired hemophilia A is a rare disorder that presents with new-onset clinically significant bleeding, typically in the elderly. Due to its rare nature, there is limited data, though studies have suggested significant mortality with delayed diagnosis. Physicians should maintain a high suspicion of acquired hemophilia in patients with unexpected bleeding. We present our case to highlight the catastrophic outcomes associated with delayed diagnosis and treatment and to bring awareness of this rare condition. Fig. 1. CT imaging of the abdomen and pelvis. A mass suspicious for malignancy is seen at the junction of the ileocecal valve, indicated by the red arrow. Fig. 2. The coagulation pathway, separated into the intrinsic, extrinsic, and common pathways. An inhibitor of coagulation factor VIII is displayed in red to demonstrate acquired hemophilia A.
2024-07-05T06:11:46.204Z
0001-01-01T00:00:00.000
{ "year": 2024, "sha1": "d09147d00244011356a78e54a09dea0c92fe4837", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.55729/2000-9666.1224", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d09147d00244011356a78e54a09dea0c92fe4837", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
247010300
pes2o/s2orc
v3-fos-license
Alcohol consumption trajectories over the Australian life course Abstract Background and Aims Alcohol consumption changes markedly over the life course, with important implications for health and social development. Assessment of these patterns often relies on cross-sectional data, which cannot fully capture how individuals' drinking changes as they age. This study used data from 18 waves of a general population panel survey to measure drinking trajectories over the life course in Australia. Design and Setting Longitudinal survey data from the Household, Income and Labour Dynamics in Australia (HILDA) survey between 2001 and 2018. Participants A total of 20 593 individuals ages 15 or above in two samples assessing quantity-frequency (n = 20 569, 52.0% female) and risky single occasion drinking (RSOD) (n = 17 340, 52.5% female), respectively, interviewed as part of HILDA. Measurements Usual quantity of alcohol consumed per drinking occasion; frequency of drinking occasions per week; average daily consumption, calculated by combining reported usual quantity and frequency; and average reported frequency of RSOD per week. Findings Multilevel, mixed effects models run with fractional polynomial terms found similar male and female alcohol consumption trajectories for quantity-frequency and RSOD measures. Usual quantity of alcohol consumed per drinking occasion (5.4 drinks for men, 3.8 for women) and RSOD frequency (0.56 occasions/week for men, 0.38 for women) peaked in young adulthood, whereas frequency of drinking occasions (2.5 occasions/week for men, 1.7 for women) peaked in middle age. Middle-age drinkers had the highest average daily consumption of alcohol (1.4 drinks/day for 54-year-old men, 0.6 drinks for 57-year-old women) and engaged in RSOD slightly less than young adults. Conclusions Alcohol consumption in Australia appears to vary substantially over the life course, with usual quantity per drinking occasion and frequency of risky single occasion drinking peaking during early adulthood and average daily consumption and frequency of consumption peaking in middle age. INTRODUCTION Millions of deaths are attributable to alcohol globally each year [1], with 4186 in Australia alone in 2017 [2]. Alcohol consumption contributes to a range of health impacts, including injuries, accidents and longer-term alcohol-related diseases [3]. In 2015, alcohol was one of the 10 leading risk factors for burden of disease across the Australian population [4]. For young people, ages 15-24, alcohol was the leading risk factor for burden of disease for men and the second leading risk factor for women [4]. Young adult drinkers (ages 18-24) are generally most likely to exceed single-occasion drinking guidelines and put themselves at risk of intoxication or short-term, alcohol-related harms [5]. In contrast, the tendency of older drinkers to drink more frequently [6] may put them at greater risk of long-term alcohol-related, noncommunicable diseases such as cancer or heart or liver disease [2,7]. This considerable burden of disease attributable to alcohol across the population makes alcohol a key public health concern in Australia and internationally [4]. Drinking is a dynamic behaviour that varies over the life course. Cross-sectional, Australian analyses have shown alcohol consumption is greatest in early adulthood, following quick increases in adolescence [5,8]. Consumption then declines with age [8,9], particularly risky single occasion drinking (RSOD).
However, frequency of drinking appears to increase with age, with many older Australians drinking daily [5]. As a result, where research may have identified rising consumption during adolescence, peaking in emerging adulthood [6,10,18], middle-age drinkers may now be the peak consumers of alcohol [14] and the most likely to exceed lifetime risk guidelines [5]. These shifts raise questions about how trajectories of drinking may have changed from previously identified patterns, possibly indicating that current health promotion efforts to reduce the risk of alcohol-related harms, often focused on youth drinking, could be better targeted. Work reassessing the prevalence of alcohol consumption and related behaviours across the life stages may help to identify trends in drinking and provide insight into potential stages of life of concern for policy or intervention implementation. The substantial amount of data required to evaluate drinking behaviour over a lifetime has meant previous studies have often focussed on specific life stages [10,19,20] or relied on repeated cross-sectional snapshots [12,14,21]. Because cross-sectional methods cannot assess developments within individuals over time, longitudinal data are key for effectively assessing behavioural development. However, few appropriate longitudinal datasets have a period of follow-up long enough to provide meaningful assessments of change within individuals. Other studies have used multiple different data sources to synthesise trajectories of drinking over longer periods [6]. For example, Britton et al. [6] harmonised data from nine UK cohort studies to produce a key picture of alcohol consumption across the life course, finding mean alcohol consumption peaked in early adulthood and declined through adulthood, with a plateau in middle age, before declining into old age. This approach helps overcome difficulties in gaining the required data and allows for the inclusion of participants from multiple birth cohorts and age groups. However, a single longitudinal dataset may provide a more consistent examination of drinking patterns over time. The present study uses data from a single, representative, longitudinal panel survey, HILDA, to measure drinking trajectories over the Australian life course. To suitably balance available data and model fit, and ensure that individual development would be considered within each model, a range of models with differing minimum years of survey participation requirements were compared. Wave participation was defined by completion of an interview (i.e. an individual who completes an interview but does not respond to any alcohol measures is considered as participating in that wave). Individuals who did not participate in the minimum number of surveys (because of attrition or being unavailable for enough interviews, for example) were excluded. In addition, individuals who participated in enough surveys but did not respond to any alcohol consumption questions throughout were also excluded. Covariates Participant age (in years), gender and household income are reported each wave. Household income was used to adjust for different alcohol consumption outcomes related to differing socioeconomic status [25,26]. To account for the different economic resources available to individuals in different households, equivalised household income was calculated by totalling household income and dividing by an equivalence factor determined by household composition [27].
For example, a single person household with an income of $100 000 has a higher equivalised income than a 2 adult-3 child household with an identical $100 000 income. Gender is included because men historically consume more and experience a greater level of harm from alcohol than women [28]. In addition, men's alcohol consumption often differs from women's at specific stages of life [6,12,29], and in response to life events (e.g. pregnancy, childbirth) [30,31]. Outcomes Alcohol consumption was assessed using quantity-frequency measures. Frequency of alcohol consumption is based on participant responses to the question 'Do you drink alcohol?' Responses were coded based on the average occasions per week that alcohol is consumed and ranged from 0 (for those who responded 'I have never drunk alcohol' or 'I no longer drink') to 7 (for 'Yes, I drink alcohol every day'). Usual quantity of alcohol consumption per drinking occasion was assessed by the question 'On a day that you have an alcoholic drink, how many standard drinks do you usually have?' Participant responses were coded onto a scale from 1.5 ('1 to 2 standard drinks') through to 13.5 ('13 or more standard drinks'). Those who indicated they did not or no longer drank alcohol were coded as 0. A measure of average daily alcohol consumption was derived by multiplying usual quantity by weekly frequency and annualising the result; these yearly totals were then divided by 365 to obtain average daily alcohol consumption, in standard drinks. An Australian standard drink is equivalent to 10 g of alcohol. RSOD was assessed by participants reporting the frequency with which they exceed sex-based thresholds (5 standard drinks for women and 7 for men) in response to the question 'How often do you have 5/7 or more standard drinks on one occasion?' Participants were coded from 0 (for those who abstained or did not exceed the threshold in the past year) to 5 (for those exceeding the threshold 5 or more times per week). Full response categories for each outcome measure are provided in the Supporting information. Statistical analysis In Stata 15 [32], multilevel mixed effects models with three levels and fractional polynomial terms were run to account for the longitudinal, clustered structure of the data. Fractional polynomials are often used in regression models to fit non-linear functions [33,34] and involve fitting variables within a model with a range of fractional polynomial terms to identify the best fitting model. The goal of fractional polynomial models is to provide flexible models that are as simple and quick to run as possible [33]. Model selection was carried out using the 'fp_select' postestimation tool, which implements the function selection criteria described by Royston [34]. Specifically, 'fp_select' tests the fit of each fractional polynomial model against similar models with an increasing number of fractional polynomial terms or dimensions, to a maximum of 5 in this case. Increasing the number of dimensions increases the complexity and widens the range of possible shapes a solution may take. Details of the implemented tests are provided elsewhere [34], but briefly, a model with a single fractional polynomial is first tested against a model with two terms. A non-significant test indicates the less complex model is preferable; otherwise, the cycle continues with more complex models, up to a maximum of 5 [34]. The analysis plan and methods in the present study have not been pre-registered and, as such, the results should be considered exploratory.
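As an illustration of the two computational steps just described, the sketch below first derives average daily consumption from simulated quantity-frequency responses and then runs a simplified first-degree fractional polynomial (FP1) scan over age. The data are simulated, the fit is ordinary least squares rather than the three-level mixed model estimated in Stata, and higher-degree FP terms, random effects and the formal function selection tests are omitted, so this conveys only the shape of the procedure.

```python
# Simplified sketch: quantity-frequency derivation plus an FP1 power scan.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(15, 90, n)

# Simulated survey codings: usual standard drinks per occasion and
# drinking occasions per week (values loosely echo the coding scales).
quantity = rng.choice([0.0, 1.5, 3.5, 5.5, 13.5], n)
frequency = rng.choice([0.0, 0.25, 1.0, 2.5, 5.0, 7.0], n)

# Average daily consumption: weekly drinks, annualised, divided by 365.
daily = quantity * frequency * 52 / 365

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # conventional FP1 power set

def fp_term(x, p):
    """Fractional polynomial transform of x; p == 0 denotes log(x)."""
    return np.log(x) if p == 0 else x ** p

best_p, best_rss = None, np.inf
for p in POWERS:
    X = np.column_stack([np.ones(n), fp_term(age, p)])
    _, rss, *_ = np.linalg.lstsq(X, daily, rcond=None)
    if rss[0] < best_rss:
        best_p, best_rss = p, float(rss[0])

print(f"Best-fitting FP1 power for age: p = {best_p} (RSS = {best_rss:.1f})")
```

In the published analysis the same idea is extended to higher-degree fractional polynomials and embedded in a multilevel model, with Royston's sequential tests deciding how many dimensions are justified.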
In the RSOD sample, men were slightly younger (39 vs 40 years) and engaged in RSOD more often than women (0.11 vs 0 median risky occasions per week). Compared to the primary sample, a much greater proportion of participants abstained partially and throughout the entire study period (15.4% of men, 34.4% of male person-years, 23.6% of women, 49.2% of female person-years). Primary sample models Average daily alcohol consumption Usual quantity per drinking occasion For both men and women, the most suitable model was the five-dimensional model. Again, trajectories were similar for men and women (Table 1: sample demographics by gender for the primary sample, used for the average volume, quantity and frequency models). Usual quantity per occasion peaked at 5.5 standard drinks per occasion for men and just under 4 for women (Fig. 2) in young adulthood (ages 18-24). These peaks were followed by slight declines and a levelling out in middle age (45-64) at around 3 standard drinks for men and 2 for women, before further declines for both genders into old age (65+). Frequency of drinking occasions The model selection procedure indicated a five-dimensional model was preferable for women and a four-dimensional model preferable for men. Despite this difference, trajectories of drinking frequency were broadly similar for men and women. Frequency of drinking rose sharply for adolescents (Fig. 3). Risky single occasion drinking Five-dimensional models were again the best fit for men and women according to our model selection procedure (Fig. 4). There was a broad similarity in the development of RSOD among men and women, although male RSOD remained more frequent than women's. DISCUSSION To our knowledge, this is the first single sample longitudinal study to develop a model of multiple drinking measures across the life course. We found men and women shared similar trajectories across all alcohol consumption measures assessed, with men peaking at and maintaining a higher level than women for each measure. Our results show that RSOD and usual quantity per drinking occasion peak in young adulthood, whereas frequency of drinking occasions peaks in middle age. Consequently, total consumption peaks in middle age rather than young adulthood, despite reductions in the amount consumed per drinking occasion. These findings broadly reflect existing literature surrounding trends in drinking, including among Australian drinkers [6,14,29]. In particular, past international studies have identified peaks in usual quantity in young adulthood [10,29] and plateaus and slight increases relative to mid-adulthood in middle age [35]. The plots of age effects on alcohol volume observed by Kerr et al. [12] were broadly similar to our curves of usual quantity per drinking occasion for men and women. Studies have also found frequency of drinking occasions to peak among middle-age and older drinkers, following rapid increases during adolescence and young adulthood [6,29]. Casswell et al. [10] identified similar trends in frequency of drinking occasions among adolescents and young adults in their sample, which contained drinkers from an older birth cohort predating the young adults in our own sample, highlighting the consistency with which younger drinkers have engaged in drinking over past decades. Our findings concur with those that find a peak in RSOD in young adulthood [12,36,37]; however, the increasing frequency of RSOD we observed among middle-age men and women differed from previous studies [36,37]. Kerr et al.
[12] identified a trajectory for RSOD frequency similar to our own among men, but not for women. Despite evidence of reductions in frequency of RSOD [15], young adults continue to engage in RSOD with a frequency unmatched by other age groups. However, the frequency with which middle-age drinkers engage in RSOD is still of concern, with middle-age men reporting drinking seven or more drinks in a single occasion on average once a fortnight. Limitations The HILDA survey uses an annual follow-up, requiring participants to recall their average consumption behaviour over an entire year. As a result, participants may unintentionally respond to these questions inaccurately. Evidence also suggests the use of quantity-frequency measures, such as those in HILDA, may underestimate alcohol consumed [38], particularly compared to per-capita consumption data or against specialised alcohol surveys [39]. It should also be noted that our sample included limited numbers of older drinkers (307 (1.5%) and 347 (2.0%) individuals age 80+ at first response in our primary and RSOD samples, respectively). This may limit our ability to identify trajectories of drinking among older drinkers that are more representative of multiple birth cohorts. However, the declines in consumption following middle age observed in our models reflect similar behaviours as in other studies of alcohol consumption among older drinkers [20,40]. Comparatively, recent sharp declines in youth drinking will only be partially captured in this sample, because it combines data for specific ages across the entire study period (2001-2019). These findings represent average population trajectories. These trajectories smooth out the effects of socio-demographic factors (e.g. region, income) and specific life events (e.g. marriage), which have been shown to influence drinking behaviour [30,41]. CONCLUSION This study provides the first comprehensive, longitudinal assessment of life-course patterns of alcohol consumption in Australia. Our findings indicate there are substantial changes in alcohol consumption habits as individuals age. Usual quantity per drinking occasion and frequency of risky single occasion drinking peaked during early adulthood, whereas, contrary to the traditional focus on harms among younger drinkers, average daily consumption and frequency of consumption were greatest for middle-age drinkers. Aside from providing an updated picture of drinking trajectories across the life course, these findings also highlight young adulthood and middle age as key points of the life course where future research, education and policy or interventions to reduce harms from drinking could be targeted.
Swami Vivekananda: Revival and reform in the making of Hinduism

The importance of the life and teachings of Swami Vivekananda can never be overestimated by contemporary Hindus; the numerous Ramakrishna centres around the world bear testimony to his abiding influence even 127 years after his address to the Parliament of World Religions in 1893. Vivekananda symbolises a Hinduism that has been able to assert its sovereignty not just over the intolerable and very parochial missionary attitudes of Christianity in the 19th century; his notion of universal Hinduism took root amongst the people of the world and thus positioned itself in the pantheon of World Religions. This article draws on Ninian Smart's notion of neofoundationalism to show how a series of reformers, culminating with Vivekananda, reach into the past to reconstruct and revive Hinduism. I argue that the success of Vivekananda was because of his particular version of Vedanta, which he first made accessible to the West at the Parliament of World Religions. I conclude that had it not been for Vivekananda's message of universalism, Hinduism would not have entered the World Religion stage at the end of the 19th century, and India would not have regained its national pride and self-consciousness.

Introduction

The Parliament of World Religions, in the History of Religions, was a convergence of paths and ideas of different cultures and religious persuasions from East to West. There were many speakers from the East 1, but it is true that Swami Vivekananda is persistently recollected as the most prominent speaker at the inaugural Parliament held in Chicago in 1893. In this article, I examine the effectiveness of revival and reform, and how new elements, both in India and in America – an entirely different sociocultural system of beliefs and practices – are introduced. I look at three major revivals in India: the Brahmo Samaj, the Arya Samaj and, finally, the Ramakrishna Movement. How was an Indian, and in particular Hindu, identity in India being re-formulated? This article attempts in a small way to show how India responded to and dealt with revolutionary changes, and how this had consequences for its worldview. Most religions – but perhaps not all religions, especially in the remote spots of the world – have undergone western and modern influences. And reform happens when, in response to foreign influence and criticism, a re-formation precipitates change, usually from within, to improve conditions. The word reform comes from the Latin re, meaning back or again, and formare, to form; that is, to put together again: to restore, reconstruct or rebuild. I draw on Ninian Smart's (1998:407-408) concept of neofoundationalism as a framework to understand the strategies of reformers who reach back to the past as a re-orientation to the historical beginnings of a religion in the act of reform. So, for example, if one is working with Hinduism, a modern western-educated Hindu will draw on ancient motifs and philosophies and quote texts 3000 years old, but will cast the faith in laws fitting to the thought and struggles of the seminal period of the 19th and 20th centuries. These re-formations, of course, have many implications. Whilst changes occur to protect and preserve the essence of a particular society for the future, they may mean deviations from an assumed faith because of the social and political changes within a society, which could lead to ritual, ethical, doctrinal and philosophical changes. In the case of India, the distortion of, and demeaning attitudes towards, its teachings and practices by the colonisers, and the move towards modernity, stirred the notion of reform and reconstruction as a means to developing its self-consciousness. The result of reformation and revival was the admission of Hinduism into the pantheon of World Religions.

1. The other representatives speaking for Hinduism, or who were from India, were: Manilal N. D'vivedi, Protab Chunder Mozoomdar (the then leader of the Brahmo Samaj), Vrichand A. Gandhi, Dharmapala, Miss Jeanne Sorabji (a Parsee who on the first day of the Parliament confessed that her father had converted to Christianity) and Professor C.N. Chakravati (ed. Barrows 1893, vol. 1:65-66).

To become a World Religion, a religion must be able to do a few things: it should be able to emancipate itself; it must develop a universal message; it must own a doctrine of salvation that is unequivocal and accessible to its potential adherents; it must be literate; it must possess a collection of sacred scripture that is translatable into different languages; it must have a class of interpreters who can act as missionaries; and, above all, it must transcend cultural boundaries (Fitzgerald 1990:104). And perhaps the greatest challenge for Vivekananda at the Parliament was how to take Hinduism out of its sociocultural moorings and make it universally relevant. At the end of the 19th century all arguments and counterarguments about what constituted authentic Hinduism had been conclusively brought together by Vivekananda in his opening address. For in his proclamation, Hinduism was no longer a discrete system bounded by a singular sociocultural unit for a particular time and place, but it had opened to all who wanted it.

India: Background

The modern period in India is marked by the year 1750, when the British, through a unique series of circumstances, laid claim to the whole of India, establishing their authority over hundreds of millions of people. The burgeoning presence of Europeans and de-industrialisation, amongst other factors, led to the fragmentation and ultimate decline of the Mughal Empire. This pre-modern empire had given South Asia relative peace and stability during a large part of the 17th century, which was important for economic expansion. Under the influence of modern European civilisation, India's own civilisation and culture was at a low point.
Whilst the notion of nation and nationalism – a product of British rule – may have had a positive effect by creating a geographical unity under the auspices of a common administration, cemented by a common language, English, nationalism was not understood within the framework of India's pluralism and diversity. India had many separatist elements, such as conflicts between the religions, ethnic and religious diversity, a variety of dialects and languages, social cleavages, and a general lack of progress towards modernity. India may have been given the political unification it had lacked for centuries, but a cultural esprit de corps and its self-consciousness as a nation were missing. It was in this zeitgeist that a renewal of thinking would appear through a series of reformers. And although these reformers were influenced by Western education and thought, they still clung to the linguistic, cultural and religious endowments of the motherland and would stoke India's pride more than nationalism did – at that time, anyway.

The reformers

It was in Bengal, the seat of the East India Company's power, at the beginning of the 19th century, where the first renovation of Hindu society began with Raja Rammohan Roy (1774-1833). Roy's intention in the sphere of religious reform was to return to the religion of their Indian ancestors. Consequently, he based his authority on the Upanishads and the Brahma Sutras as the authoritative texts of Hinduism. Notable members in the leadership of his society, the Brahmo Samaj, included, amongst others, Debendranath Tagore and Keshab Chandra Sen. Although Roy was born into an orthodox Brahmin family, he was greatly influenced by the ethical systems of Islam and Christianity and the concept of monotheism (Gupta 1942:52). His exposure to these philosophical ideas therefore made him critical of the polytheism in Hinduism; he published these criticisms in a book in Persian. The influence of Unitarian missionaries in Calcutta had an effect on the way in which he interpreted the Upanishads (Smart 1998:406). But what makes Roy's work important was his involvement in the abolition of sati, and in pointing out that polygamy was unscriptural, thereby making it an unacceptable practice (Kruger, Lubbe & Steyn 2009:91). The appeal of Roy's Brahmo Samaj was limited: it did not take hold amongst those who had a deep devotion to deities, nor did his abstract ideas take root amongst the Brahmins, whose primary concern was the preservation of ritual purity (Flood 1996:254). It is easy to look back at Roy's ideas and see them as parochial and limited to a very specific context within Indian society, but it is worth noting that his early revivalism brought out some classical ideas and models that might have otherwise been swept away. And his efforts in exposing and dealing with some of the social ills of that society must be acknowledged. Furthermore, this early revival demonstrates that religious regeneration precipitated the abolition of sati and reduced polygamy.

The second revival began with Swami Dayananda Saraswati, who founded the Arya Samaj in 1875. In contrast to Roy, Dayananda was a guru, a Sanskrit scholar and a sannyasi, who attempted to return to Vedic orthodoxy. Dayananda did not want to be influenced by other religions the way Roy was. For Dayananda, only the Vedic hymns were seen as true texts. The Brahmanas and the Upanishads, which were seen as later accretions to Hindu philosophy and theology, were fallacious and sectarian and therefore to be abandoned (Smart 1998:407).
Having renounced the Brahmins and disposed of their services, Dayananda opened the study of scripture to women and outcastes, thus instituting early notions of an egalitarian society. His principal idea was a strict monotheism and an iconoclasm that did not appeal to many Hindus, who wanted to maintain their colourful practices and multiplicity of gods. This was probably Dayananda's weakness; he did not consider the religious pluralism in India and what this meant for nationhood. Furthermore, the Arya Samaj was (Armstrong 2014):

[A]n extremely reductive form of 'Hinduism' since the Vedic tradition had long been the faith of a small elite and very few people were able to understand ancient Sanskrit. It thus tended to appeal only to the educated classes. (p. 262)

The success of the Samaj, it would appear, was mainly in the Indian diasporas in places such as South Africa and Fiji, where Indians worked as indentured labourers. Despite the Samaj's relatively short reign in India, important precedents were established, which show that these revivals did not take place against a background of complete cultural decay, but that they forged developments 'reaching back to their foundations' (Smart 1998:408), which prepared the way for later achievements that were to come in the late 19th century. Towards the end of the 19th century there appears a culmination of a series of strands from the earlier plots, not breaking with the past but re-formulating trends for a new era in the religious and cultural life of India, and thus beginning the third in the series of revivals. It was through the pluralist perspective of Ramakrishna, which was followed through by his disciple, Swami Vivekananda, that modern Hindu ideology and philosophy would appear.

Shankara's Advaita Vedanta and Ramanuja's Vishistha Advaita

There were numerous philosophical and theological positions that would influence Vivekananda's own practice and teachings, but the two most notable Indian thinkers were Shankara and Ramanuja. I have used Stevenson and Haberman (1998:56-65), Flood (1996:239-243) and Kruger et al. (2009:81-83) in the composition of this broad summary of the two philosophical positions. Shankara was an 8th century Indian philosopher, and his position constitutes one of the most popular justifications for the act of religious renunciation. Shankara was the first Indian philosopher to address some of the very pressing questions regarding Brahman and the relationship between the ultimate reality and the world of multiplicity we experience through our senses. His is a philosophy of unity that devalues all diversity. For Shankara, Brahman is the only truth. The world experienced through our senses is not Brahman and ultimately not real. Ramanuja, a theologian and philosopher, lived possibly between the 11th and 12th centuries. He was the chief interpreter of Vedanta for the south Indian devotional movement known as Shri Vaishnavism. His philosophical system is classified as Vishistha Advaita (Non-Dualism of the Differentiated) as it takes differentiated things to be real and understands them to be attributes of a non-dual reality. His theological position, therefore, was diametrically opposed to Shankara's. For many Vaishnava Hindus, the worship of God in the form of Lord Vishnu is the personal nature of the divine and an ultimate attitude, not an illusion to be transcended.
Whilst both these philosophers accept the Upanishadic postulation that Brahman is the sole reality, for Ramanuja, Brahman means God, who is equipped with multiple qualities. However, Shankara does make a compromise and concedes to the idea of devotion (bhakti) to the personal Lord (Isvara), but only as a lower level of knowledge. Brahman, in its timeless essence as identical with the self, is beyond all predicates and qualities (nirguna), but in its temporal mode as the Lord it has attributes (saguna), and so can be approached through devotion as an object of consciousness. To see the absolute as the Lord is to maintain a distinction between self and absolute, which is to retain a remnant of ignorance that must be finally transcended; if reality is one, all distinction must be illusory. Many of the devotional theologians within Hinduism remark that they do not want to become sugar (Shankara's goal); instead, they want the blissful experience of tasting sugar (Ramanuja's goal). For Shankara, the highest spiritual path consists of a meditative practice designed to lead one to the perceptive realisation that 'I am Brahman'. A salient precondition for this practice, however, is to remove oneself from the quotidian practices of life that affect the senses and move towards world renunciation. Ramanuja, on the other hand, has a more positive view of the world and insists that one should engage in activities according to one's own life situation; renouncing the world is simply another attempt to establish control and does not lead to a state of perfect happiness. Instead, Ramanuja advises complete surrender to God, for only then comes the freedom to enjoy the splendour of the world. So, although renouncer traditions may be found all over India, bhakti devotion in temples and home altars dominates Hindu piety. It was the synthesis of these two philosophical systems, together with the influence of his master, Ramakrishna, that shaped Vivekananda's thinking.

America: Background

The Great Awakening that broke out in the middle of the 18th century brought with it new religious fervour that revitalised American piety, changing it from ritual and ceremony to personal religion. This wave of religious enthusiasm amongst Protestants swept the colonies in the 1730s and 1740s, leaving a permanent impression on American religion. What the Great Awakening shows is that religion, 'instead of being an obstacle to progress and democracy, could be a positive force for modernisation' (Armstrong 2014:243). What is more, it was America's first mass movement, giving ordinary people their first experience of something that extended to the whole country and that would change the tide of history (Armstrong 2014:245). Furthermore, the theological innovations of this period extended the people's concerns to include the issue of slavery whilst challenging the establishment. Towards the end of the 18th century a new wave of revivals, known as the Second Great Awakening, began with the ordinary people, who campaigned for a more 'democratic and Bible-based America' (Armstrong 2014:249). It was an age of mass rallies, huge tent campaigns and Gospel songs that drove crowds to euphoria. Emotions and feelings, as opposed to dry intellectual discourse, became part of the new religious practice.
Whilst the elite looked at the revival as backward, it was in effect 'a Protestant version of the Enlightenment', which gave the revivalists the beginnings of the modern ideals of democracy, equality, freedom of speech and independence, in a characteristic mode of expression that uneducated people could claim as their own (Armstrong 2014:249). The spirit of what began with the downtrodden in America would continue in the form of evangelical Christianity amongst the middle classes. By the mid-19th century, evangelicalism had become the dominant faith in America. In the second half of the 19th century the process of settlement and industrialisation speeded up. Large numbers of immigrants, mainly Catholic and Jewish, descended on predominantly Protestant America (Smart 1998:372). Adjustments had to be made on both sides for these immigrants to be successfully integrated into their new religious and social circumstances and surroundings. This period also witnessed the development of Black religion, enhanced by the migration of many people from the South to the industrial North. Their growing awareness of social and political injustices led to African Christianity becoming charged with the struggle for justice (Smart 1998:373-374). From academic quarters, the 18th and 19th century discourse on religion was ostensibly still concerned with the amassing of conjectural suppositions relating to the origins and development of religion (Masuzawa 2005:12). It was, nevertheless, during the latter half of the 19th century that interest in the religions of the world became extensive in America. Momentum began to establish itself in a number of disciplines, namely theology, philology, history and ethnology, whose works became the basis of comparative studies (Masuzawa 2005:265). From 1867, various chairs in these respective disciplines were created to advance the study of comparative religion at universities across the United States, including, amongst others, Harvard, New York, Cornell, Chicago and Princeton. Some of the pioneering academics who held these positions were the likes of James Freeman Clarke, Samuel Johnson, James Moffat, William Dwight Whitney and George Stephen Goodspeed. These scholars began publishing works that reflected a diversity of approach and a consciousness about the study of religions hitherto disregarded in North America. The themes that dominated their discussions related to the religious zeitgeist of the time, which showed that they were taking other religions seriously and at the same time moving away from notions that Protestantism, Catholicism and Judaism were the only religious discourses that mattered. Whilst official positions were established and academic scholarship thrived under the banner of world religions, what was missing was actual contact with those other religions. For, arguably, the study of religious pluralism cannot be restricted only to the facts or abstract concepts regarding other religious systems and their doctrinal teachings (Singh 2005:64). What was needed, then, was for the praxis and out-workings of these academic pursuits to be witnessed firsthand. However, early 19th century American history had been a time of drastic disruption, which was most discernible in the area of religion. Hutchinson (2003:19) argued that there were unparalleled demographic changes in this era, and the numerical superiority of Protestant Christianity was greatly reduced.
Furthermore, what made the religious changes in that era so deeply disturbing was not merely that the American demographics had changed, but that they differed greatly from what had possibly existed in the colonial period. The colonials, apparently, 'had thought of their society and culture as diverse', but it had on the whole remained 'broadly homogenous for more than two centuries' (Hutchinson 2003:19-20). What the encounter with diverse cultures reflected was that discourses about 'religious diversity' and 'religious pluralism' were changing in American society (Hutchinson 2003:112). The major themes emerging in American religious history by the end of the 19th century related largely to the notions of intolerance, diversity and pluralism, which were resolvable only through contact and dialogue with other religions. And as Barrows (1893:5-6), the chairman of the Parliament of World Religions committee, noted on the importance of the Parliament: '… there has been much dissonance between the various religions and that has alienated many people in history. The modern idea was towards unity'.

Vivekananda goes to America

In order to understand perhaps why Vivekananda's message was so effective, I show why the other speakers at the Parliament may not have had the same appeal by drawing on some aspects of an article presented by one of the Hindu delegates, D'vivedi. But before presenting D'vivedi, I present a brief illustration, drawn from participant observation of Hinduism, that was presented at the Parliament from the perspective of a missionary based in India. Perhaps not all accounts by missionaries in the late 19th century of their experiences and interpretations of their work in the colonies amongst other religions may be viewed as derogatory and disrespectful. Thomas Ebenezer Slater was an exception. As a missionary from the London Missionary Society, he had begun his career in Calcutta and then continued in Madras. At the time of the Parliament he worked in Bangalore. His article, 'Concession to native ideas, having special reference to Hinduism', points out that Hindus (Barrows 1893):

… [B]y instinct and tradition, are the most religious people in the world. They are born religiously, they eat, bathe, shave and write religiously, they die and are cremated or buried religiously, and for years afterward they are devoutly remembered religiously. They will not take a house or a shop or office, they will not go on a journey or engage in any enterprise, without some religious observance. We thus appeal in our missionary effort to a deeply religious nature; we sow the gospel seed in a religious soil. (p. 456)

His understanding of the religious habits of Indian people, correct in its details, and his appraisal of Hinduism's plurality, inclusiveness, heterogeneous nature, diversity and wide metaphysical philosophies show that he went deeper than simply casting aspersions on what he observed. The religious paradigms of institutionalised religion, homogeneity and community, a single definitive religious book and the notion of a monotheistic God, amongst other things, are highly unsatisfactory in the study of Hinduism, especially if it is viewed from the Christian and Western categories within which we study religion. Perhaps what Slater finds difficult is the idea that a Hindu can be a Hindu regardless of whether her personal views lean towards monotheism, monism, polytheism or atheism.
And hence his observation of a foreign system that betrays those inherited categories he is familiar with (Barrows 1893):

[W]hat is styled as 'Hinduism' is a vague eclecticism, the sum total of several shades of belief, of divergent systems, of various types and characters in the outward life, each of which at one time or another calls itself Hinduism, but which bears little resemblance to other beliefs. Every phase of religious thought and philosophic speculation has been represented in India. Some of the Hindu doctrines are theistic, some atheistic and materialistic, others pantheistic – the extreme development of idealism. Some of the sects hold that salvation is obtained by practicing austerities and by self-devotion and prayer; some that faith and love (bhakti) form the ruling principle; others that sacrificial observances are the only means. Some teach the doctrine of predestination: others that of free grace. (p. 456)

The Reverend Slater admits that

[I]t is hard for foreigners to understand the habits of thought and life that prevail in a strange country, as well as all the changes and sacrifices that conversion entails; and, with our brusque, matter-of-fact Western instincts, and our lack of spiritual philosophic insight, we too often go forth denouncing the traditions and worship of the people, and, in so doing, are apt, with our heavy heels, to trample on beliefs and sentiments that have a deep and sacred root. (Barrows 1893:456-457)

Note his description of the foreign land as 'strange', emphasising that the phenomena he had seen were odd and peculiar. Despite the fact that Slater is trying to convert people, he has a deep respect for their beliefs and is critical of the westerner's too easy dismissal of them. There is a little irony here: given the fact that he was a missionary, he still does not seem to hold the average condescending view of the 'other'. Regardless, the Reverend Slater returns to his belief and concludes that the purpose of religion is found in the Bible alone.

Manilal N. D'vivedi was a Justice of the Peace of the town of Nadiad in Maharashtra and a member of the Philosophical Society of Bombay, who also addressed the Parliament on that occasion (Barrows 1893:65-66). D'vivedi's opening strategy took the form of addressing the Biblical religions and their irrational and unfounded use of a limited chronology in their histories and revelations, whilst showing that for the Hindu, time is not measured in the same way except through the cycles of yugas. According to Smart (1998:353), the use of history in a modern manner to investigate the past, rather than just tell a story, inevitably began to raise questions about the historicity of the Bible, whereas the formation of evolutionary theory was amongst the factors that brought doubt to the understanding of traditional cosmology. This is what D'vivedi argued (Barrows 1893):

[W]hereas the Indian religion claims exorbitant antiquity for its teachings, the tendency of Christian writers has been to cramp everything within the narrow period of 6000 years. But for the numerous vagaries and fanciful theories these extremes give birth to, this point has no interest for us at the present moment. With rapid advances made by physical science in the West, numerous testimonies have been unearthed to show the untenableness of biblical chronology, and it would be safe to hold the mind in mental suspense in regard to this matter. (p. 316)
What D'vivedi attempted in his address was to present his audience with details relating to the various texts found in Hinduism and a discussion on semantics. His other strategy, as an attack on the biblical faiths, was to cast Biblical chronology as irrational and unfounded. The main point of his argument was to persuade his western audience, who he assumed were not very well informed about Hinduism, that he was going to correct their misconceptions. And as such his article was long and excessively concerned with details about the various texts, semantics, doctrines and the caste system. And then, to end, the notion of universality in religion is raised: the possibility of articulating the principles of a universal religion without adhering to any religion in particular (Barrows 1893):

[H]aving ascertained the general and particular scope and meaning of Hinduism, I would ask you, gentlemen of this august Parliament, whether there is not in Hinduism material sufficient to allow of its being brought in contact with the other great religions of the world, by subsuming them all under one common genus. In other words, is it not possible to enunciate a few principles of universal religion which every man who professes to be religious must accept apart from his being a Hindu, or a Buddhist, a Mohammedan or a Parsee, a Christian or a Jew? (p. 331)

Then came the '... eloquent monk Vivekananda of Bombay, clad in gorgeous red apparel, his bronzed face surmounted with a huge turban of yellow' (Barrows 1893:62), who addressed the audience. What did Vivekananda (1977, vol. 1:5) say that gave him such a warm reception from the moment he uttered his now famous opening lines: 'Sisters and Brothers of America …'? At first these words appear to be a commonplace greeting to fellow religious travellers. But on closer inspection they reveal much more: in these words are the embodiment of a whole new metaphysic; in it the unity of religion; in it the relationship between the divine and the different parts of the cosmos; in it immanence; in it the notion of transcendence; the collapse of geographical spaces; the strategic effort to re-appropriate history and remove the dualistic 'us' versus 'them'. In these words are the seeds of the making of Hinduism as a world religion. Drawing on Fitzgerald's (1990) dimensions of what constitutes a world religion, mentioned above, I show how Vivekananda presents a Hinduism that addresses all the major concerns regarding Hinduism's universal relevance, its doctrine of salvation and its sacred texts, in a way that allows it to transcend all cultural boundaries. Amongst various religious groups, to call someone a 'sister' or 'brother' means to address that person as a 'fellow believer' or fellow religious traveller. There is an element of family and kinship insofar as the religion of a group means that it holds similar views, which are no doubt attached to the same God. Vivekananda's address would suggest a connection between the speaker and the audience that invariably joins them in a way that is suggestive of inclusivity and solidarity. And so, when Vivekananda addressed his audience as sisters and brothers of America, he had in fact inaugurated his notion of universal acceptance. What Vivekananda was announcing at this point was oneness and unity – the core of his metaphysical beliefs.
And if Vivekananda, like his master, believed that the nature of God in Islam and Christianity was similar, then of course the people who were gathered at this august assembly were indeed his sisters and brothers. This then does not only suggest the unity of all religions, but that all religions are accepted as true (Vivekananda 1977, vol. 1:3).

On the nature of the soul, reincarnation and general Hindu soteriology, Vivekananda (1977, vol. 1) refrains from delving into abstract concepts that are abstruse, in the way D'vivedi had done in his presentation; rather, he offers the explanation through the use of metaphor, which comes across with the gentleness of a mystical master:

[W]ell, then, the human soul is eternal and immortal, perfect and infinite, and death means only a change of centre from one body to another. The present is determined by our past actions, and the future by the present. The soul will go on evolving up or reverting back from birth to birth and death to death. But here is another question: Is man a tiny boat in a tempest, raised one moment on the foamy crest of a billow and dashed down into a yawning chasm the next, rolling to and fro at the mercy of good and bad actions – a powerless, helpless wreck in an ever-raging, ever-rushing, uncompromising current of cause and effect … (p. 10)

2. For a more nuanced explanation of how Vivekananda used metaphor, analogy and simile to illustrate metaphysical theories, see Suren Naicker's 2016 article.

Vivekananda offers the Vedas as the foundational text of the Hindus, not in a way that is typical of what is to be expected by the biblical traditions – or Islam for that matter – but as a point of centring for Hindus. It betrays the classic idea of what a religious text should be; that is, it has no beginning and no end, presenting a God that is ever-present in the evolution of creation (Vivekananda 1977, vol. 1):

[T]he Hindus have received their religion through revelation, the Vedas. They hold that the Vedas are without beginning and without end. It may sound ludicrous to this audience, how a book can be without beginning or end. But by Vedas no books are meant. They mean the accumulated treasury of spiritual laws discovered by different persons in different times. Just as the law of gravitation existed before its discovery, and would exist if all humanity forgot it, so it is with the laws that govern the spiritual world. (p. 6)

Conclusion

I set out to show that during the latter half of the 19th century, India was ready to take on the challenges of regaining its identity in a colonially demarcated space. I have shown how the reforms initiated by Raja Rammohan Roy and Swami Dayananda, although limited in their appeal, nevertheless, in reaching back to the past, brought out ancient classical models that might otherwise have been swept away, and established important precedents. It was Vivekananda, I have argued, who, having been influenced by the pluralist perspectives and exclusivist philosophy of his master, as well as the philosophical and theological positions of Shankara and Ramanuja, produced a new metaphysic of Hinduism, which was a success both in India and abroad. For India, it meant the restoration of its national pride and unity and the acceptance of its religion as substantive and true. For the West, which was also at that time ready to receive the seed of
Hinduism, it meant an offering of Hinduism that was universal because it could be lifted out of its very particular sociocultural moorings and take its place amongst all people as a world religion; no longer was Hinduism confined to a particular place for a particular people. Furthermore, Vivekananda's theological posture that all religions lead to the same goal was not only a unique form of universalism, but one that would include all other religions as well. This was the breadth of his vision. The lasting effects of Vivekananda's reappraised Hinduism, which he presented to the World Parliament of Religions in 1893 to such powerful effect, paved the way for the resurgence of religion and spirituality that prevailed during the latter half of the 20th century, beginning with the 1960s counterculture. More research may be attempted in this direction.
On the structure of (claw,bull)-free graphs

In this research, we determine the structure of (claw, bull)-free graphs. We show that every connected (claw, bull)-free graph is either an expansion of a path, an expansion of a cycle, or the complement of a triangle-free graph; where an expansion of a graph $G$ is obtained by replacing its vertices with disjoint cliques and adding all edges between cliques corresponding to adjacent vertices of $G$. This result also reveals facts about the structure of triangle-free graphs, which might be of independent interest.

Introduction

The structure of graphs with some given forbidden subgraphs is well studied, and has quickly gained several applications in graph theory and in theoretical computer science. For some of the known results in this field see [3] and [4]. In this paper, we study the structure of (claw, bull)-free graphs. A graph is a claw if it is isomorphic to K_{1,3}, and a bull if it can be obtained from a triangle by adding two pendant edges at two different vertices (Figure 1).

Notation. Let X and Y be disjoint subsets of the vertex set of a graph G. Then we write X ⇔_G Y (or simply X ⇔ Y if the graph G is understood from the context) to mean that every vertex in X is adjacent to every vertex in Y. We also denote by N_G(X) (or N(X), if G is understood) the open neighborhood of X, defined by N_G(X) = {y ∈ V(G) \ X : xy ∈ E(G) for some x ∈ X}.

The main result in this paper is as follows:

Theorem 1.1. A connected graph G is a (claw, bull)-free graph if and only if it belongs to one of the following (disjoint) classes of graphs:
• the class of graphs which are expansions of paths of length at least four,
• the class of graphs which are expansions of cycles of length at least six,
• the class of connected graphs which are complements of triangle-free graphs.

Since the complement of a bull is still a bull, and both the bull and the complement of a claw contain a triangle, the complements of triangle-free graphs are also (claw, bull)-free. As a corollary of Theorem 1.1, these two classes are almost the same.

Corollary 1.2. The class of triangle-free graphs is the union of the class of complete bipartite graphs and the class of complements of all graphs G where G is a connected (claw, bull)-free graph which is not an expansion of a path of length at least 4 or an expansion of a cycle of length at least 6.

In the following three sections we consider three sub-classes of (claw, bull)-free graphs based on the length of a longest cycle. Finally, combining the results of these sections, we prove Theorem 1.1. We will use standard definitions and notation for graphs as given in [2].

Notation. Given a graph G, we define ℓ(G) as the length of a longest induced cycle in G.

This research was inspired by the following result on the structure of (claw, bull)-free graphs, obtained in a study of the game of cops and robbers [1]:

Lemma 1.3. Let u_0 and u_1 be two adjacent vertices in a (claw, bull)-free graph G, and let U be the set of neighbors of u_0 in G − u_1. Then the component H of u_0 in G − U is an expansion of a path whose bags are the successive neighborhoods of u_0; in other words, with N_0 = {u_0} and N_i being the ith neighborhood of u_0 in H for each positive integer i, each N_i is a clique and we have N_i ⇔ N_{i−1} for each i ≥ 1.

Indeed, we shall use Lemma 1.3 to show that the sub-class of (claw, bull)-free graphs under consideration in Section 4 consists of expansions of paths.

2 The case ℓ(G) ≥ 6.

Lemma 2.1. Let G be a (claw, bull)-free graph, C an induced cycle of length k ≥ 4 and x ∈ N(C). Then N(x) contains two consecutive vertices of C. Moreover, if k ≥ 5 then N(x) contains three consecutive vertices of C.

Proof.
Let V(C) = {v_1, ..., v_k} and suppose xv_1 ∈ E(G). Since G is claw-free, we must have xv_2 ∈ E(G) or xv_k ∈ E(G), establishing the first claim. Suppose, without loss of generality, that xv_2 ∈ E(G). Then, in case k ≥ 5, one must have xv_3 ∈ E(G) or xv_k ∈ E(G), for otherwise G[{v_k, v_1, v_2, v_3, x}] would be a bull (see Figure 2).

Proof. Let V(C) = {v_1, v_2, ..., v_k}. By Lemma 2.2 we know that N[C] = V(G) and every vertex outside of C has at least three neighbours in C. For the rest of the proof, set N_x := N(x) ∩ V(C) for each x ∈ V(G). (Figure: the case a > 2; the case c > b + 1 is similar.) …; thereby, according to Lemma 2.1, N_y consists of three consecutive vertices of C. Hence, by Claim 1, we have N_y = N_x ∪ {x} and, in particular, xy ∈ E(G). Hence, we may assume x, y ∈ V(G) \ V(C). Suppose, contrary to the claim, that xy ∉ E(G). … would be a bull unless xy ∈ E(G). In this case, N_x and N_y are the same set, say … . Furthermore, in light of Claim 2 it follows that …, from which it follows that G is an expansion of C, as desired. … would be a claw unless xy ∈ E(G).

Proof. Let I be a largest independent set in G with |I| ≥ 3. According to Lemmas 2.1 and 2.2, every vertex x ∈ I \ V(C) is adjacent to two consecutive vertices of C. Hence, I ∩ V(C) has to consist of two consecutive vertices of C, a contradiction. … Let x, y, z be distinct vertices in I. Since G is claw-free, no vertex of C is adjacent to all three of x, y, z. Hence, by the pigeonhole principle and Lemmas 2.1 and 2.2, we may assume v_3 ∉ N(x) and v_4 ∉ N(x), which imply xv_1, xv_2 ∈ E(G). Furthermore, we may assume v_1 ∉ N(y) (see Figure 8). … Every vertex x ∈ I \ V(C) is adjacent to three consecutive vertices of C. Hence, as in Case 1.1, I has to contain two consecutive vertices of C, a contradiction. Let x, y, z be distinct vertices in I. Since G is claw-free, no vertex of C is adjacent to all three of x, y, z. Hence, by the pigeonhole principle and Lemmas 2.1 and 2.2, we may assume v_4 ∉ N(x) and v_5 ∉ N(x), which imply xv_1, xv_2, xv_3 ∈ E(G). Furthermore, we may assume v_1 ∉ N(y); thereby yv_3, yv_4 ∈ E(G). But then we must have yv_5 ∈ E(G), for otherwise G[{v_3, v_4, v_5, x, y}] would be a bull. If v_1, v_5 ∈ N(z) then G[{v_1, v_5, x, y, z}] would be a bull, and if only one of v_1, v_5 is in N(z) then G[{v_1, v_5, x, z}] or G[{v_1, v_5, y, z}] would be a claw. Hence, v_1 ∉ N(z) and v_5 ∉ N(z); thereby v_2, v_3, v_4 ∈ N(z). But then G[{v_3, v_4, v_5, x, z}] would be a bull, a contradiction.

Proof. Let {α_1, α_2, α_3} be an independent set of vertices in G. Since diam(G) = 2, for each i ∈ [3] there is a common neighbor w_i ∈ V(G) of the α_j's for j ∈ [3] \ {i}. Moreover, for each i ∈ [3] we have w_iα_i ∉ E(G), for otherwise G[{α_1, α_2, α_3} ∪ {w_i}] would be a claw. We shall show that the 6-cycle C: α_1 w_3 α_2 w_1 α_3 w_2 α_1 is induced; thereby ℓ(G) ≥ 6. To this end, suppose, on the contrary, that C has a chord. As such, without loss of generality we may assume w_2w_3 ∈ E(G). But then G[{α_1, α_2, α_3, w_2, w_3}] will be a bull, a contradiction. Hence, C is an induced cycle, as desired.

The following proposition will be used on multiple occasions in the proof of Lemma 4.3. The following lemma is the main result of this subsection.

Proof. Let k = diam(G). According to Lemma 4.1 we have k ≥ 3. Let v_0, v_k ∈ V(G) be such that d_G(v_0, v_k) = k, and let P = v_0, ..., v_k be a geodesic path between them. Moreover, let U = N_G(v_0) \ {v_1}, set H = G − U and define the N_i's as in Lemma 1.3.
Moreover, let A = {α_1, α_2, α_3} be an independent set of three distinct vertices.

Claim 1. No vertex in U is adjacent to v_3 or to a vertex in any N_i with 3 < i ≤ k. Moreover, a vertex of U adjacent to a vertex in some N_i is adjacent to every vertex in every N_j with j < i.

Proof of Claim 1. If the first part does not hold, then one has d_G(v_0, v_k) ≤ 2 + (k − 3) < k, a contradiction. As for the second part of the claim, consider a vertex u ∈ U which is adjacent to a vertex w_i ∈ N_i, and for each j ∈ {0, ..., i − 1} choose a vertex w_j ∈ N_j. Then, by the definition of the N_j's, w_0, w_1, ..., w_i is an induced path. Since uw_0 = uv_0 ∈ E(G) and uw_i ∈ E(G), by Proposition 4.2 it follows that every uw_j is an edge of G. This establishes the second part of the claim. (Claim 1)

Proof of Claim 2. It suffices to show that N_{k+1} = ∅. To this end, by way of contradiction, suppose N_{k+1} ≠ ∅ and choose a vertex w_{k+1} ∈ N_{k+1}. Let Q be a geodesic path in G from w_{k+1} to v_0. Considering the fact that d_H(w_{k+1}, v_0) = k + 1 > k, we conclude that Q must contain exactly one vertex, say u, from U. As such, we must also have uv_0 ∈ E(Q), i.e. uv_0 must be the last edge of Q. Moreover, since V(Q) \ {u} ⊆ V(H), every vertex in V(Q) \ {u} must be in some N_j. Suppose the vertex of Q preceding u is in N_i and call it w_i.

Case II: i ≤ k. Q will be of the form w_{k+1}, w_k, ..., w_i, u, v_0, where each w_j (j ∈ {i, i + 1, ..., k + 1}) is in N_j. In particular, the length of Q, which is bounded above by the diameter k of G, is k + 3 − i. Hence, i ≥ 3. On the other hand, by Claim 1 we must have i < 4 (since u is not adjacent to v_3 ∈ N_3). Therefore i = 3 and, hence, uv_1, uv_2 ∈ E(G), according to Claim 1. Moreover, we have uw_4 ∉ E(G) by Claim 1, … G[{…, w_4, u}] will be a bull, a contradiction. (Claim 2)

Proof of Claim 3. Let R be the set of paths of the shortest length from a vertex in W to v_k. Note that every path in R has at least one vertex in common with U, for otherwise w would be in N_1 ∪ … ∪ N_k, a contradiction. Choose R ∈ R such that |V(R) ∩ U| is minimum. Furthermore, let w be the initial vertex of R and u the last vertex of R which is in U. Observe that every vertex of R that follows u is in some N_j with j ∈ [k] and, according to Claim 1, the immediate successor of u along R is in N_1 ∪ N_2 ∪ N_3. Let the latter be w_i ∈ N_i. Then we must have (1) …, where each w_j is in N_j and w_3 ≠ v_3. (Recall that uv_3 ∉ E(G); thereby, in case i = 3 we must have w_3 ≠ v_3.) As a result, the length of R(u, v_k) is max{k − i + 1, 2}. Then, from the facts that
• R has at least one edge more than R(u, v_k),
• the length of R is bounded above by the diameter k, and
• …,
consequently, if i = 2 or i = k = 3 then we must have wu ∈ E(G) (for otherwise the length of R would be greater than k); hence G[{w, u, v_0, v_i}] would be a claw, a contradiction. Also, G[{w, u, v_0, v_i}] would be a claw if i = 3, k > 3 and wu ∈ E(G). Hence, the only case to examine is when i = 3 < k and wu ∉ E(G). As such, that R has length ≤ k implies … . Note that we must have u' ∈ U, for otherwise R(u', v_k) would be in R, contradicting the choice of R as a path of the shortest length in R. (Figure 14: ruling out the case i = 3 < k and wu ∉ E(G) in the proof of Claim 3, Lemma 4.3. With R(w, w_i) = w, u', u, w_3, the graph G[{u', u, v_2, v_4, w_3}] will be a bull.) Likewise, we must have u'w_3 ∉ E(G), for otherwise wu' + u'w_3 + R(w_3, v_k) would be a path in R yet shorter than R.
Furthermore, we must have u'v_2 ∉ E(G), for otherwise the path R' := wu' + u'v_2 + v_2w_3 + R(w_3, v_k) would have the same length as R, implying R' ∈ R with |V(R') ∩ U| < |V(R) ∩ U|, contradicting the choice of R as an element of R with minimum size intersection with U. But then G[{u', u, v_2, v_4, w_3}] will be a bull, a contradiction. (Claim 3)

Claim 4. Let U' ⊆ U be such that any two vertices in U' have a common neighbor in N_2. Then U' is a clique.

Proof of Claim 4. According to Claim 1, no vertex in U is adjacent to v_3. Hence, for any pair x, y of distinct vertices in U' with xy ∉ E(G), and for every common neighbor w_2 ∈ N_2 of x and y, the graph G[{x, y, w_2, v_3}] is a claw. Therefore, U' must be a clique. (Claim 4)

Proof of Claim 5. Consider any vertex w_3 ∈ N_G(u) ∩ N_3. According to Claim 1 we have …; … would be a bull, a contradiction. Hence, we must have diam(G) = 3. (Claim 5)

Claim 6. U ⇔ {v_1}.

Proof of Claim 6. Let u ∈ U be such that uv_1 ∉ E(G). Then, by Claim 1, u is adjacent to no vertex in an N_i with i > 0; in other words, we have …

Let Q be the set of paths of the shortest length from u to v_k. Note that every path Q ∈ Q has at least two vertices in U (one of which is of course u), for otherwise one would have uv_0 ∈ E(Q), implying that l(Q) = l(Q(v_0, v_k)) + 1 > k, a contradiction. Hence, …

Choose Q' ∈ Q such that |V(Q') ∩ U| is minimum, and let u' be the last vertex of Q' which is in U. Note that …, where the second inequality follows from the fact that u'v_3 ∉ E(G). Note that u' must be adjacent to some vertex w_2 ∈ N_2, for otherwise one would have l(Q') > k + 1, a contradiction. As such, we must have uu' ∉ E(G), for otherwise, for every w_2 ∈ N_2 ∩ N_G(u'), the graph G[{u, u', v_1, v_3, w_2}] would be a bull. Thus, according to Claim 5, we must have k > 3. Moreover, …, and u' is followed by a vertex w_3 ∈ N_3 along Q'. Note that …, for otherwise the path from u to v_k obtained by augmenting the path u, u', w_3 to Q'(w_3, v_k) would be shorter than Q', a contradiction. Moreover, as such, we must have … (Figure 16: ruling out the case |V(Q') ∩ U| ≥ 2 in the proof of Claim 6, Lemma 4.3. By (8) and (9) the graph G[{u, u', v_2, v_4, w_3}] will be a bull.), for otherwise the path Q'' obtained by augmenting the path u, u', v_2, w_3 to Q'(w_3, v_k) would have the same length as Q', implying Q'' ∈ Q with |V(Q'') ∩ U| < |V(Q') ∩ U|, contradicting the choice of Q'. Finally, as shown in Figure 16, G[{u, u', v_2, v_4, w_3}] will be a bull, a contradiction. Hence, U ⇔ {v_1}, as desired. (Claim 6)

Claim 7. U is a clique.

Proof of Claim 7. Suppose, contrary to the claim, that x, y are distinct vertices in U such that xy ∉ E(G). By Claim 6 we have xv_1, yv_1 ∈ E(G). Moreover, we have xv_2 ∈ E(G) or yv_2 ∈ E(G), for otherwise G[{x, y, v_1, v_2}] would be a claw. In addition, according to Claim 4, v_2 cannot be adjacent to both x and y. Hence, we may assume xv_2 ∈ E(G) and yv_2 ∉ E(G). But then G[{x, y, v_1, v_2, v_3}] would be a bull, a contradiction. Hence, U is a clique. (Claim 7)

(a) Let I be a largest independent set in G. By Claim 7 we have |I ∩ U| ≤ 1. Hence, by Lemma 1.3, Claim 3 and the fact that |I| ≥ 3, we must have k ≥ 4, as desired.

(b) As k ≥ 4 and according to Claims 1 and 5, no vertex in U is adjacent to a vertex in any N_i with i ≥ 3. Note that, by Claim 6, we have uv_1 ∈ E(G) for every u ∈ U. We shall show that every vertex in U is either adjacent to every vertex in N_2 or non-adjacent to every vertex in N_2. To this end, by way of contradiction, let there be u ∈ U and s_2, t_2 ∈ N_2 such that us_2 ∈ E(G) and ut_2 ∉ E(G).
Then G[{s_2, t_2, u, v_3, v_4}] will be a bull, a contradiction. Therefore, U is the disjoint union of the sets …, showing that V_0 ∩ V_1 = ∅. (Figure: if u ∈ U and s_2, t_2 ∈ N_2 with us_2 ∈ E(G) and ut_2 ∉ E(G), then G[{s_2, t_2, u, v_3, v_4}] will be a bull.)
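The three classes in Theorem 1.1 are easy to experiment with computationally. The sketch below is an illustration under stated assumptions rather than anything from the paper: the function names are ours, the expansion is built exactly as defined in the introduction, and (claw, bull)-freeness is verified by brute force over induced subgraphs, which is feasible only for small graphs.

```python
import itertools
import networkx as nx

CLAW = nx.star_graph(3)                                    # K_{1,3}
BULL = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (1, 4)])  # triangle plus two pendant edges

def expansion(G, sizes):
    """Expansion of G: replace each vertex v by a clique on sizes[v] vertices
    and join every pair of cliques whose originals are adjacent in G."""
    H = nx.Graph()
    bags = {v: [(v, i) for i in range(sizes[v])] for v in G}
    for v in G:
        H.add_nodes_from(bags[v])
        H.add_edges_from(itertools.combinations(bags[v], 2))   # each bag is a clique
    for u, v in G.edges:
        H.add_edges_from(itertools.product(bags[u], bags[v]))  # bag <=> bag
    return H

def has_induced(G, F):
    """Brute-force test for an induced copy of F in G."""
    k = F.number_of_nodes()
    return any(nx.is_isomorphic(G.subgraph(S), F)
               for S in itertools.combinations(G.nodes, k))

# An expansion of a path of length 4 (first class of Theorem 1.1).
P = expansion(nx.path_graph(5), {v: v % 2 + 1 for v in range(5)})
assert not has_induced(P, CLAW) and not has_induced(P, BULL)

# The complement of a triangle-free graph (third class); C5 is self-complementary.
T = nx.complement(nx.cycle_graph(5))
assert not has_induced(T, CLAW) and not has_induced(T, BULL)
```

Checks of this kind are no substitute for the proofs, but they make the coverage and disjointness of the theorem's classes easy to probe on small examples.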
Experimental Oral Transmission of Chronic Wasting Disease to Reindeer (Rangifer tarandus tarandus)

Chronic wasting disease (CWD), a transmissible spongiform encephalopathy of cervids, remains prevalent in North American elk, white-tailed deer and mule deer. A natural case of CWD in reindeer (Rangifer tarandus tarandus) has not been reported despite potential habitat overlap with CWD-infected deer or elk herds. This study investigates the experimental transmission of CWD from elk or white-tailed deer to reindeer by the oral route of inoculation. Ante-mortem testing of the three reindeer exposed to CWD from white-tailed deer identified the accumulation of pathological PrP (PrPCWD) in the recto-anal mucosa associated lymphoid tissue (RAMALT) of two reindeer at 13.4 months post-inoculation. Terminal CWD occurred in the two RAMALT-positive reindeer at 18.5 and 20 months post-inoculation, while one other reindeer in the white-tailed deer CWD inoculum group and none of the 3 reindeer exposed to elk CWD developed disease. Tissue distribution analysis of PrPCWD in CWD-affected reindeer revealed widespread deposition in central and peripheral nervous systems, lymphoreticular tissues, the gastrointestinal tract, neuroendocrine tissues and cardiac muscle. Analysis of prion protein gene (PRNP) sequences in the 6 reindeer identified polymorphisms at residues 2 (V/M), 129 (G/S), 138 (S/N) and 169 (V/M). These findings demonstrate that (i) a sub-population of reindeer are susceptible to CWD by oral inoculation, implicating the potential for transmission to other Rangifer species, and (ii) certain reindeer PRNP polymorphisms may be protective against CWD infection.

Introduction

Chronic wasting disease (CWD) is an invariably fatal neurodegenerative disease of cervids belonging to the transmissible spongiform encephalopathy (TSE) group of disorders, which include Creutzfeldt-Jakob disease (CJD) in humans, bovine spongiform encephalopathy (BSE) in cattle and scrapie in sheep and goats. Common among TSEs is a pathogenesis contingent on the conversion of a normal host-encoded prion protein (PrPC) to an abnormal disease-associated isoform (PrPCWD), which accumulates in the nervous system and other tissues, resulting in the progressive manifestation of clinical disease. CWD is the only TSE maintained in free-ranging wildlife, and the relatively efficient horizontal transmissibility of CWD between conspecific cervids presumably relates to the excretion of infectivity in saliva, urine and feces [1,2] as well as its resiliency in contaminated environments [3,4]. The transmissibility of CWD from cervids to domestic livestock, humans and other interacting wildlife species (e.g. carnivores) remains uncertain, although current evidence suggests a considerable species barrier markedly impedes, if not prevents, such transmission under natural circumstances. Since the identification of CWD in captive deer in Colorado in 1967 [5], it has been described in captive and wild cervid populations in several US states, three Canadian provinces and Korea [6,7,8]. Naturally affected cervid species include mule deer (Odocoileus hemionus), white-tailed deer (Odocoileus virginianus), Rocky Mountain elk (Cervus elaphus nelsoni) and moose (Alces alces) [7]. Experimentally, CWD has been orally transmitted to red deer (Cervus elaphus elaphus) [9], with similar clinical and pathological findings to CWD in other cervids. Chronic wasting disease in reindeer (Rangifer tarandus tarandus), or closely related caribou (e.g.
Rangifer tarandus caribou, Rangifer tarandus granti, Rangifer tarandus groenlandicus), has not yet been reported despite the potential overlap in natural host ranges with CWD-susceptible cervids in North America. Determining transmissibility of CWD to reindeer and caribou is of particular importance given the cultural significance of these species to the aboriginal peoples of northern North America and Eurasia, the critically reduced populations of many reindeer and caribou herds, and the potential for some highly migratory herds to further disseminate CWD. Here we investigate the oral transmission of CWD from elk (CWDELK) or white-tailed deer (CWDWTD) to reindeer and describe associated genetic, clinical and pathological findings.

Antemortem Testing

Animals were subjected to recto-anal mucosal lymphoid tissue (RAMALT) biopsy at various time points after inoculation to determine if they had been infected with CWD. Two of the 3 reindeer inoculated with CWDWTD (Reindeer 12 and 47) showed marked accumulations of PrPCWD in RAMALT germinal centers at 13.4 and 16.2 months post-inoculation (mpi) (Figure 1, Table 1). Testing of the biopsy material from month 16.2 by western blot and ELISA confirmed the presence of PrPCWD in the same two animals, with ELISA optical density (OD) values of 1.519 and 1.484 (negative OD cut-off = 0.19). The third reindeer in the CWDWTD inoculum group (Reindeer 17) was negative for PrPCWD in all RAMALT samples by ELISA, western blot and IHC, with the last sampling conducted at 25 mpi. Similarly, none of the reindeer inoculated with CWDELK (Reindeer 1, 2 and 60) showed detectable PrPCWD in RAMALT biopsies at 7.5, 12, 17.5 or 54.8 mpi.

Clinical Signs

Beginning around 17.7 to 18.2 mpi, the two RAMALT-positive reindeer inoculated with CWDWTD (Reindeer 12 and 47) began to show subtle weight loss and an uneven hair coat. After briefly displaying ataxia, excessive salivation and grinding of the teeth, both animals developed terminal CWD at 18.5 and 20 mpi. Other than markedly reduced subcutaneous, abdominal and thoracic adipose stores, gross lesions were not identified at necropsy. The third, RAMALT-negative reindeer (Reindeer 17) inoculated with CWDWTD remains alive and has shown no clinical signs of CWD (currently at 26 mpi). The reindeer receiving the CWDELK inoculum (Reindeer 1, 2 and 60) did not display clinical signs of CWD at any time during the study and were euthanized at 22.6, 45.3 and 60.9 mpi, respectively.

Histopathology and Immunohistochemistry

Histopathological examination of brain tissues from both clinically affected reindeer revealed widespread spongiform pathology characterized by regionally extensive vacuolation of the grey matter neuropil and rare neuronal perikarya. Lesions were most prominent in the medulla, particularly the dorsal vagal nucleus (Figure 1), nucleus of the solitary tract, the reticular formation, cuneate nuclei and nucleus of the spinal tract of the trigeminal nerve. Notable spongiform change was also identified elsewhere in the brainstem, striatum, thalamus, cerebellum and, to a lesser extent, in the cerebral cortex. Localization of PrPCWD by IHC identified accumulations throughout the central and peripheral nervous systems, lymphoreticular tissues and neuroendocrine tissues of the two reindeer displaying clinical signs (Table 2, Figure 1). The tissue distribution and intensity of PrPCWD accretion was generally consistent between both CWD-positive reindeer.
Tissues listed in Table 2 were concurrently tested by a commercially available ELISA kit, and the results were found to corroborate the IHC findings (data not shown). Conversely, PrPCWD-specific immunolabeling was not detected by IHC or ELISA in any of the tissues collected from the 3 non-clinical reindeer receiving the CWDELK inoculum. Consistent with the distribution of spongiform pathology, PrPCWD was distributed throughout the brain, with the highest intensity in the medulla and relatively lower deposition in the cerebral cortex. The most abundant type of PrPCWD accumulation in the central nervous system was fine particulate, accompanied by coarse granular and coalescing deposits in heavily affected regions. Throughout sections at all levels of the spinal cord, particulate and granular PrPCWD accumulations were present in the dorsal, lateral and ventral horns, scattered lightly around white matter and within perikarya of the dorsal root ganglia. In the eye, PrPCWD was most prominent in the outer and inner plexiform layers of the retina, with a scant presence in the ganglion cell layer and optic nerve. In the peripheral nervous system, occasional perineuronal PrPCWD was found in trigeminal and celiac ganglia. Scant, coarse granular PrPCWD was identified along nerve fibers and in association with perikarya of the sympathetic trunk, while sciatic and phrenic nerves did not contain detectable PrPCWD. Occasional granular PrPCWD was present within nerve fascicles of the vagus nerve. Prominent accumulations of PrPCWD were consistently affiliated with neurons of the submucosal and myenteric plexuses throughout the gastrointestinal tract and in small ganglia associated with the submandibular salivary gland and bladder. All tissues of the lymphoreticular system, regardless of their proximity to the gastrointestinal tract, displayed particularly intense PrPCWD accumulations in association with lymphoid follicle germinal centers. The percentage of follicles affected in each lymph node or tonsil was typically between 90 and 100 percent. In sections of kidney from one animal, two glomeruli were found to be affected by mild lymphocytic infiltrates that stained positive for PrPCWD. Similarly, rare lymphoid aggregates within lung tissue were associated with modest PrPCWD deposits. Several neuroendocrine tissues also contained abundant PrPCWD deposits. Coarse granular staining within the pancreas was restricted to the islets of Langerhans, and deposits in the adrenal gland were predominantly confined to the medulla. The pituitary gland contained heavy PrPCWD staining in the pars nervosa and pars intermedia, while rare granular staining was found in association with parafollicular cells of the thyroid gland. Thorough examination of several skeletal muscle groups by IHC did not reveal PrPCWD; however, sections of the ventricular myocardium were found to contain scattered foci of punctate granular staining in cardiomyocytes.

Western Immunoblot

Analysis of medulla oblongata samples from CWDWTD-inoculated reindeer by western immunoblot demonstrated proteinase K-resistant PrP (PrPres) in the 2 clinically affected animals (Figure 2) but not in the 3 animals inoculated with CWDELK (data not shown). The PrPres glycosylation patterns (di-, mono- and unglycosylated bands) observed in the clinical reindeer were comparable in molecular weight and glycoform ratio to those of the original white-tailed deer inoculum.
PRNP Sequence Analysis

Sequencing of the prion protein coding region (PRNP) revealed some variability among the 6 reindeer. The PRNP from reindeer 12 and 47 encoded a PrP amino acid sequence identical to that of mule deer (Figure 3), and both reindeer were susceptible to infection with CWDWTD. In contrast, PRNP from reindeer 2 and 60 were heterozygous at bases 4 (codon 2, V/M), 385 (codon 129, G/S), 413 (codon 138, S/N), and 505 (codon 169, V/M) of PRNP (cervid numbering), and these animals completely resisted infection with CWDELK up to 60 mpi (Table 1). Finally, reindeer 1 and 17 were heterozygous at base 413 (codon 138, S/N) of PRNP. Reindeer 1 was euthanized at 22.6 mpi with no evidence of PrPCWD in any tissue. Reindeer 17 is still alive and without clinical signs at 26 mpi and thus far has no PrPCWD in lymphoid tissue, implicating 138N as a protective polymorphism. All reindeer were methionine homozygous at codon 132 of the cervid prion protein, which corresponds to polymorphic codon 129 in humans.

Discussion

The current study demonstrates the oral transmission of CWD from the brain tissues of infected white-tailed deer to reindeer (Rangifer tarandus tarandus). Antemortem testing identified PrPCWD in two reindeer at 13.4 mpi, and both animals manifested consistent clinical signs and succumbed to disease by 20 months. The course of disease progression and the widespread distribution of PrPCWD in the nervous and lymphoreticular systems are consistent with observations from other cervids naturally or experimentally infected with CWD, including elk, white-tailed deer, mule deer, moose and red deer [9,10,11]. A natural case of CWD in reindeer or caribou has not yet been identified, but given the potential overlap in habitat with infected free-ranging cervid populations and the current findings of oral transmissibility, the potential for natural transmission certainly exists. Cervids with terminal CWD typically display PrPCWD deposition throughout multiple organ systems, progressing from early lymphatic tissue involvement to central and peripheral nervous tissues, followed by accumulation in other tissues including the endocrine system and heart [12,13]. In our study, RAMALT sampling prior to 13.4 mpi was not conducted, precluding an accurate determination of when lymphoid involvement occurred, although PrPCWD has been detected in peripheral lymphoid tissue as early as 42 days after oral inoculation of mule deer fawns [14]. Our observations of PrPCWD-specific staining in sections of pancreas, pituitary, adrenal medulla, lung, kidney, bladder and salivary gland corroborate findings in other CWD-infected cervids [9,12,13], with PrPCWD primarily localized to the lymphoid or neural components of these tissues. The presence of PrPCWD in reindeer excreta such as saliva, urine and feces is anticipated based on studies in deer [1,2] but was not investigated here. Skeletal muscle from CWD-infected mule deer has been shown to contain infectivity [15], and PrPCWD was recently reported in muscle-associated nerve fascicles of white-tailed deer [16]. We were unable to detect PrPCWD in myocytes or associated nerves by IHC or ELISA, consistent with the negative findings of others [9,12,17,18]. Regardless, the widespread distribution of PrPCWD in infected reindeer tissues merits consideration when evaluating the risks emanating from processing or consuming potentially infected animals. None of the reindeer inoculated with CWDELK developed clinical disease or accumulated PrPCWD in any of the tissues tested.
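As an aside, the mapping between the nucleotide positions and codon numbers quoted above (base 4 in codon 2, base 385 in codon 129, base 413 in codon 138, base 505 in codon 169) is simple integer arithmetic, since each codon spans three bases. A minimal sketch (the function name is ours, not from the study):

```python
def codon_of_base(base_position):
    """Return the 1-indexed codon containing a 1-indexed nucleotide position."""
    return (base_position - 1) // 3 + 1

# The polymorphic bases reported above map onto the stated codons:
for base, codon in [(4, 2), (385, 129), (413, 138), (505, 169)]:
    assert codon_of_base(base) == codon
```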
The apparent resistance of reindeer to infection by the CWDELK inoculum may relate to subtle differences in the CWD strains present within the pooled elk and white-tailed deer inocula; however, we think this is unlikely. Differences between the two inocula were not evident through glycosylation profile analysis of PrPres by western blot (data not shown). Further characterization of these inocula in transgenic mice is ongoing. We also considered that this may have been due to a low prion titre in the elk inoculum; however, the same CWD inoculum pool has been used in other studies and was highly infectious for red deer [9] and elk [19]. Alternatively, resistance could be due to the PRNP genotype, in which heterozygous codons occurred at positions 2 (V/M), 129 (G/S), 138 (S/N), and 169 (V/M) in the mature form of PrPC. An important role for PRNP polymorphisms in determining disease susceptibility is also suggested by the finding that one of the three reindeer has not yet succumbed to infection with the CWDWTD inoculum by 26 mpi and showed no detectable PrPCWD in lymphoid tissue at 25 mpi. This possibly resistant reindeer differs from the susceptible reindeer in being heterozygous at a single codon, 138 (S/N). The role of codons 2, 129 and 169 in CWD resistance is unknown. Interestingly, 138N is common to the 4 reindeer that resisted elk or white-tailed deer CWD infection and may offer some protection. Consistent with this possibility, fallow deer were reported to express a PRNP gene encoding 138N and 226E and resisted infection when housed together with CWD-infected mule deer for 7 years [20]. Residue 138N was implicated as the residue possibly protective against CWD infection, since red deer that express PrPC with 226EE are highly susceptible to elk CWD infection [9]. Fallow deer were shown to be susceptible to CWD inoculation by the intracerebral route, with long incubation periods of 4-5 years [21]. Nevertheless, it will be important to determine whether fallow deer resist CWD after an oral challenge. Codon 138 is in the β1-α1 loop region [22]. Although serine and asparagine are similar small, polar residues, serine/asparagine differences at position 173 (elk numbering) of the β2-α2 loop may impact the susceptibility of a species to CWD infection [23,24]. Additionally, crystal structures of β2-α2 loop peptides have shown that 173 (S/N) and 177 (N/T) sequence differences modify how the side chains pack at the β-sheet interface, suggesting that this incompatibility may underlie transmission barriers [25]. Whether serine or asparagine at position 138 has a similar effect on β-sheet packing is not yet known. Would the reindeer that were heterozygous at 2 (V/M), 129 (G/S), 138 (S/N), and 169 (V/M) completely resist CWD infection, or would they require a long incubation period to develop PrPCWD or clinical signs of disease? The absence of any trace of PrPCWD in lymphoid tissue or any other tissues at the end of the experiment (60 mpi) suggests, at minimum, a long delay in developing CWD infection and, at best, complete protection from CWD infection. In North America, the term reindeer describes a domesticated Rangifer tarandus subspecies (Rangifer tarandus tarandus) introduced from Eurasia approximately a century ago, while the term caribou refers to several native wild Rangifer tarandus subspecies. Major caribou subspecies include woodland (Rangifer tarandus caribou) and barren-ground (Rangifer tarandus groenlandicus, Rangifer tarandus granti) caribou [26].
Barren-ground caribou generally exist at higher northern latitudes, in some regions commingling with semi-domesticated reindeer herds, while the range of woodland caribou extends further south into the provinces of Alberta and Saskatchewan, which currently harbour CWD-infected cervids. A study of 95 caribou in northern Quebec failed to identify CWD [27], although CWD has not yet been detected in Canadian provinces east of Saskatchewan. Interbreeding between reindeer and caribou has occurred in captivity and the wild, although general genetic introgression appears to be minimal [28]. Two of the reindeer protected from CWD were heterozygous at 4 codons that were in common with the reported genotypes of Rangifer tarandus granti [29] and differ from Rangifer tarandus tarandus (unpublished, GenBank accession #AY639093.01). It is possible that the two resistant reindeer were derived from interbreeding of two Rangifer tarandus subspecies, potentially captive and free-ranging. It will be crucial to determine the prevalence of the polymorphisms in both free-ranging and captive reindeer populations, particularly considering the possibly protective PRNP residues. Large-scale analysis of PRNP alleles in reindeer has not been reported. Largely consistent with our findings in captive reindeer, Happ et al. [29] recently sequenced PRNP from three free-ranging caribou herds in Alaska and identified polymorphisms at codons 2 (V/M), 129 (G/S), 138 (S/N) and 169 (V/M). Moreover, the most frequent caribou allele across the 3 herds examined was VGSV, which is common to the 2 reindeer we found to be susceptible to CWDWTD. The close interrelatedness of Rangifer species implies that disease susceptibility in one may predict susceptibility in the others, although a single amino acid difference in a key position could confer resistance. The importance of Rangifer species to the culture of aboriginal peoples cannot be overstated, with many components of hunted animals being consumed as food. Although relatively limited in comparison to the elk and deer industries in North America, reindeer and caribou farming does occur, producing consumables such as meat, hides and antler velvet. The human health risks of consuming meat or other products derived from CWD-infected animals remain uncertain, although epidemiological evidence indicates transmission has not yet occurred [30,31,32] and transgenic mouse studies suggest the risk is remote in humans expressing common PRNP sequences [33,34,35]. The finding that CWD can be transmitted to squirrel monkeys by intracranial inoculation [36] raises concern for human transmissibility; however, a study in macaques failed to demonstrate transmission after 70 months [37]. Since prion strains may undergo changes in transmission characteristics following passage through different species, and strain selection pressures can be exerted by host genetic factors during passage within a species [24,38,39,40], caution is warranted when predicting the risks of CWD transmission from reindeer to other species. This is the first evidence of CWD transmission to the subspecies Rangifer tarandus tarandus, implicating the potential for transmission to others in this genus. Current diagnostic tests, including antemortem RAMALT testing, appear capable of detecting CWD in Rangifer species, and increased surveillance would be required to determine if natural transmission has indeed occurred.
Additional studies are ongoing to chart the distribution of infectivity during the course of disease and to determine the influence of PRNP polymorphisms on disease susceptibility.

Ethics Statement

All experimental procedures involving animals were performed in strict accordance with Canadian Council on Animal Care guidelines, in such a manner as to minimize suffering. Protocols were approved and monitored by the Animal Care Committee at the Canadian Food Inspection Agency, Ottawa Laboratory - Fallowfield.

Recipient Animals and Experimental Challenge

The transmissibility of Canadian elk (CWDELK) and white-tailed deer (CWDWTD) CWD inocula was investigated in two groups of three six-month-old reindeer sourced from CWD-free captive herds. Animals were group housed in isolation barns at the Canadian Food Inspection Agency, Ottawa Laboratory - Fallowfield (Ottawa, Ontario). Inocula were prepared as pooled brain homogenates containing 5 g of brain material in 20 ml of physiological saline. The CWDELK inoculum contained material from 12 animals in one captive herd, while the CWDWTD inoculum was derived from 3 positive animals in a different captive herd; mixing of animals between herds had not knowingly occurred. All source animals were clinically affected by CWD and were confirmed positive by ELISA, western blot and IHC. Reindeer were orally inoculated with two 5 g doses of either elk or white-tailed deer brain homogenate on days 0 and 7 of the trial. Each dose was delivered by syringe to the oropharynx, and inoculated animals were monitored daily for evidence of clinical disease.

Genotyping

Genotype analysis was conducted on nucleic acid extracted from live animal blood samples using a purification kit following the manufacturer's protocols (Purelink, Invitrogen). Genomic DNA was used as a template in PCR to amplify the PRNP open reading frame using a high-fidelity polymerase (Platinum Pfx DNA Polymerase, Invitrogen) and the following primers: forward 5'-GGGCATATGATGCTGACACCCTCTTT and reverse 5'-GAGAAAAATGAGGAAAGAGATGAGGAGG. Reaction conditions were as follows: 94°C for 2 min, followed by 35 cycles of denaturation (94°C, 15 sec), annealing (53°C, 30 sec), and extension (68°C, 48 sec). PCR amplicons were gel purified, cloned into a Topo-TA cloning vector (Invitrogen), and sequenced using the T7 primer to obtain sequences from at least 10 clones containing the PRNP gene. Genotyping of reindeer 1 was conducted on antler velvet using a previously described method [41].

Ante-mortem and Post-mortem Tissue Collection

At several points during each study, animals were sedated with a mixture of ketamine and xylazine for blood collection and biopsy of recto-anal mucosa-associated lymphoid tissue (RAMALT). RAMALT biopsies were obtained and evaluated by IHC as previously described [9,42] and were subjected to testing by commercially available TSE detection kits (TeSeE ELISA and TeSeE Western blot, Bio-Rad Laboratories). If animals displayed evidence of discomfort, immobility or terminal CWD, or reached specific time points in the study, they were euthanized with sodium pentobarbital. A broad range of neural and non-neural tissues from all major organ systems was collected (Table 2), and adjacent tissue samples were frozen at -80°C or fixed in 10% neutral buffered formalin.

Immunohistochemical and PET Blot Testing

Formalin-fixed tissues were embedded in paraffin wax, sectioned at 5-µm thickness and stained with hematoxylin and eosin or by IHC as previously described [9].
Briefly, following antigen retrieval by autoclaving in citrate buffer solution (DAKO Target Antigen Retrieval), IHC for PrPCWD was performed on an automated immunostainer (Ventana Medical Systems, Tucson, Arizona, USA) using the monoclonal antibody F99/97.6.1 (VMRD, Pullman, Washington, USA) and an AEC detection kit (Ventana Medical Systems). Positive and negative staining were differentiated based on comparisons with control tissues included in each run. Paraffin-embedded tissue (PET) blotting was conducted as previously described [43], mounting paraffin-embedded sections on a polyvinylidene fluoride (PVDF) membrane (Immobilon-P, Millipore), digesting with proteinase K (250 µg/ml) and staining with F99/97.6.1 (VMRD, Pullman, Washington, USA).

ELISA and Western Blot

Fresh or previously frozen unfixed tissue samples were analysed by ELISA and western blot using commercially available TSE detection kits (TeSeE ELISA and TeSeE Western blot, Bio-Rad Laboratories) according to the manufacturer's instructions and as previously described [9]. The ELISA cutoff value was calculated as the average optical density reading of the negative controls plus a fixed value of 0.09 units. Western blots were run with molecular mass markers and CWD-positive and -negative control elk brain homogenates.
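The ELISA cutoff rule stated above (mean negative-control optical density plus a fixed 0.09 units) translates directly into code. A minimal sketch, assuming hypothetical negative-control readings chosen so that the cutoff reproduces the 0.19 value reported in the Results; only the formula itself comes from the text:

```python
def elisa_cutoff(negative_control_ods, offset=0.09):
    """Cutoff = mean negative-control OD + fixed offset, per the kit protocol described above."""
    return sum(negative_control_ods) / len(negative_control_ods) + offset

neg_controls = [0.09, 0.10, 0.11]    # hypothetical negative-control ODs (mean 0.10)
cutoff = elisa_cutoff(neg_controls)  # 0.10 + 0.09 = 0.19, matching the reported cut-off
for od in (1.519, 1.484, 0.05):      # the first two are the RAMALT ODs from the Results
    print(od, "positive" if od > cutoff else "negative")
```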
Assessment of the Antibacterial Activity of Spilanthes acmella Against Bacteria Associated with Dental Caries and Periodontal Disease: An In-vitro Microbiological Study

Dental caries and periodontal disease are two of the most common oral diseases caused by bacterial infections. Traditional medicine in India has a long history of using plant extracts for dental care. Spilanthes acmella (S. acmella), also known as the "Toothache Plant," is a medicinal plant that has been traditionally used for its medicinal properties but has not been extensively studied for its applicability and use in dentistry. This study aims to investigate the antimicrobial action of S. acmella ethanol extract on Streptococcus mutans (S. mutans) and Lactobacillus fermentum (L. fermentum), which cause dental caries, and Porphyromonas gingivalis (P. gingivalis) and Capnocytophaga gingivalis (C. gingivalis), which cause periodontal infection. The ethanol extract of S. acmella in various dilutions of 10 mg/ml, 20 mg/ml, 40 mg/ml, 80 mg/ml, and 100 mg/ml was tested for its antibacterial activity against the bacteria mentioned above using the agar well diffusion method. Erythromycin 0.125 mg/ml was used as a positive control, whereas distilled water was used as a negative control. The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) were determined by the broth dilution method. The results of this study show that the ethanol extract of S. acmella demonstrated concentration-dependent inhibition of bacterial growth (zones of 13-16 mm diameter), with the highest concentration of 100 mg/ml showing the strongest effect. The findings of this study support the use of S. acmella plant extract in the treatment of dental caries and periodontal infection and suggest that it may be a viable alternative to traditional antimicrobial agents.

INTRODUCTION

The oral cavity harbors around 500 different types of bacteria, making it one of the most diverse and complex microbial floras in the human body. 1 Dental caries and periodontal disease are two of the most common diseases of the oral cavity. The most common microorganisms causing dental caries are S. mutans and L. fermentum. Similarly, periodontal infections are caused by T. denticola and P. gingivalis. The use of an antimicrobial agent to control the bacteria in tooth plaque that cause caries and periodontitis is essential for dental disease prevention. 2 India has a long history of using traditional medicine in the field of dentistry. In the modern period, the use of traditional remedies is expanding into newer areas. Plant extracts continue to be a novel source of structurally significant chemicals that lead to the creation of medicines. 3 Plant extract products are safe, and their use as medication is a viable option for the treatment of patients. 4 The increased recognition of plant products as non-narcotic, free of side effects, and conveniently available at low cost is driving the demand for medicinal plants in both developing and developed countries. 5 S. acmella is renowned for its diverse medicinal attributes, encompassing anti-inflammatory, analgesic, and antimicrobial properties. Extensive research has revealed that the plant harbors a plethora of phytochemicals, such as flavonoids, terpenoids, and alkamides, which account for its therapeutic effects. 6
S. acmella is an important medicinal plant, commonly known as the 'Akarkara plant', with a rich source of therapeutic constituents. 7,8 The whole plant is claimed to possess medicinal properties. 10,11 There are no studies on the efficacy of S. acmella against the microorganisms causing dental caries and periodontal infection. In particular, the potential synergistic effects of S. acmella in combination with traditional dental treatments and its efficacy against specific strains of oral pathogens commonly associated with dental diseases have not been investigated. Additionally, exploring the specific mechanisms of action that make S. acmella effective against oral infections could add depth to this line of research. As a result, the purpose of this research is to investigate the antimicrobial action of S. acmella extract on S. mutans and L. fermentum, which cause dental caries, and P. gingivalis and C. gingivalis, which cause periodontal infection.

MATERIALS AND METHODS

The present study is an in-vitro microbiological assay that was conducted in the Department of Microbiology, School of Life Sciences, JSS Academy of Higher Education and Research, Mysore. Ethical clearance (JSSDCH IEC no: 86/2021) was obtained beforehand from the local ethics committee.

Collection and processing of plant material

Researchers have identified different species within the genus, such as S. oleracea, S. paniculata, and S. radicans, based on morphological characteristics. The plant material was collected and then separated into flowers, leaves, and stems. 11 S. acmella leaves were randomly collected from at least 50 healthy plants from Mysuru district, Karnataka, India, and were provided by the Department of Pharmacognosy, JSS College of Pharmacy, Mysuru (February to May 2022). The voucher specimen is preserved in our laboratory for future reference. The dried leaf material was ground into a coarse powder and extracted with solvents such as 70% v/v ethanol using a Soxhlet apparatus. The liquid extracts were then filtered and evaporated under reduced pressure at 40°C for about 24 hours using a rotary evaporator to obtain a soft mass. This extract of S. acmella was used to create dilutions at a ratio of 3:1 (10 mg/ml, 20 mg/ml, 40 mg/ml, 80 mg/ml, and 100 mg/ml) for further testing.

The type of bacteria used

In this study, specific strains of bacteria were used to assess the antibacterial activity of S. acmella plant extracts. Two strains of aerobic bacteria, Streptococcus mutans (ATCC 25175) and Lactobacillus fermentum (ATCC 14932), which are known to cause dental cavities, were used. Additionally, two strains of anaerobic bacteria, Porphyromonas gingivalis (ATCC 33277) and Capnocytophaga gingivalis (ATCC 33624), which are associated with periodontal infection, were also used. These strains were obtained from the American Type Culture Collection (ATCC) and are well-established, commonly used reference strains in research.
Evaluation of antibacterial activity

The well diffusion procedure and minimum inhibitory concentration (MIC) methods were used to determine the antibiotic susceptibility of the test substance. A standardized inoculum was prepared from bacterial culture (using 0.25 McFarland standards) and spread onto Petri plates (i.e., Mueller-Hinton agar supplemented with 5% sheep blood). Wells were made in the agar plate using a cork borer, and different concentrations of the test substance were added to each well (0.1 ml). All the plates were kept in a refrigerator at 2 to 8°C for 2 hours for effective diffusion of the test compounds and standards. The plates were then incubated at 37°C for 24-48 hours. The diameter of the zone of inhibition or the MIC was measured and recorded. The S. acmella extract was prepared at 5 different dilutions (10 mg/ml, 20 mg/ml, 40 mg/ml, 80 mg/ml, and 100 mg/ml), and erythromycin (Colpaldas Visram and Co. Ltd, India; 0.125 mg/ml) was used as a positive control, while distilled water was used as a negative control. The experiments were performed three times, and the mean diameter of the zone of inhibition was measured and recorded. 11

Statistical analysis

The study data were analyzed using SPSS software version 24.0, with each result being the average of three measurements of inhibition zones in millimeters. Student's t-test was used to determine whether there was a statistically significant difference between the means. Standard errors of the mean were represented by horizontal bars. Differences between the test groups and the control group were considered significant when P < 0.05 (95% confidence level).

RESULTS

The results of this study show that the ethanol extract of S. acmella possesses significant antibacterial activity against S. mutans, L. fermentum, P. gingivalis, and C. gingivalis, which are commonly associated with dental caries and periodontal infections. The extract demonstrated a dose-dependent inhibition of bacterial growth, with the highest concentration of 100 mg/ml showing the strongest effect.

The ethanol extract of S. acmella at a dilution of 10 mg/ml demonstrated inhibition zones of 7 mm, 6 mm, 8 mm, and 7 mm against S. mutans, L. fermentum, P. gingivalis, and C. gingivalis, respectively. Concentrations of 20 mg/ml and 60 mg/ml showed inhibition zones of 10 mm, 10 mm, 12 mm, and 10 mm and 12 mm, 11 mm, 14 mm, and 13 mm, respectively. The highest concentration of 100 mg/ml exhibited the greatest inhibition zones of 14 mm, 13 mm, 16 mm, and 15 mm. Erythromycin was used as the positive control drug.
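The significance testing described under Statistical analysis was done in SPSS; an equivalent Student's t-test can be sketched in Python with SciPy. The triplicate values below are hypothetical placeholders, not the study's raw measurements:

```python
from scipy import stats

# Hypothetical triplicate zone diameters (mm) for one bacterium:
extract_100mg = [14.0, 13.5, 14.5]   # S. acmella extract at 100 mg/ml
water_control = [0.0, 0.0, 0.0]      # negative control showed no inhibition

t_stat, p_value = stats.ttest_ind(extract_100mg, water_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # 95% confidence level, as used in the study
    print("difference between the means is statistically significant")
```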
In this research study, the Kirby-Bauer method was employed to assess the antimicrobial susceptibility of the various bacterial strains. Antibiotic-impregnated discs were placed on agar plates containing bacterial cultures. The formation of clear zones around the discs indicated bacterial inhibition. Measurements of these zones were compared to established standards, revealing the susceptibility of the bacteria to specific antibiotics. The standard breakpoints for erythromycin by the disk diffusion (Kirby-Bauer) method were: sensitive (S), ≥ 23 mm zone diameter; intermediate (I), 15-22 mm zone diameter; and resistant (R), ≤ 14 mm zone diameter. Erythromycin at a concentration of 0.125 mg/ml showed the largest inhibition zones of 22 mm, 20 mm, 24 mm, and 24 mm, respectively, against the same bacterial species. It is important to note that the negative control (distilled water) showed no zone of inhibition for any of the bacteria studied, as shown in the Table and Figure. Antibacterial activity against the mentioned bacterial species increased significantly with the concentration of the S. acmella ethanol extract. Therefore, the results indicate that the ethanol extract of S. acmella has potential as an antimicrobial agent for the treatment of oral conditions caused by these bacteria.

DISCUSSION

The plant Spilanthes acmella (S. acmella) has been traditionally used in Ayurvedic and folk medicine for the treatment of various ailments, including oral health problems. 2,3 However, there have not been many studies investigating the antibacterial activity of S. acmella against the specific bacteria associated with dental caries and periodontal disease. This may be because of a lack of interest in natural remedies, insufficient funding for research on alternative medicine, limited knowledge about the medicinal properties of S. acmella, 12-15 or simply the scarcity of prior research.

Our study explores the potential of S. acmella extract as a natural alternative to traditional antimicrobial agents in the treatment of these bacterial infections. Additionally, the scarcity of past studies on this topic makes it an area of interest for further research.

Our study used the agar well diffusion method to test the S. acmella extract, erythromycin, and distilled water against S. mutans, L. fermentum, P. gingivalis, and C. gingivalis. The results of this study demonstrate that the ethanol extract of S. acmella possesses antibacterial activity against the bacterial species S. mutans, L. fermentum, P. gingivalis, and C. gingivalis, which are commonly associated with dental caries and periodontal infections. The extract showed a dose-dependent inhibition of bacterial growth, with the highest concentration of 100 mg/ml showing the strongest effect. This suggests that S. acmella has potential as an antimicrobial agent for the treatment of these oral conditions. Similar to our study, previous research on the antimicrobial activity of S. acmella by Onoriode O et al. found that the ethanol extract of the plant possessed strong antibacterial activity against S. mutans and L. fermentum, with inhibition zones of 28-29 mm and 24 mm. 16 However, our research investigates the antibacterial activity of S. acmella extract against two additional species, namely P. gingivalis and C. gingivalis, both of which are associated with periodontal problems.
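The erythromycin breakpoints quoted above reduce to a simple three-way threshold rule. A minimal sketch (the function name is ours); applied to the control zones reported in the Results, it classifies the 22 mm and 20 mm zones as intermediate and the 24 mm zones as sensitive:

```python
def classify_erythromycin_zone(diameter_mm):
    """Classify a zone of inhibition using the Kirby-Bauer erythromycin breakpoints quoted above."""
    if diameter_mm >= 23:
        return "S"   # sensitive: >= 23 mm
    if diameter_mm >= 15:
        return "I"   # intermediate: 15-22 mm
    return "R"       # resistant: <= 14 mm

# Erythromycin control zones reported for SM, LB, PG, and CG:
print([classify_erythromycin_zone(d) for d in (22, 20, 24, 24)])  # ['I', 'I', 'S', 'S']
```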
The use of plant extracts as an alternative to traditional antimicrobial agents has been gaining popularity in recent years due to the increasing resistance of bacteria to conventional antibiotics. 17 S. acmella is known to have a wide range of medicinal properties, including anti-inflammatory, analgesic, and antimicrobial effects. 19

Limitations of the study

One limitation of this study is that it tested the antimicrobial activity of S. acmella ethanol extract against only a small number of specific bacteria associated with dental caries and periodontal disease, and it is not clear whether the extract would be effective against other types of bacteria or pathogens. Additionally, the study used only an in-vitro testing method, so further in-vivo studies are needed to confirm the efficacy of S. acmella in treating these conditions in a living organism. The findings are limited to the laboratory setting, and more clinical studies are needed to confirm the results. Further exploration of the uncharted territory of S. acmella's dental applications would not only justify the need for the current study but also pique the interest of the scientific community and potentially spur further research in this promising area.

CONCLUSION

This study has shown that the ethanol extract of S. acmella possesses strong antibacterial activity against the bacteria commonly associated with dental caries and periodontal infections. This supports the traditional use of the plant in the treatment of these oral conditions and suggests that it may be a viable alternative to traditional antimicrobial agents. However, further studies are needed to fully understand its mechanism of action and safety and to evaluate the extract's potential as a therapeutic agent in vivo.

Figure. Effects of Spilanthes acmella extracts at various dilutions, shown as zones of inhibition in mm, on bacteria including Streptococcus mutans (SM), Lactobacillus (LB), Treponema denticola (TD), and Porphyromonas gingivalis (PG).

Table. Effects of Spilanthes acmella extracts at various dilutions, shown as zones of inhibition in mm, on bacteria including Streptococcus mutans (SM), Lactobacillus (LB), Treponema denticola (TD), and Porphyromonas gingivalis (PG).
Role of Natural Killer T Cells in the Development of Obesity and Insulin Resistance: Insights From Recent Progress

Natural killer T (NKT) cells play important roles in adipose tissue inflammation, and thus influence the development of diet-induced obesity and insulin resistance. The interactions between cluster of differentiation (CD)1d and the NKT T cell receptor are thought to be critical in this process, as demonstrated in two NKT cell-deficient mouse models: systemic CD1d gene knockout (KO) and prototypic Jα18 KO mice. The latter lacks some repertoires besides invariant (i)NKT cells due to manipulation of the Jα18 gene segment; therefore, the role of iNKT vs. variant NKT cells must be reinterpreted considering the availability of new Jα18 KO mice. NKT cells have varied roles in the development of obesity; indeed, studies have reported contradictory results depending on the mouse model, diet, and rearing conditions, all of which could affect the microbiome. In this mini-review, we discuss these points considering recent findings from our laboratory and others as well as the role of NKT cells in the development of obesity and insulin resistance based on data obtained from studies on conditional CD1d1 KO and new Jα18 KO mice generated through gene editing.

Keywords: natural killer T cell, cluster of differentiation 1d, adipocyte, lipid, obesity, insulin resistance, adipose tissue inflammation

INTRODUCTION

Obesity as a Chronic Inflammatory Disorder

Inflammation in adipose tissue (AT) is induced by hypertrophy of adipocytes that secrete inflammatory cytokines and chemokines (1) and thus recruit various immune cells such as macrophages, T cells [αβ, γδ, regulatory T cells (Tregs), and natural killer T (NKT) cells], B cells, NK cells, and leukocytes that exist in a steady state in immune organs (2,3). Fat accumulation is a major factor contributing to metainflammation and metabolic dysfunction (1,4). Obesity alters the microenvironment in AT from an anti-inflammatory to a pro-inflammatory state, leading to impaired immune balance (5,6). Visceral (V)AT in the lean state predominantly contains M2 macrophages, eosinophils, and Tregs that suppress inflammation and maintain insulin sensitivity (7,8).
By contrast, VAT in obese individuals has more M1 macrophages, cluster of differentiation (CD)8+ T cells, NK cells, B cells, and neutrophils that enhance inflammation and reduce insulin sensitivity (9)(10)(11)(12)(13). Notably, chronic low-grade inflammation accompanying obesity is implicated in the etiology of lifestyle-related diseases such as atherosclerosis, type 2 diabetes, and various cancers (14).

NKT Cells

Natural killer T cells are a unique T cell subset that recognizes lipid antigen presented by CD1d (15,16). α-Galactosylceramide (α-GalCer) is a prototypical ligand recognized by invariant (i)NKT cells, which harbor an invariant T cell receptor (TCR) α-chain (Vα14-Jα18 in mouse and Vα24-Jα18 in human) (17). Another type of NKT cell, known as variant (v)NKT cells, expresses diverse TCRs that are presumed to recognize various lipid antigens including sulfatide (18). Activated NKT cells secrete large amounts of cytokines that modulate immune balance, implying that they can either enhance or suppress inflammatory and immune responses. NKT cells have been reported to exacerbate, protect against, or have no role in the development of obesity through modulation of AT inflammation (19). Here, we summarize the correlation between the CD1d/NKT cell axis and obesity with a focus on AT inflammation and discuss factors that may contribute to the discrepancies among reports considering recent progress.

OPPOSING FUNCTIONS OF NKT CELLS IN THE DEVELOPMENT OF OBESITY

Many studies have examined whether NKT cells play a role in diet-induced obesity (DIO) and have reported variable results.

NKT Cells as an Aggravator of DIO

Ohmura et al. first demonstrated that iNKT cells induce AT inflammation and glucose intolerance in β2-microglobulin (β2m) knockout (KO) mice fed a high-fat diet (HFD) and treated with the NKT cell stimulator α-GalCer (20). Since β2m KO mice also lack CD8+ T cells, the role of NKT cells in obesity has been examined using CD1d KO mice fed an HFD. However, two subsequent studies showed that NKT cell deficiency is insufficient to protect against or aggravate DIO (21) and that CD1d is important for the modulation of metabolic functions via a non-NKT cell-mediated mechanism (22). By contrast, we showed that CD1d KO mice lacking both iNKT and vNKT cells showed reduced body weight (BW) gain along with improved AT inflammation and insulin resistance (23). Meanwhile, Jα18 KO mice lacking only iNKT cells demonstrated similar pathology to wild-type (WT) mice, suggesting that vNKT cells may contribute to DIO in the absence of iNKT cells (23). Wu et al. reported that iNKT cells responded to lipid excess and produced pro-inflammatory cytokines that promoted AT inflammation and insulin resistance (24).

iNKT vs. vNKT Cells

We investigated whether iNKT cells (24) or vNKT cells (23) contribute to the exacerbation of DIO, since distinct measures must be taken to control either subset.
We speculated that vNKT cells contribute to the development of DIO in the absence of iNKT cells based on the aforementioned results (i.e., no difference in BW between WT and Jα18 KO mice on an HFD) and some additional observations (23): (1) the NK1.1+ TCRβ+ population in AT was activated upon consumption of an HFD and contained more CD8+ but fewer CD4−CD8− subsets in Jα18 KO (referred to hereafter as Jα281 KO) (25) mice, which differed from observations in either WT or CD1d KO mice; (2) WT mice harbored more non-iNKT (=vNKT) cells in AT; and (3) hepatic mononuclear cells from Jα281 KO mice [which are enriched in vNKT cells including CD1d+ antigen-presenting cells (APCs)] transferred insulin resistance to CD1d KO hosts. However, the Jα281 KO strain was shown to exhibit a marked reduction in TCR diversity, which could affect immune responses (26). Four novel Jα18 KO mouse strains were independently generated after that report (26) by deleting only the T-cell receptor alpha joining (Traj)18 locus and leaving the remaining Jα repertoire unperturbed, using novel technologies [clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 nuclease or transcription activator-like effector nuclease] (27-30). New Traj18 KO (referred to hereafter as simply Traj18 KO) mice gained less weight and had heightened sensitivity to insulin compared with WT mice, suggesting that iNKT cells play a pathogenic role in DIO (30). In that study, the mice were fed the same HFD (HFD32; CLEA Japan, Tokyo, Japan) as those in our experiments, and Jα281 KO mice fed this diet showed similar BW gain to WT mice. The interpretation of the results from Traj18 KO mice was that iNKT but not vNKT cells exacerbate the development of DIO. Experiments using Vα14-Jα18 transgenic mice lacking the low-density lipoprotein receptor also demonstrated that the abundance of iNKT cells increased adiposity by inducing metabolic abnormalities and AT inflammation (31). The DIO results from Traj18 KO mice also imply that reduced TCR diversity or the lack of particular T cell subsets in Jα281 KO but not Traj18 KO mice accounts for the discrepancy among reports on the involvement of iNKT vs. vNKT cells. Mucosal-associated invariant T (MAIT) cells that utilize Jα33 may be lost in Jα281 KO mice and may thus affect the development of obesity, as was suggested in studies of human subjects (32,33). However, the actual role of MAIT cells in obesity and their involvement (or that of other T cell subsets) in DIO in Jα281 KO mice require further investigation.

Protective Role of NKT Cells Against Obesity

Some studies have reported that NKT cells play a protective role against obesity. Regulatory cytokines such as interleukin (IL)-4 and IL-10 produced by AT iNKT cells prevented the development of DIO (34,35) and insulin resistance, even in mice fed a low-fat diet (36). IL-13-producing innate immune cells such as type 2 innate lymphoid cells (ILC2s), iNKT cells, and vNKT cells were shown to prevent DIO by suppressing inflammation in AT (37). AT-resident iNKT cells express the transcriptional repressor E4-binding protein (E4BP)4 (also known as nuclear factor, IL-3 regulated) but not promyelocytic leukemia zinc finger protein (PLZF), unlike iNKT cells in other tissues, reflecting their anti-inflammatory phenotype (38); moreover, IL-10-producing iNKT cells (NKT10) are enriched in subcutaneous white (W)AT (39).
Interestingly, an F108Y substitution in TCRβ altered NKT cell development toward an adipose-like phenotype (40) without affecting TCR activation or its ability to bind CD1d-ligand complexes, suggesting that a hydrophobic patch created after TCRα-TCRβ pairing is essential for the development of a distinct NKT cell population (40). iNKT cells with TCRβ F108Y express E4BP4 but not PLZF, similar to AT-resident NKT cells (38). These results suggest that NKT cells in AT constitute a specialized subset and are not regular iNKT cells that localize there as passersby.

Mechanism of Fat Reduction via Thermogenesis and Relationship with Protective NKT Cells

In the development of obesity, the inflammatory environment created by NKT cell activation leads to insulin resistance and impaired glucose tolerance, which further accelerates metabolic changes that promote weight gain through increased fat mass. Meanwhile, recent studies on the suppression of obesity have provided insight into how NKT cells prevent obesity other than by producing anti-inflammatory cytokines. Fat mass is actively reduced in brown (B)AT through thermogenesis (41). BAT contains thermogenic mitochondria that express uncoupling protein (UCP)1 and contribute to energy expenditure, in contrast to WAT (42). UCP1-expressing adipocytes with thermogenic capacity, known as beige or brite cells, also develop in WAT in response to various stimuli (43). The relationship between iNKT cells and thermogenesis was demonstrated by the finding that activated iNKT cells enhanced fibroblast growth factor 21 production and increased the number of beige cells in WAT, which in turn increased thermogenesis and weight loss (44). Several recent studies have demonstrated that innate immune cells play an important role in the induction of beige cells. vNKT cells and ILC2s induced by IL-25 produce IL-13 and regulate glucose homeostasis to protect against obesity (37). ILC2s also sustain eosinophils that produce IL-4, which stimulates M2 macrophages in VAT (45). IL-4 further stimulates M2 macrophages to secrete catecholamines for the induction of thermogenic gene expression in BAT and lipolysis in WAT (46). IL-33 is also critical for the maintenance of ILC2s in the induction of beige cells in WAT and the regulation of energy expenditure. ILC2s produce methionine-enkephalin peptides that can act directly on adipocytes to upregulate UCP1 expression and promote beiging (47). These findings indicate that the innate immune system in AT, including iNKT cells, macrophages, and ILCs, controls thermogenesis by inducing beige cells, which is an important mechanism for the regulation of obesity and insulin resistance in addition to the control of AT inflammation via the production of anti-inflammatory cytokines.

APC FOR NKT CELLS IN AT

Natural killer T cells in DIO act as NKT1 or NKT2 (or AT-resident NKT) cells through interactions with CD1d-expressing cells in AT. Many cell types in AT express CD1d, including macrophages, dendritic cells, adipocytes, and possibly others. Recent studies have shown that adipocytes can activate T cells and NKT cells through antigen presentation (48,49). CD1d expressed on the surface of adipocytes can induce helper T cell (Th)1 and Th2 cytokine release by iNKT cells depending on the co-expression of microsomal triglyceride transfer protein and CCAAT/enhancer-binding protein β and δ, even in the absence of exogenous ligands (48), suggesting that adipocytes express ligands that are recognized by NKT cells.
To determine whether the interaction between NKT cells and adipocytes influences DIO, we analyzed mice with adipocyte-specific CD1d1 deletion (adipoq-cre CD1d1fl/fl) and found that they gained less weight than control mice fed an HFD (50), consistent with our findings from conventional CD1d KO mice (24). A decrease in IFN-γ and a concomitant increase in adiponectin were observed following disruption of the NKT cell/adipocyte interaction, which ameliorated AT inflammation and insulin resistance. On the contrary, another study showed that adipocyte-specific CD1d1 deletion reduced IL-4 expression in adipose iNKT cells and increased AT inflammation and insulin resistance (51), in accordance with an earlier report (49). The fact that these studies reported opposite results using the same conditional (c)KO mice provides strong evidence that adipocytes are the APCs for NKT cells in modulating AT inflammation, and that different HFDs can explain the discrepancy in the reported roles of NKT cells in the development of obesity (50,51).

CD1d2-Restricted NKT Cells

The fact that opposite results were obtained using the same cKO mice is critical, because it excludes the possibility that the results simply reflect the use of either pro-aggravating [CD1d1 KO; (52)] or pro-ameliorating [CD1d1/d2 KO; (53,54)] mice (Table 1). Although it was reported that CD1d2 does not specify a particular NKT cell population (55), CD1d2 may affect the development of obesity in CD1d1 KO mice. Indeed, it was recently reported that CD1d2 can present distinct species of glycosylceramide (GlyCer) and affect the differentiation of NKT cells (56). Thus, the possible contribution of CD1d2-restricted NKT cells to the development of obesity remains to be determined, although contradictory results were obtained regarding DIO using the same cKO mice that express neither CD1d1 nor CD1d2 on adipocytes (50,51). In addition to studies of genetically engineered mice, other factors affecting the development of obesity have been investigated, including microbiota (especially those in the gut) and fat composition, both of which are related to diet and influence the presentation of ligands to NKT cells.

Microbiota

The findings that gut microbiota composition is a critical factor in the development of obesity come from studies using germ-free (GF) animals. Conventionally raised mice have higher total body fat than GF mice, although the latter consume more food (57). When the two types of mice are fed a sugar-rich HFD, GF mice are protected from DIO owing to increased fatty acid (FA) oxidation and AMP-activated protein kinase activity (58). On the other hand, pathogenic alterations in gut microbiome profiles (i.e., dysbiosis) in obesity affect energy metabolism (59). In fact, the abundance of Firmicutes is increased whereas that of Bacteroidetes is decreased in ob/ob mice with a mutation in the gene encoding leptin; on the contrary, lean ob/+ mice fed a polysaccharide-rich diet predominantly harbor Bacteroidetes (60). Similar differences in gut microbiota composition are also observed between obese and lean human subjects (61). Furthermore, GF mice inoculated with microbiota from obese twin donors developed increased adiposity when compared with those receiving transplants from lean twin donors, and did not develop increased adiposity when they were co-housed with the latter mice (62). It is unclear whether microbiota or diet (calorie excess) is responsible for obesity.
Although gut microbiotas are transmissible and can be altered by diet, they may have the ability to directly alter systemic energy metabolism and thereby control weight gain. Several studies have demonstrated that NKT cells play a central role in maintaining homeostasis at mucosal surfaces (63,64). CD1d KO mice exhibit altered gut microbiome profiles, which exacerbate intestinal inflammation induced by dextran sodium sulfate treatment and even in the steady state. Compared to non-littermate B6 mice, these mice have a higher abundance of segmented filamentous bacteria that can induce Th17 cells but reduced levels of Akkermansia, which may protect mice from developing colitis (65). A recent study using CD1dfl/fl CD11c-Cre cKO mice also showed that CD1d expression on CD11c+ cells contributes to the maintenance of intestinal homeostasis through regulation of the immunoglobulin A repertoire and induction of Tregs in the gut (66). Bacteroides fragilis, a prominent gut bacterial species, produces the glycosphingolipid α-GalCerBf, which is structurally related to the prototypic ligand α-GalCer, or KRN7000 (67). α-GalCerBf stimulates iNKT cells in the context of CD1d, suggesting that alterations in the abundance of B. fragilis caused by obesity can affect NKT cell homeostasis. Disruption of the NKT cell/CD1d interaction, which is required to maintain intestinal homeostasis, may affect energy consumption and fat storage by modulating gut microbiota composition.

Fat Composition

α-Galactosylceramide is a potent activator of NKT cells, and various analogs have been synthesized that elicit distinct cytokine responses (68). For instance, α-C-GalCer and RCAI-56 promote Th1-biased responses (69,70), whereas OCH and 20:2 promote Th2-biased responses (71). Natural ligands for CD1d have also been identified. Several mammalian glycosphingolipids such as isoglobotrihexosylceramide and β-glucosylceramide (β-GlcCer) were shown to act as self-lipid antigens (72). However, a recent study showed that a small quantity of stimulatory α-GlyCer was present in β-GlcCer preparations (73). Accordingly, pure β-GlcCer may not activate iNKT cells, which can respond to a minor fraction of α-GlyCer. Phospholipids (PHLs) that are a major component of mammalian cell membranes, including phosphatidylinositol (PI), phosphatidylethanolamine (PE), and phosphatidylglycerol (PG), are natural antigens recognized by NKT cells (74). Lysophosphatidylcholine is also a natural ligand that is recognized not only by human iNKT cell clones (75) but also by Vα24−/Vβ11− vNKT cells (76). Characteristic lipid abnormalities observed during the course of obesity include an increase in triacylglycerol and cholesterol levels in the low-density lipoprotein fraction, with a corresponding decrease in high-density lipoprotein cholesterol. In addition, obesity-related changes in serum lipids such as FAs, PHLs, and their oxidation products, as well as oxylipins, sphingolipids, and their metabolites, contribute to the health status and risk of comorbidities in obese patients (77). A lipidomics analysis demonstrated that changes in PHL concentrations may contribute to the development of insulin resistance and metabolic syndrome (78). Elevated circulating levels of phosphatidylcholine, PI, PE, and PG have been detected in subjects with nonalcoholic steatohepatitis when compared with those without (79).
Although the molecular basis for the correlation between NKT cell activation and altered PHL levels in obese subjects remains unclear, some PHLs may affect NKT cell biology, based on the observations that the concentrations of Cer species (C18:0, C20:0, and C24:1) and the total Cer level were higher in type 2 diabetes; insulin sensitivity was inversely correlated with C18:0, C20:0, C24:1, C24:0, and total Cer levels; and increased tumor necrosis factor (TNF)-α concentration was correlated with the levels of C18:1 and C18:0 ceramide (80). The mechanism of insulin resistance in obese patients with an elevated Cer concentration may involve inflammation induced by TNF-α.

Figure 1 | Natural killer T (NKT) cell-based modulation of AT inflammation. NKT cells exhibit opposing functions: a pro-inflammatory response that promotes AT inflammation and insulin resistance through the release of IFN-γ (NKT1), which reduces the pro-ameliorating adipokine adiponectin, and a regulatory response that suppresses inflammation via the production of IL-4 (NKT2) and IL-10 (NKT10) and increased thermogenesis leading to energy expenditure. These NKT [mostly cluster of differentiation (CD)1d1-restricted and very few, if any, CD1d2-restricted] cell functions are presumed to be affected by two mutually interacting factors, namely dietary fat composition and a microbiome in which the Bacteroidetes and Firmicutes phyla predominate.

CONCLUSION

Obesity-associated inflammation in AT contributes to metabolic syndrome and is controlled by adipocytes and NKT cells together with other immune cells, as discussed in this review. NKT cells appear to respond to lipid antigens on adipocytes and modulate inflammation (either by ameliorating or by aggravating this process) depending on the input, i.e., dietary lipids and ligands derived from the microbiome (Figure 1). Although the critical factors that give rise to the distinct outcomes of NKT cells remain elusive, future investigations should focus on two mutually interactive topics: (1) gut microorganisms that regulate energy consumption and their modulation/maintenance by NKT cells, and (2) diet/fat composition that can alter the gut microbiota, the balance of lipid species, and the synthesis of endogenous lipid antigens that affect NKT cell activation. Furthermore, elucidating the mechanisms of BAT maintenance and WAT beiging by NKT cells can provide a basis for the development of strategies to reverse metabolic dysregulation and reduce fat mass.
Simulation of the Drying Process of Polysaccharide Extract Solution in a Spray Dryer
The drying process of droplets into particles in a spray dryer is influenced by several factors, including the temperature and the flow rate of the inlet air, which acts as the carrier gas and drying medium. The effect of the inlet air temperature on the spray dryer system was elucidated in this study. A solution containing 35 w% polysaccharide extract with water as the solvent was injected into a spray dryer. The inlet air temperature was varied at 135, 170, and 200 °C at a flow rate of 3 L/min. The spray dryer geometry was a tubular-conical chamber with a cylinder length of 48.8 cm and a diameter of 15.2 cm. The spray dryer unit was simulated using the CFD code Fluent Ansys R15.0, with the second phase of droplets/particles modelled by the Lagrangian discrete phase model (DPM). The inlet air temperature affected the size of the particles: increasing the air temperature increased the average diameter of the particles leaving the chamber while reducing the standard deviation. The air flow rate also affected the particle size: the higher the air flow rate, the smaller the average particle diameter and its standard deviation. The evaporation rate increased with both the flow rate and the inlet air temperature. The resulting particles had average diameters of 1.28–1.9 μm when dried with continuously flowing hot air. The simulation results were validated against experimental work, with a discrepancy of less than 20%.
Introduction
Increasing consumer demand in the food, health, and polymer industries drives the need for higher-quality products that can be stored for a long time and do not require much storage space. One method used to improve product quality is drying. The products of a drying process can be granules, lumps, or powder. The advantage of a powdered product is that it can be stored for a longer time, and its lighter weight and smaller size reduce distribution and production costs (Patel et al., 2009). One method used to produce powdered products is the spray dryer. Spray dryers are widely used for food, pharmaceuticals, biochemicals, plastics, resins, ceramic materials, detergents, pesticides, fertilizers, organic and inorganic chemicals, skim milk powder, baby food, instant coffee, tea, dried fruits, juices, enzymes, and vitamins. The spray dryer has several advantages, such as a fast drying rate, a wide operating temperature range, uniform products, and high capacity. Spray dryers are generally used in the final processing step because they control the final product quality. Water removal from the product affects its quality. However, powder production in a spray dryer cannot easily be kept at optimal conditions, which affects the final powder production on an industrial scale. This is because the drying process in a spray dryer involves complex turbulent flow, and the interaction between particles and air is difficult to observe inside the dryer (Defraeye, 2014; Salem et al., 2011). Therefore, Computational Fluid Dynamics (CFD) is needed to observe the phenomena that occur in the spray dryer. CFD has been applied to analyze the drying mechanism of a spray dryer with a rotary atomizer in three dimensions (Huang et al., 2003).
The hot air co-current flow pattern affects the particle flow pattern in the spray dryer and the particle deposition (Rajashekhara and Raghavendra, 2015). The higher the flow rate of hot air, the larger the average particle diameter (Woo et al., 2008). CFD simulations can predict the whole range of phenomena that occur in a spray dryer. However, the complex interaction between drying droplets and air makes simulation difficult. CFD simulations still require important experimental data such as the velocity, temperature, and droplet characteristics selected for the operation of the spray dryer (Lee et al., 2013). In the present study, the performance of a spray drying system was studied. The simulation results were validated against experimental work. The effects of air temperature and air flow rate were evaluated to better understand the performance of the spray drying system. CFD can provide detailed conditions throughout the whole system; therefore, the flow pattern in the spray drying system is also presented.
Simulation procedures
The simulation was carried out using the ANSYS® 15 Academic Package. The system geometry was modelled using DesignModeler®, and the grid and node numbers were determined using Meshing®. The iterative Computational Fluid Dynamics (CFD) procedure used FLUENT®.
The geometry system and grid generation
The spray dryer system consisted of a two-fluid nozzle as the atomizer and a chamber. The chamber had two zones, i.e., cylindrical and conical zones. The spray dryer geometry was based on the experiment (Matsunaga et al., 2013). Figure 1(a) shows the detailed geometry and the size of each part modelled in the spray dryer system. The two-fluid nozzle was used as the atomizer; its detailed geometry is shown in the inset of Figure 1(a). The feed inlet was located at the center of the nozzle, with a diameter of 1.1938 mm. The air flowed through a slit with a width of 0.0508 mm and a slope of 45° towards the center. The cylindrical chamber was 48.8 cm long with a diameter of 15.2 cm. The outlet was located at the bottom of the chamber, where the cone narrowed to a diameter of 3.6 cm. The grid was generated with tetrahedral cells, consisting of 142,656 cells, 432,928 faces, and 147,684 nodes, as shown in Figure 1(b). The skewness was 0.078, indicating an excellently developed grid. The feed was a polysaccharide solution with a concentration of 35 w%, a temperature of 160 °C, and a flow rate of 1 mL/min. The air temperature was varied at 135, 170, and 200 °C. The air flow rate was kept constant at 3 L/min.
Model selection
The three-dimensional simulation was carried out as a steady-state system. The turbulent flow was modelled using the RNG k–ε model. The droplet trajectories and the rates of mass and heat transfer between droplets and air were modelled using the Lagrangian discrete phase model (DPM). In the DPM, the fluid phase is treated as a continuum by solving the Navier–Stokes equations, while the dispersed phase is solved by tracking the droplets through the calculated flow field (Anonim, 2009). The DPM disregards the interaction among droplets, so the model is applicable when the droplet volume fraction is less than 0.25 of the total volume.
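As a rough illustration of what the DPM does for a single droplet, the sketch below integrates one droplet's velocity under Stokes drag and shrinks its diameter following the d²-law of evaporation. This is not the FLUENT® implementation; the air velocity, viscosity, and evaporation coefficient are assumed placeholder values, not numbers from the study.

```python
import math

# Minimal 1-D sketch of Lagrangian droplet tracking: Stokes drag on the
# droplet velocity plus d^2-law evaporation of the diameter. All
# constants below are assumed for illustration, not taken from the study.
RHO_DROP = 1000.0    # droplet density, kg/m^3 (assumed ~water)
MU_AIR = 2.2e-5      # air dynamic viscosity, Pa*s (assumed, hot air)
U_AIR = 1.0          # local axial air velocity, m/s (assumed)
K_EVAP = 1.0e-10     # d^2-law evaporation coefficient, m^2/s (assumed)

def track_droplet(d0, u0=0.0, dt=1e-5, t_max=1.0):
    """Integrate one droplet until it evaporates or t_max is reached."""
    d, u, t = d0, u0, 0.0
    while t < t_max and d > 1e-7:
        tau = RHO_DROP * d * d / (18.0 * MU_AIR)       # Stokes response time
        u = U_AIR + (u - U_AIR) * math.exp(-dt / tau)  # exact drag relaxation
        d = math.sqrt(max(d * d - K_EVAP * dt, 0.0))   # d^2-law shrinkage
        t += dt
    return d, u, t

d, u, t = track_droplet(d0=3.0e-6)
print(f"diameter {d*1e6:.2f} um, velocity {u:.2f} m/s after {t*1e3:.0f} ms")
```

A full DPM run repeats this kind of integration for many droplets in a three-dimensional flow field and couples the evaporated mass and heat back to the gas phase, which the sketch deliberately omits.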
In the spray system, the droplet concentration is not more than 1 vol%; therefore, the DPM can be applied to our system. The droplet size distribution was modelled using the Rosin–Rammler parameters. The average droplet size d̄ was calculated from an empirical atomizer correlation in terms of the liquid flow rate ṁ, the surface tension γ, the air density ρ, and the atomization pressure difference ΔP. The minimum and maximum droplet diameters applied in the simulation were 1.66 and 5.88 μm, respectively, with a spread parameter of 3.5. The droplet distribution using the Rosin–Rammler parameters is stated as:
Y_d = exp[−(d/d̄)^n]
where Y_d is the droplet mass fraction, d is the droplet diameter, d̄ is the average droplet diameter, and n is the spread parameter.
Simulation validation
The simulation results were validated with the experiment conducted by Matsunaga et al. (Matsunaga et al., 2013). The polysaccharide extract was dried in a spray dryer using air at a flow rate of 3 L/min and temperatures of 135, 170, and 200 °C. Figure 2(a–c) shows the simulation results alongside the corresponding experimental results. The particle size distributions predicted by the CFD simulation and the experimental results at 135, 170, and 200 °C have discrepancies of 19.6, 13.5, and 18.3%, respectively. A discrepancy of less than 20% indicates that the simulation results are good enough for particle size distribution prediction.
Effect of air temperature on the particle size distribution
Table 1 shows the detailed drying characteristics for the different air temperatures at a flow rate of 3 L/min. The mean particle diameter decreases slightly with increasing air temperature, remaining around 1.3 μm. Increasing the temperature from 135 to 200 °C increased the evaporation rate from 6.50×10⁻¹¹ kg/s to 6.76×10⁻¹¹ kg/s. Accordingly, the drying time decreased from 726 to 676 ms as the temperature increased from 135 to 200 °C. The CFD simulation can also distinguish particles that hit the wall before being collected at the bottom from particles collected directly at the bottom without hitting the wall. Two boundary conditions available in FLUENT® are the escape and trap conditions. The escape condition was applied at the bottom outlet of the spray drying system; when a particle passes through this boundary, the calculation for that particle is stopped. The second condition, trap, was applied at the wall; when a particle reaches the wall boundary, its calculation is also stopped, but the remaining mass and energy of the droplet are transferred to the gas phase. In the bottom chamber, the average particle diameter increased with temperature: the average particle diameters were 1.63, 1.74, and 1.84 μm for inlet air temperatures of 135, 170, and 200 °C, respectively. These results are in accordance with most previously reported experimental results, in which a higher inlet air temperature led to the production of larger particles (Tonon et al., 2008). On the other hand, the average diameter calculated for particles hitting the wall decreased slightly with increasing temperature. The experimental work by Matsunaga et al. showed that increasing the inlet air temperature led to a slight reduction in particle size (Matsunaga et al., 2013). The internal water in a drying droplet is transferred more slowly at a lower air temperature, resulting in larger particles compared to droplets dried at a higher temperature. Increasing the temperature increased the evaporation rate.
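To make the injection specification concrete, the sketch below evaluates and samples the Rosin–Rammler distribution with the bounds (1.66–5.88 μm) and spread parameter (n = 3.5) quoted above. The mean diameter d̄ is an assumed placeholder, since the text does not state its value.

```python
import numpy as np

# Rosin-Rammler droplet size distribution used for the DPM injection:
# Y_d = exp(-(d/dbar)^n) is the mass fraction of droplets with diameter
# larger than d. Bounds and spread come from the text; D_BAR is assumed.
D_MIN, D_MAX = 1.66e-6, 5.88e-6   # m, from the simulation setup
N_SPREAD = 3.5                    # Rosin-Rammler spread parameter
D_BAR = 3.0e-6                    # mean diameter (assumed for illustration)

def rr_mass_fraction(d):
    """Mass fraction of droplets with diameter greater than d."""
    return np.exp(-(d / D_BAR) ** N_SPREAD)

def rr_sample(n_droplets, rng=np.random.default_rng(0)):
    """Sample diameters by inverting Y = exp(-(d/dbar)^n) within the bounds."""
    y = rng.uniform(rr_mass_fraction(D_MAX), rr_mass_fraction(D_MIN), n_droplets)
    d = D_BAR * (-np.log(y)) ** (1.0 / N_SPREAD)
    return np.clip(d, D_MIN, D_MAX)

d = rr_sample(10_000)
print(f"mean diameter {d.mean()*1e6:.2f} um, std {d.std()*1e6:.2f} um")
```

Sampling Y uniformly between its values at the two diameter bounds and inverting the distribution yields mass-weighted droplet diameters restricted to the stated range, which is how a bounded Rosin–Rammler injection is typically realized.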
Larger and smaller droplets tended to be located at the center of the chamber and near the chamber wall, respectively, which is why the smaller droplets tended to adhere to the wall. Therefore, the trap boundary condition at the wall was an appropriate choice. This is also supported by the flow pattern inside the whole chamber: particles with higher velocity were located at the chamber center, while particles with lower velocity resided near the chamber wall. The particles in the center of the chamber tended to move directly downward to the bottom outlet. The flow pattern and the path lines of the tracked particles are shown in Figure 3. The particle size distributions of the particles collected directly at the bottom (escape boundary condition) and the particles that hit the wall (trap boundary condition) are shown in Figures 4(a) and 4(b), respectively. The particle size increases with the air temperature, but the standard deviation decreases with increasing temperature. The particle diameter was in the range of 0.9 to 5.88 μm for all air temperatures.
Effect of air flow rate on the particle size distribution
Table 2 shows the effect of the air flow rate on the average particle diameter, standard deviation, average drying time, and evaporation rate in the spray drying system. For all particles collected, the average particle size was not influenced by the inlet air flow rate. However, for particles collected directly in the bottom chamber without hitting the wall, the average particle size decreased with increasing inlet air flow rate. Increasing the inlet air flow rate also increased the number percentage of escaped particles, i.e., particles collected directly in the bottom chamber without hitting the wall. A further evaluation of the particle size distribution as affected by the inlet air flow rate is shown in Figure 5(a–b). Increasing the inlet air flow rate shifted the particle size distribution toward smaller sizes for particles collected directly in the bottom chamber. On the other hand, the particle size distribution did not change with increasing inlet air flow rate for particles that first hit the wall before being collected.
Conclusion
The computational fluid dynamics (CFD) simulation successfully predicted the experimental spray drying of polysaccharide liquid extract. Three inlet air temperatures were evaluated, 135, 170, and 200 °C, at a constant flow rate of 3 L/min. The Lagrangian discrete phase model was selected to model the second phase of droplets. Increasing the inlet air temperature increased the average particle diameter while decreasing the standard deviation. On the other hand, increasing the inlet air flow rate decreased the average particle size. The results suggest that the selected simulation model is good enough to predict the particle diameter distribution, with a discrepancy of less than 20%.
Calcification Propensity (T50) Predicts a Rapid Decline of Renal Function in Kidney Transplant Recipients
Background: Serum creatinine level, proteinuria, and interstitial fibrosis are predictive of renal prognosis. The fractional excretion of phosphate (FEP)/FGF23 ratio, tubular reabsorption of phosphate (TRP), serum calcification propensity (T50), and serum Klotho level are emerging as determinants of poor kidney outcomes in CKD patients. We aimed at analysing the use of FGF23, FEP/FGF23, TRP, T50, and Klotho in predicting the rapid decline of renal function in kidney allograft recipients. Methods: We included 103 kidney allograft recipients in a retrospective study with a prospective follow-up of 4 years. We analysed the predictive values of FGF23, FEP/FGF23, TRP, T50, and Klotho for a rapid decline of renal function, defined as a drop of eGFR > 30%. Results: During the 4-year follow-up, 23 patients displayed a rapid decline of renal function. Tertiles of FGF23 (p = 0.17), FEP/FGF23 (p = 0.78), TRP (p = 0.62), and Klotho (p = 0.31) were not associated with an increased risk of rapid decline of renal function in kidney transplant recipients. The lower tertile of T50 was significantly associated with an eGFR decline > 30%, with a hazard ratio of 3.86 (p = 0.048), and remained significant in multivariable analysis. Conclusion: T50 showed a strong association with a rapid decline of renal function in kidney allograft patients. This study underlines its role as an independent biomarker of loss of kidney function. We found no association between the other phosphocalcic markers, FGF23, FEP/FGF23, TRP, and Klotho, and a rapid decline of renal function in kidney allograft recipients.
Introduction
Kidney transplantation is the best treatment modality for patients with end-stage kidney disease [1]. The glomerular filtration rate (GFR) normally stabilizes at approximately 60% of the donor's renal function before presenting a gradual decline influenced by numerous variables such as drug toxicity, rejection episodes, and infections [2,3]. Serum creatinine, proteinuria, and interstitial fibrosis are well-known predictors of kidney function evolution [4]. Defining novel non-invasive markers that could precisely predict individual eGFR decline in kidney allograft recipients (KARs) is important for patient management. Among potential tools, mineral metabolism biomarkers are of interest, as disorders of phosphorus and calcium homeostasis are common. Klotho is a protein mainly expressed in kidney proximal and distal tubular cells. During the early phases of experimental chronic kidney disease (CKD), Klotho expression is decreased [5,6]. Lower serum Klotho levels are associated with a higher prevalence of cardiovascular disease, arterial stiffness, and vascular calcification in the experimental setting and in some clinical observations [7–9]. Low serum Klotho levels are also significantly associated with an increased risk of poor kidney outcomes in CKD, dialysed, or transplanted patients [10]. Although the effect of Klotho in kidney allografts is unknown, its protective role in renal tubular cells and its inhibition of renal fibrosis seem to mitigate post-transplant ischemia-reperfusion injury and may alleviate delayed graft function [11,12]. FGF23 is a key phosphaturic hormone produced by osteocytes and osteoblasts, which increases early in CKD. FGF23 appears to be a sensitive marker of kidney disease and cardiovascular complications in the CKD population [13].
The phosphaturic action of FGF23 is based on its capacity to suppress renal phosphate reabsorption in the proximal tubules and thus to increase its urinary excretion at each nephron, referred to as the fractional excretion of phosphate (FEP). Tubular reabsorption of phosphate (TRP), defined as 1 − FEP, is emerging as a possible surrogate marker of phosphate regulation in pre-dialysis CKD patients, as it correlates with renal function [14,15]. Under the postulate that excreting phosphate is a marker of nephron stress, Bellasi et al. showed in a retrospective study that FEP is associated with end-stage renal disease (ESRD), but not with all-cause mortality risk, in a large cohort of stage 3b to 5 CKD patients [16]. Yamada and Kuro-o proposed the FEP-to-FGF23 ratio as an index that theoretically represents the number of healthy nephrons [17]. This ratio is an independent risk factor for renal progression [18] and was shown to be associated with aortic calcification in CKD [19]. Moreover, FGF23 is not only a risk marker for CKD progression and cardiovascular mortality in primary CKD but also seems important in KARs [13,20–23]. An elevated C-terminal FGF23 (cFGF23) concentration was associated with overall graft loss in a previous observation [24]. Kidney transplantation incompletely mitigates cardiovascular risk despite restoring renal function: KARs show markedly accelerated vascular calcification even with a stable renal function [25–27]. The serum calcification propensity test (T50) was developed to monitor the maturation time of calciprotein particles in serum [28]. A high calcification propensity (or low T50) is closely associated with progressive aortic stiffening and increased long-term mortality in CKD patients [29]. Conversely, a prolonged T50 indicates a high residual capacity of the patient's serum to prevent the formation of secondary calciprotein particles and is therefore indicative of an intact endogenous defence against calcification. T50 was shown to be an independent determinant of graft failure in kidney transplant recipients [30,31]. Whether T50 is associated with a rapid decline of renal function in KARs is unknown. Altogether, mineral metabolism markers may be of interest as non-invasive prognostic markers of renal function loss in KARs. In this study, we aimed to analyse the association of FGF23, FEP/FGF23, TRP, T50, and Klotho with the rapid decline of renal function in KARs, defined as a loss of eGFR > 30% at 4-year follow-up.
Material and Methods
We designed a retrospective study including adult KARs in whom serum had been kept for clinical reasons. Patients aged 18 or older, who received a kidney transplant between 1982 and 2013, who were followed routinely, and whose serum was collected in 2015 at our hospital were eligible for enrolment. Two nephrologists, not in charge of the patients, randomly selected 150 patients between January 2007 and December 2014. We excluded 21 patients because of the lack of available serum samples and 26 patients because of the lack of available follow-up, leaving 103 patients for the current analysis (Figure 1). Baseline characteristics, including medical history, co-morbidities, and transplantation-related outcomes, were collected from patient records. The patients' blood pressure, weight, and height were measured routinely during follow-up visits. Serum creatinine and other standard laboratory values were measured during routine follow-up visits or hospitalizations and recorded at the time of the collected sample.
Standard biochemical analyses were performed in our hospital using routine automated analysers. eGFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation from 2012 (CKD-EPI 2012) [32]. Creatinine was measured using the IDMS-traceable Jaffe kinetic method. We defined the threshold for a rapid decline of eGFR as a decrease greater than 30% at 4-year follow-up, as described in previous studies [33–35]. Frozen samples were used to measure FGF23, Klotho, and T50 in batch. Serum FGF23 levels were measured by ELISA using the C-TER Immunotopics kit [36]. An ELISA assay kit from IBL was used to measure serum levels of soluble Klotho [37]. The serum calcification propensity test (T50) was performed using a Nephelostar nephelometer [28]. The fractional excretion of phosphate (FEP) was calculated using the following formula: FEP = (24-h urine phosphate × serum creatinine)/(serum phosphate × 24-h urine creatinine) × 100. TRP was calculated as 1 − FEP. The TmP/GFR calculation depends on TRP: if TRP ≤ 0.86, TmP/GFR = TRP × serum phosphate; if TRP > 0.86, TmP/GFR = ((0.3 × TRP)/(1 − (0.8 × TRP))) × serum phosphate. Technicians from the laboratories were blinded to the clinical data and other results. The study was approved by the local ethics committee for human studies (CER 14-149) and performed according to the principles of the Declaration of Helsinki. All patients were contacted to provide written informed consent to participate in this retrospective study.
Statistical Analysis
Continuous variables are expressed as mean ± SD and categorical variables as numbers and percentages. p-values were calculated with Student's t-test for continuous variables or the Chi-squared test for categorical variables. FGF23 and proteinuria were logarithmically transformed before analysis due to their skewed distributions. Statistical significance was defined as p < 0.05, and all tests were two-tailed. For simple correlation analyses between phosphocalcic parameters, including T50, and renal function, we performed Pearson's tests after checking the linear associations with scatter plots. To test the hypothesis that phosphocalcic markers could predict a rapid decline of renal function, we performed log-rank tests for trends, comparing the risk of renal function loss for each variable. Time-to-event data were evaluated using Kaplan–Meier estimates and Cox proportional-hazards models. Hazard ratios, 95% confidence intervals, and p-values were calculated. The proportionality of hazards was graphically verified by plotting the log minus log of survival against time.
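The phosphate-handling indices above translate directly into code. The sketch below implements the FEP and TmP/GFR formulas as written in the methods; the variable names and example values are illustrative, not patient data.

```python
# Sketch of the phosphate-handling indices defined above. The formulas
# follow the text (FEP as a percentage; TmP/GFR chosen by the
# TRP-dependent rule); all example numbers are made up.

def fep_percent(urine_p, serum_cr, serum_p, urine_cr):
    """Fractional excretion of phosphate, %, from 24-h urine collections."""
    return (urine_p * serum_cr) / (serum_p * urine_cr) * 100.0

def tmp_gfr(serum_p, fep_pct):
    """TmP/GFR from serum phosphate and FEP, per the rule in the text."""
    trp = 1.0 - fep_pct / 100.0          # tubular reabsorption of phosphate
    if trp <= 0.86:
        return trp * serum_p
    return (0.3 * trp) / (1.0 - 0.8 * trp) * serum_p

# Example with made-up values (phosphate in mmol/L, urine in mmol/24 h):
fep = fep_percent(urine_p=25.0, serum_cr=0.110, serum_p=1.0, urine_cr=10.0)
print(f"FEP = {fep:.1f} %, TmP/GFR = {tmp_gfr(1.0, fep):.2f} mmol/L")
```

Note that the code treats TRP as a fraction (FEP divided by 100), which matches the TRP ≤ 0.86 threshold used in the TmP/GFR rule.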
Statistical analyses were performed using STATA 16.1 (StataCorp, College Station, TX, USA).
Outcomes
The primary outcome was the rapid decline of renal function, defined as a decline of eGFR > 30% over 4 years. For participants who died or were lost to follow-up, the last available eGFR was used to assess the primary outcome.
Characteristics of the Study Population
We included 150 KARs at different time points from transplantation (Figure 1). Among them, only 103 were analysed, because 21 patients had no serum available and 26 patients were lost to follow-up at 4 years. The mean age was 56 years; patients were mainly male (63%) and of Caucasian origin (98%). The clinical characteristics of the study participants are shown in Table 1. Demographic data from the two groups are comparable. In total, 23 patients had a rapid decline in renal function, defined as an eGFR decline > 30%.
Univariable and Multivariable Analysis of Predictors of Renal Function Decline
During the 4-year follow-up, 23 patients (23/103 = 22.3%) displayed a rapid decline of renal function according to our definition. Baseline mean eGFR was 54.4 ± 19.9 mL/min/1.73 m² in the group with rapid decline of renal function and 56 ± 18.4 mL/min/1.73 m² in the group without. We tested the known risk factors of a rapid decline of renal function used in the kidney failure risk calculation: age, sex, eGFR, albuminuria, albumin, phosphate, bicarbonate, and calcium at baseline. Sex, albumin, phosphate, bicarbonate, and calcium were not associated with a rapid decline of renal function in our population (Supplementary Table S1 and Figure S1). Albuminuria > 300 mg/24 h was strongly associated (HR 4.37, 95% CI [1.46–13.3], p = 0.008) with a rapid decline of renal function (Supplementary Table S1 and Figure S1G). Tertiles of Klotho, FGF23, FEP, the FEP/FGF23 ratio, TRP, and TmP/GFR were not associated with a rapid decline of renal function. However, the first tertile of T50 was strongly associated with a rapid decline of renal function in kidney allograft patients. For clarity in the graphs and the interpretation of results, we decided to use the higher values of the third T50 tertile as the reference and the lower values as the first tertile. The same reasoning applies to the TmP/GFR tertiles. Kaplan–Meier curves are shown in Figure 3.
Multivariable Analysis of Predictors of Renal Function Decline
In the multivariable analysis including eGFR at baseline, albuminuria, and T50, an eGFR < 30 mL/min, albuminuria > 300 mg/day, and a lower T50 were associated with a rapid decline of renal function (Table 3).
Discussion
In this retrospective study including 103 kidney allograft recipients followed for up to 4 years, phosphocalcic biomarkers such as FGF23, Klotho, the FEP/FGF23 ratio, and TRP were not associated with a rapid decline of renal function. T50, in contrast, was associated with a rapid decline of renal function, and this association remained significant in a multivariable Cox analysis including albuminuria and eGFR at baseline. A shortened T50 reflects an abnormal mineral metabolism that predisposes to vascular calcification, leading to fatal or nonfatal cardiovascular disease in ESRD patients [38]. A high arterial calcification burden increases arterial stiffness and is associated with a faster decline of kidney function in patients with arterial hypertension and/or CKD. Although the underlying mechanisms are not completely understood, Pruijm et al.
found that a shortened T50 was associated with lower renal tissue oxygenation (confirmed by MRI) and perfusion in hypertensive and CKD patients [39]. As the restoration of renal function after transplantation does not mitigate the cardiovascular risk due to accelerated vascular calcification in the transplanted patient, such alterations of perfusion may also occur in KARs. Since inflammation may occur for various reasons, and pro-inflammatory cytokines restrict the synthesis of Fetuin-A in the liver, the ability to counteract calcium-mineral imbalance-driven injury progression is further reduced [40]. This may lead to alterations of T50 and progression of vasculopathy in the kidney. Consequently, accumulating calciprotein particles might subsequently affect graft function through ischemia, leading to a more rapid decline of renal function. Further studies are needed to prospectively evaluate the impact of T50 measurement on KARs' prognosis and care. In a study including stable kidney transplant recipients with a median transplant vintage of 3.9 years, patients with lower T50 had a 2.3-fold increased risk of cardiovascular disease. Even after multivariate adjustment, each standard-deviation decrease in T50 was independently associated with a 22% greater CVD risk [41]. While low T50 values have been shown to be independently associated with increased all-cause mortality [31], this is the first time that T50 emerges as a marker of renal function decline in KARs. This marker may thus be of interest for determining renal prognosis and as a potential future therapeutic target. We did not observe any association between the Klotho level after transplantation and the rapid decline of renal function at 4 years. The inclusion of patients after transplantation may explain these results. Indeed, the pre-transplant Klotho level seems to be decisive for the post-transplant period, independent of transplant or donor characteristics [42]. Deceased-donor Klotho polymorphism also predicts early transplant glomerular lesions and function [43]. While the Klotho levels of the recipient and donor before transplantation may be significant, Klotho levels measured after transplantation were not associated with the decline of renal function in our cohort. No association between FGF23 and the rapid decline of renal function in KARs was observed. Up to now, few studies have evaluated the role of FGF23 in predicting mortality and graft loss in KARs, and their results are contradictory. In recent studies including patients with a median transplantation vintage of about 6–7 years, FGF23 was an independent risk factor for allograft loss, cardiovascular mortality, and all-cause mortality [22,44]. In contrast, FGF23 was not an independent risk factor for mortality in the short period (up to 48 months) post-transplant in a French cohort [45]. Thus, FGF23 is not a clear marker of renal function decline in KARs. TRP was also not associated with a rapid decline of renal function in kidney allograft patients, despite previous promising studies. The major limitation of our study is its monocentric, retrospective, and cross-sectional design. A sample size calculation was made at the time of the research protocol, and the risk of a negative study due to a lack of power was low. Some of our negative results might nevertheless be explained by the small sample size. Technical limitations regarding Klotho measurement in human research may also have affected the interpretation of the results [37]. In addition, Klotho serum levels in KARs may not reflect renal expression.
The same limitation applies to the FGF23 measurement. In our study, we measured the C-terminal portion of FGF23 (cFGF23). We therefore cannot completely exclude that intact FGF23 (iFGF23) would have a different predictive value than the C-terminal measurement, although it has been described that cFGF23 concentrations have a better discriminatory ability than iFGF23 concentrations in predicting overall (all-cause) graft loss [24].
Conclusions
In this retrospective study, T50 was the only parameter associated with a rapid decline of renal function in kidney allograft patients. T50 is a well-described predictive marker reflecting intravascular calcification propensity and the risk of cardiovascular mortality. We confirm that this marker is predictive of renal function evolution in KARs, but we were not able to demonstrate an association between FGF23, Klotho, FEP/FGF23, or TRP and the rapid decline of renal function. Our study confirms the need to initiate large-scale, prospective, multicentre studies for the longitudinal follow-up of KARs and to integrate T50 as a marker of renal prognosis in this population.
Three Statements
(1) What is known:
a. Serum creatinine and proteinuria are predictive of renal prognosis.
b. Novel individualized non-invasive markers of renal function decline are needed in kidney allograft recipients (KARs).
(2) What this study adds:
a. This study aims to define mineral metabolism markers that could precisely predict individual eGFR decline in KARs.
b. T50 is associated with a rapid decline of renal function.
(3) What impact this may have on practice:
a. Finding new non-invasive predictors of renal function decline in KARs could help clinicians monitor and anticipate kidney allograft dysfunction, and thus prevent invasive procedures such as kidney biopsy.
b. T50 may be used to predict a rapid decline of renal function. However, other mineral metabolism alterations may not play the same role in KARs' renal prognosis as in native kidney patients.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12123965/s1, Figure S1: Kaplan–Meier curves of a rapid decline of renal function by (A) sex, (B) calcium, (C) phosphate, (D) bicarbonate, (E) GFR, (F) albuminemia and (G) albuminuria; Table S1: Predictors of decline of renal function by univariable analysis.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the CCER Commission cantonale d'éthique de la recherche du canton de Genève (CER 14-149).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Factors influencing quality of life in patients followed in the neurosonology laboratory for carotid stenosis
Background: Quality of life (QoL) is one of the main endpoints in stroke prevention and acute stroke treatment studies. The aim of the current study was to identify risk factors affecting the QoL of patients with carotid stenosis in stroke prevention. Methods: Self-sufficient patients (50–80 years of age) with ≥ 20% carotid artery stenosis followed in the neurosonology laboratory, and without any severe illness within the last 12 months, dementia, or psychiatric disorders, were selected for the study after signing informed consent. Patients completed two standardized QoL questionnaires (WHOQoL-BREF and EQ-5D-3L) and a visual pain scale and provided covariate variables (medication, age, gender, education, and social situation); blood pressure and body mass index were recorded. Logistic regression (forward stepwise method) was used to identify factors affecting the individual domains of the QoL questionnaires. Results: Of the 584 consecutive patients, 502 met the inclusion criteria and 344 completely filled in both QoL questionnaires (164 men; mean age, 69.7 ± 7.8 years). An independent predictor of worse QoL in all domains was pain. Independent factors decreasing the QoL were lower level of education and blood pressure in the physical health domain, female gender in the psychological domain, and male gender in the social relationships domain. Independent factors decreasing satisfaction with health status were female gender and higher blood pressure. Factors negatively influencing satisfaction with the QoL were living alone, lower level of education, and higher diastolic blood pressure (WHOQoL-BREF). Factors negatively influencing mobility were age, male gender, living alone, lower level of education, and higher body mass index (EQ-5D-3L; p < 0.05 in all cases). Conclusions: Pain, blood pressure, body mass index, education, living alone, gender, and age were associated with the QoL in patients with carotid stenosis. Trial registration: ClinicalTrials.gov, NCT02360137. Registered on 26 January 2015.
Background
Atherosclerotic disease has been the leading cause of death and morbidity in developed countries over the past decades [1]. The carotid bifurcation and internal carotid arteries are sites with a very high predilection for the formation of atherosclerotic plaques [2]. Atherosclerotic carotid stenosis is a main cause of stroke [3], and stroke is the second most common cause of death and the leading cause of disability worldwide [4,5]. About 20% of the 15 million stroke patients worldwide need medical care and rehabilitation procedures each year after suffering a stroke, and approximately 5.7 million patients die [6–8]. New treatment methods (i.e., intravenous thrombolysis, endovascular treatment, and neurointensive care) have led to a decrease in the number of stroke patients with permanent disability [9–12]. Nevertheless, only about 50% of patients reach full independence after stroke despite the use of the new treatments [10–12]. Persisting impairment of motor function is the main, but not the only, reason for dependency in activities of daily living among stroke patients [13,14]. Post-stroke depression, cognitive impairment, urinary incontinence, and other non-motor impairments are relatively frequent health problems after stroke, leading to a decrease in the quality of life (QoL) [13–15].
Thus, QoL has become one of the main endpoints in stroke prevention and acute stroke treatment studies, and the evaluation of QoL has become a standard tool for assessing the effectiveness of stroke prevention and acute treatment [16–18]. The prevalence of carotid stenosis is approximately 10% in subjects > 70 years of age, the majority of whom are asymptomatic [19]; however, there is a lack of studies evaluating QoL in patients with carotid stenosis. Moreover, the majority of published studies have only included patients with carotid stenosis indicated for carotid revascularization, e.g., carotid endarterectomy or stenting [20–24]. A systematic review and meta-analysis of studies evaluating QoL after carotid revascularization showed that QoL did not change significantly in any domain 1 year after carotid endarterectomy or stenting. Nevertheless, the physical function, vitality, bodily pain, and social function domains were transiently worse 2 weeks after the procedure, and this worsening occurred more frequently after carotid endarterectomy than after carotid stenting [24]. Middleton et al. [25] showed that the QoL of patients 3 months after carotid revascularization was better than the QoL in the general population of patients with a previous history of stroke, but remained worse than in patients without a previous stroke. Thus, one may hypothesize that the risk factors and clinical consequences of atherosclerosis in patients with carotid stenosis may significantly influence the QoL. Identification of the factors influencing QoL in stroke prevention is necessary for treatment optimization and the preservation of QoL. The aim of the current study was to identify risk factors affecting the QoL of patients with carotid atherosclerotic stenosis in stroke prevention.
Methods
Questionnaires
A quantitative cross-sectional study with standardized QoL questionnaires (the short version of the World Health Organization Quality of Life questionnaire [WHOQoL-BREF] and the three-level EuroQol-5D [EQ-5D-3L]) was conducted to identify the factors influencing QoL in patients with carotid atherosclerotic stenosis in stroke prevention, including risk factors for atherosclerosis (age, gender, weight, height, body mass index, systolic and diastolic blood pressure, arterial hypertension, diabetes mellitus, hyperlipidemia, smoking, and alcohol misuse), diseases caused by atherosclerosis (coronary heart disease, myocardial disease, atrial fibrillation and other heart disease, transient ischemic attack, stroke, and peripheral arterial disease), arterial interventions (carotid endarterectomy, coronary artery bypass graft, surgery for peripheral arterial disease, carotid artery stenting, coronary artery stenting, and stenting of other arteries), and other concomitant factors (pain, social situation, and education). For this purpose, one generic questionnaire (WHOQoL-BREF) and one generic questionnaire widely used in stroke patients (EQ-5D-3L) were selected [26,27]. The reason for using two different generic questionnaires was to compare their usability for identifying risk factors influencing QoL. The WHOQoL-BREF questionnaire included two questions assessing the individual's overall perception of QoL and overall perception of their health, and 24 questions in four domains (physical health, DOM1; psychological, DOM2; social relationships, DOM3; and environment, DOM4). Individual items were assessed using a five-point Likert scale [26]. The mean score of the items within each domain was used to calculate the domain score.
The mean score of the first two items (How would you rate your quality of life? Q1; How satisfied are you with your health? Q2) was calculated separately, as defined in the WHOQoL User Manual [28]. The official Czech version of the WHOQoL-BREF questionnaire was used with permission from the World Health Organization. The second questionnaire was the generic questionnaire EQ-5D-3L [27]. This second generic questionnaire was chosen because it has been frequently used in stroke patients and contains different domains than the WHOQoL-BREF. The EQ-5D-3L contains five domains (questions) involving QoL (mobility, DOM1; self-care, DOM2; usual activities, DOM3; pain/discomfort, DOM4; and anxiety/depression, DOM5). The respondents used a three-level evaluation of the health state description (no problems, some or moderate problems, and an inability to do/extreme problems). The second part of the questionnaire was a 100-point visual analogue scale, which evaluated the current health status of the individual [29]. The official Czech version of the EQ-5D-3L questionnaire was used with permission from the EuroQol Research Foundation. The three-level EQ-5D questionnaire was used instead of the five-level questionnaire because no official Czech version of the EQ-5D-5L existed when the study was designed.
Participants
Participants from the observational stroke prevention study (ANTIQUE Trial, ClinicalTrials.gov Identifier: NCT02360137, registered on January 26, 2015) who were followed in the Neurosonology Laboratory were selected for participation in the study. The inclusion criteria were as follows: a) self-sufficiency, with 0–2 points on the modified Rankin scale (mRS); b) carotid atherosclerotic stenosis ≥ 20% according to ECST study criteria [30]; c) 50–80 years of age; and d) signed informed consent. The exclusion criteria were as follows: a) hospitalization for a severe illness, including stroke, during the last 12 months; b) dementia (Mini-Mental State Examination < 20 points); c) psychiatric disease, including depression (Beck Depression Inventory ≥ 20 points); d) severe visual or hearing impairment or other inability to complete the questionnaires based on the patient's judgement; e) terminal stage of disease, including active cancer with a life expectancy < 2 years (according to the physician's opinion); and f) living in a retirement home, nursing home, or hospital. The entire study was conducted in accordance with the Helsinki Declaration of 1975, as revised in 2004 and 2008. The study was approved by the local Ethics Committee of the Faculty of Health Sciences, Palacký University Olomouc (No. UPOL-7279/1040-2015). All subjects provided written informed consent before enrollment.
Clinical examination
Neurologic and physical examinations and duplex sonography of the cervical arteries were performed in all patients. The covariate variables (diseases, surgical procedures, medication, age, gender, level of education [primary, secondary, secondary with graduation, and tertiary], social situation [marital status, living alone, living with a partner or with family members], blood pressure, ten-level visual analogue pain scale, body mass index [BMI], self-sufficiency using the mRS, smoking, alcohol consumption [the usual daily dose of alcohol reported by the patient], and percent of carotid stenosis) were recorded. Data were collected from medical records and patients' self-reports.
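As an illustration of the domain scoring described above, the sketch below computes WHOQoL-BREF domain scores as the mean of each domain's 1–5 Likert items and applies the completeness rule used in this study (questionnaires with more than 20% of items missing were excluded). The item-to-domain assignment shown is a placeholder: the real instrument assigns specific numbered items to each domain and reverse-scores some of them.

```python
import numpy as np

# Sketch of WHOQoL-BREF domain scoring: each domain score is the mean
# of its 1-5 Likert items. Item groupings below are hypothetical
# placeholders (7/6/3/8 items per domain, as in the instrument, but
# without the real item numbering or reverse scoring).
DOMAIN_ITEMS = {
    "DOM1_physical": [0, 1, 2, 3, 4, 5, 6],
    "DOM2_psychological": [7, 8, 9, 10, 11, 12],
    "DOM3_social": [13, 14, 15],
    "DOM4_environment": [16, 17, 18, 19, 20, 21, 22, 23],
}

def domain_scores(items):
    """items: list of 24 Likert responses (1-5), or None where missing."""
    arr = np.array([np.nan if x is None else float(x) for x in items])
    if np.isnan(arr).sum() > 0.2 * len(arr):
        return None  # questionnaire incomplete: > 20% of items missing
    return {name: float(np.nanmean(arr[idx]))
            for name, idx in DOMAIN_ITEMS.items()}

responses = [3, 4, 2, 5, 3, 3, 4, 2, 3, 4, 4, 3,
             5, 4, 3, 2, 3, 4, 3, 3, 4, 5, 3, None]
print(domain_scores(responses))
```

Missing items within a retained questionnaire are simply excluded from the domain mean, which is one common way to operationalize the completeness rule.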
Statistics
Pre-study calculations (assuming an expected difference of 0.5 points in a WHOQoL domain for a variable present in 50% of subjects) showed that a minimum of 502 respondents was required to reach significant results with a two-tailed alpha of 0.05 and a power of 0.8, assuming that 60% of subjects (301 respondents) would pass the inclusion criteria and return completely filled questionnaires. Both questionnaires were evaluated as complete when ≤ 20% of items were missing. Missing covariate values did not exclude patients from the analysis, with the exception of the logistic regression. The normality of the data distribution was checked using the Shapiro–Wilk test. All data except body height were not normally distributed. Demographic data are reported as the median, mean and standard deviation, or number and percentage. Data from both questionnaires were processed as ordinal data with 5 (WHOQoL-BREF) or 3 (EQ-5D-3L) values, except for the visual analogue scale in the EQ-5D-3L, which was processed as quantitative data. Categorical variables in the two arms (completers and non-completers) were compared by Fisher's exact test. Continuous variables were compared by Student's t-test for normally distributed values; otherwise, the Mann–Whitney U test (for variables with 2 groups) or the Kruskal–Wallis test (for variables with more than 2 groups) was used. The Spearman correlation coefficient was calculated to evaluate the correlation between factors with qualitative or ordinal values and the questions or domains of the QoL questionnaires. Logistic regression (forward stepwise method) was used to identify factors affecting the individual domains of the QoL questionnaires (a separate multivariable logistic model for each domain or question; 12 models in total). The following variables were used for the logistic regression analysis: age (quantitative data); gender (qualitative data); marital status (semi-quantitative data); social situation (semi-quantitative data); level of education (semi-quantitative data); presence of arterial hypertension, diabetes mellitus, hyperlipidemia, coronary heart disease, or atrial fibrillation; history of myocardial infarction, other heart disease, stroke, transient ischemic attack, carotid endarterectomy, carotid artery stenting, coronary artery bypass graft, surgery for peripheral arterial disease, or coronary artery stenting (all qualitative data; a combination of self-reports and medical reports); smoking (self-report); alcohol consumption (self-report; 1 international unit = 10 mL of pure alcohol); BMI; systolic blood pressure; diastolic blood pressure; and visual pain scale (all quantitative data). The quantitative values of the 4 domains in the WHOQoL-BREF were dichotomized with a cut-off value of 13; Q1 and Q2 in the WHOQoL-BREF with a cut-off value of 3 (1 + 2 vs. 3 + 4 + 5); the 5 domains in the EQ-5D-3L with a cut-off value of 2 (1 vs. 2 + 3); and the visual analogue scale in the EQ-5D-3L with a cut-off value of 51. All tests were carried out at an alpha level of significance of 0.05. All data were analyzed using IBM SPSS Statistics (v22.0; SPSS, Inc., Chicago, IL, USA).
Results
Of the 584 consecutive patients examined in the Neurosonology Laboratory, 502 met the inclusion criteria, and 344 completed both QoL questionnaires (164 men; mean age, 69.7 ± 7.8 years) over a 3-month interval (April–June 2016) (Fig. 1). Demographic data are presented in Table 1.
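A minimal re-implementation of the dichotomization and a single logistic-regression step from the Statistics section above is sketched below. The actual analysis used a forward stepwise procedure in SPSS; here the data, the column names, and the direction of the domain cut-off are made-up assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the study data (n = 344); column names are
# hypothetical. Only one of the 12 models is fitted, without the
# stepwise variable selection used in the original analysis.
rng = np.random.default_rng(1)
n = 344
df = pd.DataFrame({
    "dom1_physical": rng.integers(4, 21, n),   # WHOQoL domain score (4-20)
    "age": rng.normal(69.7, 7.8, n),
    "pain_vas": rng.integers(0, 11, n),        # ten-level visual pain scale
    "sbp": rng.normal(140, 15, n),             # systolic blood pressure
})

# WHOQoL-BREF domain score dichotomized at the cut-off of 13
# (direction of the split is assumed here).
df["dom1_good"] = (df["dom1_physical"] >= 13).astype(int)

X = sm.add_constant(df[["age", "pain_vas", "sbp"]])
fit = sm.Logit(df["dom1_good"], X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios per unit of each predictor
```

Exponentiating the fitted coefficients yields odds ratios per one-unit change of each predictor, which is the form in which the study reports its results (e.g., OR per 1 unit on the visual pain scale).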
There was no statistically significant difference in any demographic parameter between completers (patients who completed the questionnaires) and non-completers (p > 0.05 for all items). Cronbach's alpha for the individual subscales of the WHOQoL-BREF in the presented study varied between 0.73 and 0.82. Cronbach's alpha for the EQ-5D-3L was 0.74. The correlations between the observed factors and QoL in the individual domains are shown in Table 2. Factors negatively influencing the QoL were identified using the forward stepwise method of multiple logistic regression and are presented in Tables 3 and 4. In the WHOQoL-BREF questionnaire, pain was identified as an independent predictor of worse QoL in all domains and questions (OR per 1 unit on the visual pain scale = 0.593–0.852, p < 0.01 for all cases) (Table 3). In the EQ-5D-3L questionnaire, the independent predictor of worse QoL in all domains and current health status was pain (OR per 1 level on the 10-level visual analogue pain scale = 0.505–0.787, p < 0.01 for all cases) (Table 4). A history of stroke, transient ischemic attack, myocardial infarction, arterial hypertension, diabetes mellitus, hyperlipidemia, coronary heart disease, atrial fibrillation, arterial surgery, stenting, smoking, and alcohol consumption had no significant influence on QoL in either questionnaire (p > 0.05 for all cases).
Discussion
The present study demonstrated that a history of vascular events (stroke, transient ischemic attack, coronary heart disease, and myocardial infarction), risk factors influencing the progression of atherosclerosis (arterial hypertension, diabetes mellitus, hyperlipidemia, smoking, and alcohol consumption), and vascular interventions for atherosclerotic stenoses were not associated with QoL in self-sufficient patients with carotid atherosclerotic stenosis and without dementia or moderate or severe depression. The only factors influencing the QoL in these patients were pain, blood pressure, BMI, living situation, level of education, age, and gender. Thus, the patient's current situation and health status, but not the medical history, were the main factors influencing the evaluation of QoL in these patients. An interesting result of our study was that the presence of arterial hypertension was not identified as a factor influencing the QoL in either questionnaire, in contrast to the actual blood pressure, which was negatively correlated with satisfaction with health status, satisfaction with the QoL, and the physical health domain evaluation measured on the WHOQoL-BREF, and with the current health status measured on the EQ-5D-3L. Lower blood pressure was associated with a better QoL and a better sense of patient well-being, as in previous studies [31,32]. Obesity represents another factor with a potential influence on the QoL [33,34]. BMI was identified as a factor negatively correlated with QoL in the mobility domain measured on the EQ-5D-3L in our study. Ford et al. [35] also showed that increased BMI significantly impaired health-related QoL and affected physical functioning more strongly than mental functioning. Social situation was a factor influencing the overall perception of QoL measured on the WHOQoL-BREF and mobility measured on the EQ-5D-3L. Patients living alone scored significantly worse in both domains. Loneliness is a known factor negatively influencing QoL in chronically ill patients and stroke survivors [36–38].
In agreement with other studies, pain was identified as a strong independent predictor of lower QoL in all domains of both questionnaires in our study [39–41]. Gender was identified as a factor significantly influencing QoL in the psychological domain and satisfaction with present health status (worse in females) and in the social relationships domain (worse in males) measured on the WHOQoL-BREF, and in mobility measured on the EQ-5D-3L. The results of published studies evaluating the influence of gender on QoL are inconclusive. Jönsson et al. [42] reported that female gender is associated with higher scores for the physical role, emotional function, and general health in stroke survivors. In contrast, van Eeden et al. [43] demonstrated higher QoL in males compared to females 2, 6, and 12 months after stroke; however, it should be pointed out that not only post-stroke patients were enrolled in our study. Age was the second non-modifiable factor influencing the QoL. Nevertheless, age correlated significantly with QoL only in the mobility domain measured on the EQ-5D-3L. A recently published Dutch study confirmed that, of all the domains in the EQ-5D-3L questionnaire, age influenced the elderly predominantly in the mobility domain [44]. The last identified factor influencing the QoL was the level of education. A lower level of education was associated with a worse overall perception of QoL measured on the WHOQoL-BREF, worse satisfaction with present health status, and worse QoL in the mobility domain measured on the EQ-5D-3L. The World Health Organization has determined education to be one of the social determinants of health, because low education levels are linked with poor health, more stress, and lower self-confidence [45]. Education has also been identified as an independent factor positively influencing QoL in the study performed by Vlajinac et al. [22]. The severity and character of persisting neurologic deficits after stroke could be additional factors influencing the QoL in patients with carotid stenosis [46–49]. A Korean study showed that stroke patients with facial palsy evaluated their QoL worse than patients with dysarthria [47]. Persistent visual deficits, hemiparesis, and recurrent stroke could also influence the QoL significantly [48,49]. We did not identify persistent neurologic deficits as a factor influencing QoL in self-sufficient patients after stroke. Nevertheless, the character and severity of neurologic deficits were not evaluated in the present study. Comparing the ability of the two questionnaires to identify factors influencing QoL, the EQ-5D-3L questionnaire identified not only the same five independent factors (gender, level of education, living alone, pain, and blood pressure) as the WHOQoL-BREF questionnaire but also two additional factors (age and body mass index). Furthermore, the EQ-5D-3L questionnaire consists of only 5 questions and 1 visual analogue scale, in comparison with 26 questions in the WHOQoL-BREF. These results suggest that the EQ-5D-3L questionnaire is more suitable than the WHOQoL-BREF for patients with carotid stenosis. The main limitation of the study was patient selection. We enrolled only self-sufficient patients visiting the Neurosonology Laboratory for the evaluation of atherosclerosis of the carotid arteries. Thus, patients with other etiologies of stroke could have been neglected. The second limitation was the monocentric character of the study.
Third, patients recently hospitalized for a severe illness, patients with dementia, psychiatric disease (including moderate or severe depression), severe visual or hearing impairment, patients in a terminal stage of disease, and patients living in a retirement home, nursing home, or hospital were excluded to avoid uncontrolled bias. Only 4% of screened patients were excluded for these reasons; thus, the results should be generalizable. Nevertheless, in further studies, extending the inclusion criteria, recorded variables, and sample size may enable the enrollment of a more heterogeneous group of patients with carotid stenosis and may subsequently identify more predictors of QoL.
Conclusion
Pain, blood pressure, BMI, education, living alone, gender, and age, but not a previous stroke or myocardial infarction, affect the QoL in self-sufficient patients with carotid stenosis without dementia or severe depression. Thus, current social and health status factors should be recorded in studies of patients with carotid stenosis. Awareness and understanding of the factors influencing QoL in patients with carotid stenosis are important for supporting a maintained or increased QoL and may also lead to more holistic management and patient care.
Traffic Performance of U-Turn Effects at Median Opening on Four-Lane Divided of Urban Street (Study Case: Yogyakarta, Indonesia)

The main purpose of this study was to determine traffic performance at median openings on an urban street. To achieve this purpose, data were obtained at four consecutive U-turn locations in the Gejayan area of Yogyakarta. Using VISSIM software, traffic delay, queue length, and average travel speed were analyzed for the existing condition, and solutions to improve the performance were then proposed. This research showed that the longest queue length was around 200 m and the delay was almost 25 s. The results suggest that U-turn locations with bad performance should be removed: reducing the number of U-turn locations improves traffic performance on the urban street by more than 50%. In addition, 100 m should be the minimum distance between U-turns to maintain traffic performance.

Introduction

A median is a section of road located in the middle of the carriageway and not used for vehicular traffic. Its main function is to separate traffic flows in opposite directions and to reduce the conflict area for vehicles. In addition, the median acts as a speed-change and deceleration/acceleration lane for right-turning and U-turning traffic. Vehicles making a U-turn at a median opening decelerate and thus affect the traffic flow in the same direction. Moreover, where the limited maneuvering area makes the turning radius of the road geometry short, vehicles are unable to make a direct U-turn; this forces other vehicles, in the same direction or the opposite one, to slow down or even stop.

Liu et al. [1] estimated the potential capacity of U-turns at unsignalized median openings on six-lane streets. Their results suggest that the Highway Capacity Manual can be applied to estimate U-turn capacity in the research area, and they built a procedure to estimate the capacity of U-turn movement at median openings on multilane highways. Liu et al. [2] found that vehicles making U-turns at median openings with wide medians (median nose width ≥ 6.4 m) have a larger capacity than those making U-turns at median openings with narrow medians (median nose width < 6.4 m). Zhou et al. [3] noted that the location of U-turn median openings has a great impact on the operation of U-turns. Mohapatra et al. [4] identified the conflict zone between a turning vehicle and oncoming vehicles at uncontrolled median openings on urban roads; a simple model was proposed to identify the boundary of the conflict zone at median openings, and a geometrical augmentation required at the median opening was suggested. Mohapatra et al. [5] also reported that the conflicting traffic on six-lane roads was relatively low, but that on four-lane roads almost all of the opposing traffic becomes conflicting traffic for U-turns. As stated by Liu et al. [6], U-turn movements lend themselves well to visualization with a traffic simulation program: VISSIM, one such program, provides reasonable capacity estimates for U-turns at unsignalized intersections with raised median cross sections once the crucial parameters are properly defined and calibrated. According to Yang and Zhou [7], a U-turn movement simulation using another software program indicated that the delay, as well as the travel time, was higher with the U-turn than without it.
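Staying for a moment with the capacity estimation of Liu et al. [1]: the sketch below shows the Highway Capacity Manual gap-acceptance potential-capacity formula that is conventionally applied to minor movements such as U-turns at unsignalized median openings. The critical gap and follow-up time used here are illustrative assumptions, not values reported in [1] or in this study.

```python
import math

def hcm_potential_capacity(conflicting_flow_vph, critical_gap_s, follow_up_time_s):
    """HCM gap-acceptance potential capacity (veh/h) of a minor movement,
    e.g., a U-turn crossing an opposing stream at a median opening."""
    vc = conflicting_flow_vph
    return vc * math.exp(-vc * critical_gap_s / 3600.0) / (
        1.0 - math.exp(-vc * follow_up_time_s / 3600.0))

# Illustrative values only (assumed): an opposing flow of 1,500 veh/h,
# a 6.5 s critical gap, and a 3.5 s follow-up time.
cp = hcm_potential_capacity(1500, 6.5, 3.5)
print(f"Potential U-turn capacity: {cp:.0f} veh/h")
```

As the formula makes explicit, the capacity available to U-turning vehicles falls off exponentially with the opposing flow, which is why closely spaced median openings on a busy four-lane street accumulate queues so quickly.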
Hummer and Reid [8] determined the importance of arterial geometry in relation to total system time, average stops per vehicle, and average speed; their results showed that the median U-turn and superstreet median can improve travel time and average speed compared with the traditional two-way left-turn lane. Della et al. [9] made a simulated case study using VISSIM with different median opening widths, and the results showed that a narrow median opening leads to major delay and long travel times. Pirdavani et al. proposed improving road performance with a fly-over U-turn and simulated the traffic in VISSIM; they found that a median opening U-turn produces more delay than a fly-over U-turn. Other U-turn innovations have also been proposed, for example new types of U-turn intersection geometrically designed with a raised island providing a protected U-turn movement; Pirdavani et al. [10] found that this new type of U-turn facility commonly produces shorter travel times.

In this study, a two-way four-lane street with a median had more than 5 U-turn locations along a 3-km road length. The Gejayan area of Yogyakarta City is one of the leading business districts and carries high traffic flow, with many points that accommodate a median opening and a U-turn or right turn. Some vehicles cannot complete the U-turn in one movement, disturbing the traffic in the same direction and causing queues. Since the distances between U-turn locations at the median openings are very short, long queues often occur along the street. Therefore, it is necessary to determine the influence of U-turn movements on traffic delay at median openings on the urban street and to find appropriate solutions for optimizing traffic performance. In this research, there were 4 consecutive U-turn locations along the urban street in Yogyakarta City. The road was divided into segments; the research locations can be seen in Figure 1 and the detailed geometry in Figure 2.

Data Collection

The four-lane two-way urban street with a median at the research location was divided into segments covering the 4 consecutive U-turn locations. Primary data were collected on the road geometry, traffic volume, and volume of U-turning traffic. The positions of the surveyors are shown in the corresponding figure.

Traffic Simulation

Traffic performance was evaluated by calculating the queue lengths and delays on the road resulting from U-turn movements at the median openings. The traffic simulation was conducted using VISSIM software. After obtaining results for the existing condition, several scenarios to improve traffic performance were simulated, again in VISSIM, to find the most favorable configuration.

Traffic Volume and Characteristics

The highest peak-hour volume was recorded on Monday afternoon, from 16:30 to 17:30, with 5,559 vehicles/hour in the North-South direction and 3,072 vehicles/hour in the South-North direction. Motorcycles (MC) were the most common vehicle type in all segments, while heavy vehicles and non-motorized traffic were the least common. The highest traffic volume occurred at U4 in the North-South direction; the detailed volumes can be seen in Figure 4. The numbers of vehicles making U-turns at the research locations during the peak hour are shown in Figure 5.
The busiest U-turn location was U4, where there are shopping areas nearby. At the peak hour, the number of vehicles making U-turns was mostly below 100 vehicles/hour. As can be seen in Figure 6, motorcycles (MC) made the most U-turns, consistent with the motorcycle being the most frequently used vehicle in Indonesia.

To make the VISSIM traffic simulation accurate, the model parameters were calibrated. Since driving behavior in a developing country such as Indonesia differs from the software defaults, parameters such as the desired position at free flow and the standstill and driving distances were adjusted. As a result, the difference in traffic volume between the field data and the VISSIM simulation was less than 6.5%; the detailed comparison is given in Table 1 (a minimal sketch of such a calibration check is given at the end of this section). Since the simulated traffic volumes reproduced the field data, the model was used to assess traffic performance in the existing condition and in the proposed scenarios.

Queue Length and Delay

As stated by Dastidar and Adeli [11], tracking the queue length is important for analyzing congestion characteristics, and delay and queue length are key measures of traffic performance. In addition to traffic simulation, previous research by Liu et al. [12] shows that queue length can also be estimated with an equation model. Regarding the accuracy of simulated queues, Mystkowski and Adeli [13] reported that some programs reproduced observations 70 to 85 percent of the time, while others did so 50 to 85 percent of the time. Instead of traffic simulation, Murat et al. [14] showed that regression modeling can be used to estimate the delay at an isolated signalized intersection. As presented in Table 2, concerning the traffic simulations at the consecutive U-turns, the longest delay and queue length occurred at the U-turns with alley access to the major street; the shortest delay and queue occurred at location U3, which has no interference from an alley.

Traffic Performance Analysis of the Existing Condition

Based on the Guidelines of the Minister of Transportation of Indonesia [15] concerning the management and activities of traffic engineering, traffic performance was assessed using the speed parameter. From the VISSIM simulation results, speed was expressed as an average travel speed; travel time was taken from the vehicle travel-time counters, each with an observation distance of 50 meters. The detailed travel speeds can be seen in Figure 7. U3 had the highest speed; on average, however, U1 had the worst performance, with an average speed of 17 km/hour in both directions. Overall, the average travel speed from the south was better than from the north.

Alternative Solution I

To increase traffic performance, several solutions were proposed. Alternative I moved median opening 2 (U2) 60 meters to the north. This solution was selected because the access from the alley on the west side of the main road at U2 worsens the traffic, and because the distance between median opening 1 (U1) and median opening 2 (U2) was relatively short (67.2 meters). This solution allows the traffic from the alley to turn south at the relocated U2.
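Before turning to the detailed geometry of this solution, the calibration check mentioned above can be made concrete. The sketch below compares simulated and observed hourly volumes using a simple percentage difference and the GEH statistic; GEH is a common acceptance measure for VISSIM calibration but is an assumption here, since the paper itself reports only percentage differences, and the simulated volume used is a placeholder consistent with the reported <6.5% gap.

```python
import math

def geh(model_vph, observed_vph):
    """GEH statistic for comparing simulated and observed hourly volumes;
    values below 5 are conventionally taken as a good match."""
    return math.sqrt(2 * (model_vph - observed_vph) ** 2 / (model_vph + observed_vph))

def percent_difference(model_vph, observed_vph):
    """Absolute volume error as a percentage of the observed count."""
    return 100.0 * abs(model_vph - observed_vph) / observed_vph

# Observed peak-hour volume from the study (5,559 veh/h North-South) and an
# assumed simulated volume within the reported <6.5% difference.
observed, simulated = 5559, 5300
print(f"percent difference: {percent_difference(simulated, observed):.1f}%")
print(f"GEH: {geh(simulated, observed):.2f}")
```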
The detailed geometry before and after this solution can be seen in Figure 8.

Alternative Solution II

Alternative II closed the two U-turns in the middle of the sequence: median openings 2 (U2) and 3 (U3) were closed, so that their traffic was diverted to median openings 1 (U1) and 4 (U4). In addition, the traffic flow at U2 and U3 was lower than at the other locations. The detailed geometry of Alternative II can be seen in Figures 9 and 10.

Results of the Alternative Solutions

The traffic performance of the proposed solutions was analyzed to quantify the changes. As shown in Figure 11, the queue lengths and delays in the alternative solutions mostly decreased by approximately 40%. In Alternative I, the queue length was reduced significantly at U1 and disappeared entirely at U2 and U3, and the delays decreased by more than 50%. However, the queue at U4-A worsened slightly in all proposed solutions, and the delay increased by less than 36% at U3 in Alternative I and at U1 in Alternative II. The travel speed analysis for the alternatives is shown in Figure 12: speed increased significantly, by as much as 40% in Alternative I and 64% in Alternative II. On average, both alternatives improved the speed by more than 50%.

Conclusion

U-turn locations at median openings spaced less than 100 m apart along a street lead to bad traffic performance. This research showed that the longest queue length was around 200 m and the delay was almost 25 s. The results suggest that U-turn locations with bad performance should be removed; reducing the number of U-turn locations facing an alley improves traffic performance on the urban street by more than 50%. In addition, 100 m should be the minimum distance between U-turn locations at median openings to maintain traffic performance.
Influence of Rotator Cuff Tear Size and Repair Technique on the Creation and Management of Dog Ear Deformities in a Transosseous-Equivalent Rotator Cuff Repair Model Background: Redundancies in the rotator cuff tissue, commonly referred to as “dog ear” deformities, are frequently encountered during rotator cuff repair. Knowledge of how these deformities are created and their impact on rotator cuff footprint restoration is limited. Purpose: The goals of this study were to assess the impact of tear size and repair method on the creation and management of dog ear deformities in a human cadaveric model. Study Design: Controlled laboratory study. Methods: Crescent-shaped tears were systematically created in the supraspinatus tendon of 7 cadaveric shoulders with increasing medial to lateral widths (0.5, 1.0, and 1.5 cm). Repair of the 1.5-cm tear was performed on each shoulder with 3 methods in a randomized order: suture bridge, double-row repair with 2-mm fiber tape, and fiber tape with peripheral No. 2 nonabsorbable looped sutures. Resulting dog ear deformities were injected with an acrylic resin mixture, digitized 3-dimensionally (3D), and photographed perpendicular to the footprint with calibration. The volume, height, and width of the rotator cuff tissue not in contact with the greater tuberosity footprint were calculated using the volume injected, 3D reconstructions, and calibrated photographs. Comparisons were made between tear size, dog ear measurement technique, and repair method utilizing 2-way analysis of variance and Student-Newman-Keuls multiple-comparison tests. Results: Utilizing 3D digitized and injection-derived volumes and dimensions, anterior dog ear volume, height, and width were significantly smaller for rotator cuff repair with peripheral looped sutures compared with a suture bridge (P < .05) or double-row repair with 2-mm fiber tape alone (P < .05). Similarly, posterior height and width were significantly smaller for repair with looped peripheral sutures compared with a suture bridge (P < .05). Dog ear volumes and heights trended larger for the 1.5-cm tear, but this was not statistically significant. Conclusion: When combined with a standard transosseous-equivalent repair technique, peripheral No. 2 nonabsorbable looped sutures significantly decreased the volume, height, and width of dog ear deformities, better restoring the anatomic footprint of the rotator cuff. Clinical Relevance: Dog ear deformities are commonly encountered during rotator cuff repair. Knowledge of a repair technique that reliably decreases their size, and thus increases contact at the anatomic footprint of the rotator cuff, will aid sports medicine surgeons in the management of these deformities. The transosseous-equivalent rotator cuff repair is a contemporary technique that has been devised to optimize the construct biomechanics, footprint restoration, and tendon healing of a rotator cuff tear. 15 This method utilizes medially and laterally positioned suture anchors to establish a suture bridge that compresses the rotator cuff footprint, optimizes tendon-bone contact, and creates a construct that is biomechanically strong enough to withstand the physiologic stresses experienced in the early healing period. 
When compared with other commonly utilized rotator cuff repair techniques, transosseous-equivalent rotator cuff constructs have been shown to significantly improve footprint contact characteristics, 16 exhibit superior biomechanical properties, 17 and prevent extravasation of synovial fluid that inhibits tendon-bone healing. 1 Although studies have not conclusively shown the superiority of this technique based on clinical results, 14 multiple authors argue that this method should be the technique of choice. 4 Despite its inherent advantages, the transosseous-equivalent rotator cuff repair also possesses some frequently encountered shortcomings, notably redundancies in the rotator cuff tissue, commonly referred to as ''dog ear'' deformities. Various suture patterns [10][11][12][13] have been devised to minimize the occurrence of these deformities or to manage them once they are formed, as their consequences may include inadequate footprint restoration and impaired healing of the cuff to the greater tuberosity. Commonly encountered in other surgical wounds as well, dog ear deformities are the result of an asymmetric puckering or accumulation of tissue, typically in the center or at the apices of an incision. The etiology and management of these deformities are well described in the plastic and dermatologic surgery literature 2,6,7,19,21,22 ; however, there is a relative paucity of literature regarding the causes and implications of dog ear deformities in rotator cuff repairs. This study was performed to quantify the effects of dog ear formation on rotator cuff footprint restoration after repair, as well as to assess the association between rotator cuff repair method and the formation and management of dog ear deformities in a human cadaveric model. We hypothesized that the 3 tear sizes would be equally likely to produce a dog ear deformity and that the 3 repair methods would manage dog ear deformities equivalently.

MATERIALS AND METHODS

The study used 10 fresh-frozen human cadaveric shoulders (6 male, 4 female) with a mean age of 61.2 ± 1.7 years at time of death (range, 58-70 years). Specimens were stored at −20 °C and thawed completely before dissection. Each specimen was visually inspected to rule out any evidence of prior surgery or fracture. On dissection, 3 specimens were found to have massive rotator cuff tears retracted to the glenoid and were thus excluded. The remaining 7 shoulders were randomized and included in the study protocol.

Specimen Preparation

Specimens were mounted upright in standard fashion via a vice clamp to the scapula. Skin, superficial fat, and the deltoid were dissected from the scapula, clavicle, and proximal humerus to reveal the rotator cuff musculature and its tendon insertions. The acromion was removed at its base, and the clavicle was excised for improved visualization of the rotator cuff tendons. A nonabsorbable suture was placed in running, locking fashion into the muscle belly of the supraspinatus for approximately 6 cm, beginning just medial to the musculotendinous junction. The free ends of the suture were tied and draped over a pulley, and a 200-g weight was applied to provide constant tension on the rotator cuff repairs (Figure 1, A and B). A 7-mm drill hole was made from lateral to medial 8 cm distal to the greater tuberosity for passage of traction sutures used to reduce the cuff to the lateral footprint in the tear creation portion of the protocol (Figure 2).
Tear Creation

Tears were systematically created in the supraspinatus tendon by sharply dividing it from its insertion on the greater tuberosity, starting anteriorly at the posterior aspect of the bicipital groove and extending 3 cm posteriorly. Serial ellipses of tissue were resected, creating elliptical-shaped tears of 0.5, 1.0, and 1.5 cm at the apex in the medial to lateral direction, thereby modeling the spectrum from crescent to U-shaped tears and simulating tear retraction. Two 4.5-mm PEEK Corkscrew FT suture anchors (Arthrex, Naples, Florida, USA) were placed at the articular margin evenly spaced from anterior to posterior (1 cm apart in the 3-cm tear space) at 45° relative to the footprint surface (the deadman angle 20 ). After each tear was created, a microsuture lasso was used to pass the medial row sutures in a horizontal mattress fashion from deep to superficial 1 cm medial to the lateral edge of the rotator cuff tear, centered over each corresponding suture anchor. Rather than tying these sutures, they were tensioned, passed through the 7-mm drill hole in the midshaft of the humerus, and clamped on the medial aspect of the humerus to maintain reduction of the lateral edge of the rotator cuff to the greater tuberosity. Resulting dog ear deformities were injected with an acrylic resin mixture, and the rotator cuff was digitized from its lateral insertion to 1.6 cm medial 5 using the MicroScribe 3D Digitizer (Solution Technologies, Oella, Maryland, USA). Calibrated photographs were taken perpendicular to the footprint by photographing the rotator cuff and a 2-cm ruler. The sutures were then removed from the rotator cuff tissue, and an additional ellipse of rotator cuff measuring 5 mm medial to lateral at its apex was resected. Sutures were replaced 1 cm from the newly formed edge of the cuff, and the sequence of injection, photographs, and digitization was repeated. This same sequence was again repeated on a tear size of 1.5 cm in the medial-lateral direction.

Injection Technique

A mixture of 2 mL poly-methylmethacrylate (Acraweld Repair Powder; Henry Schein, Melville, NY, USA) and 1 mL acrylic liquid was placed into a 1-mL syringe. An 18-gauge intravenous catheter tip was used on the end of the syringe for the injection. The edge of the tissue forceps was used to compress the rotator cuff to the humerus 1.6 cm medial to the lateral edge (Figure 3). This dam prevented the acrylic resin mixture from passing medially into the patulous intra-articular space medial to the rotator cuff footprint. The mixture was then injected into the dog ears in similar fashion to cementing techniques, retracting the syringe as the potential space was filled with the compound. The volume of acrylic resin mixture required to fill the dog ear deformity was recorded to quantify the size of the dog ear deformity. After injection, 30 seconds were allotted to allow the mixture to cure and harden, providing a firm surface to digitize using the MicroScribe 3D Digitizer. The resin was then completely removed from the specimen after each repair technique was performed.

Repair Techniques

The 3 × 1.5-cm tear was repaired by 3 methods on each specimen in a randomized order.
These techniques included a traditional transosseous-equivalent repair (SutureBridge; Arthrex), a knotless transosseous-equivalent repair with nonabsorbable suture tape (SpeedBridge; Arthrex), and a knotless transosseous-equivalent repair with the addition of peripheral looped FiberLink (Arthrex) sutures placed at the apex of the subsequently created dog ears. All measurements for placement of anchors and passage of sutures through the supraspinatus tendon were carefully performed with a ruler to standardize the technique for each specimen. A single surgeon performed all repairs on all specimens.

SutureBridge

This repair was performed with two 4.5-mm PEEK Corkscrew FT suture anchors medially and two knotless 4.75-mm SwiveLock anchors (Arthrex) laterally. The medial row anchors were placed at the lateral edge of the articular margin of the humeral head evenly spaced from anterior to posterior (1 cm apart in the 3-cm tear space) at 45° relative to the footprint surface (deadman angle 20 ). The medial row sutures were passed in a horizontal mattress fashion 1 cm medial to the lateral edge of the rotator cuff tear, centered over each corresponding suture anchor. Both medial row knots were tied with a sliding double half-hitch knot first, followed by 3 alternating simple half-hitches. One limb from each suture anchor was threaded through each of the 2 lateral knotless anchors. The lateral row anchors were then inserted 1.5 cm lateral to the rotator cuff footprint, spaced 1 cm apart (Figure 4). If a specimen was randomized to either of the 2 SpeedBridge arms prior to the SutureBridge, 5.5-mm PEEK Corkscrew FT suture anchors were substituted for the 4.5-mm medial suture anchors to accommodate the larger medial anchor holes from the previously placed 4.75-mm anchors.

SpeedBridge

This technique was performed with 4 knotless 4.75-mm SwiveLock anchors, with the 2 medial anchors loaded with FiberTape (Arthrex). The medial row anchors were placed in the same locations previously described. Both limbs of the FiberTape from each anchor were passed through the supraspinatus tendon 1 cm medial to the lateral edge of the rotator cuff tear. One FiberTape limb from each medial anchor was threaded through each of the 2 lateral knotless anchors. As previously described, the lateral anchors were inserted 1.5 cm lateral to the rotator cuff footprint, spaced 1 cm apart.

SpeedBridge + FiberLink

This technique was performed as previously described for the SpeedBridge with the addition of peripheral looped FiberLink sutures. A microsuture lasso was used to penetrate the cuff 5 mm posterior to the bicipital groove and 1 cm medial to the lateral edge of the rotator cuff tear, at the apex of the dog ear deformity. The FiberLink was then shuttled through the cuff tissue, ensuring that the loop was on the superior aspect of the cuff. This step was repeated for the posterior dog ear deformity, placing a second looped peripheral FiberLink suture 5 mm posterior to the posterior medial row anchor. The tails of the FiberLink sutures were incorporated into the knotless lateral row anchors (Figure 5).

Measurement of Dog Ear Deformities

Immediately following each successive tear creation, as well as after each of the 3 repair techniques of the 1.5-cm tear, the rotator cuff tissue and greater tuberosity footprint were digitized using the MicroScribe 3D system via creation of point clouds.
Digitization was performed by a single member of the research team in a systematic fashion, using a probe to outline the rotator cuff with multiple medial to lateral lines starting anteriorly and moving posteriorly (Figure 6A). The boundaries of the rotator cuff footprint were then outlined systematically, starting with the medial border, followed by the lateral cuff edge, then finally the greater tuberosity footprint edge (Figure 6B). Surface curves were traced, lofted, and smoothed with Rhinoceros software (McNeel North America, Seattle, Washington, USA) to create 3-dimensional volumetric representations of each repair (Figure 6C). Dog ear height, width, and volume of rotator cuff tissue not in contact with the greater tuberosity footprint were calculated for the anterior and posterior dog ear in each repair. Dog ear height and width were then measured from the calibrated photographs of the specimens previously described (Figure 7).

Data Analysis

The volume, height, and width of rotator cuff tissue not in contact with the greater tuberosity footprint were calculated using the volume injected, MicroScribe-derived 3D reconstructions, and the calibrated photographs. Comparisons were made between tear sizes, dog ear measurement techniques, and repair methods utilizing 2-way general linear model analysis of variance (GLM ANOVA) and Student-Newman-Keuls multiple-comparison tests. Statistical significance was defined a priori as P < .05.

Intrarater Reliability

As dog ear deformities have not been previously quantified in the literature, novel methods of measuring these redundancies were created. Therefore, intrarater reliability testing was performed to demonstrate the reproducibility of our data collection methodology. To determine the repeatability of the calculations used to compute the dimensions of the heights and widths of the anterior and posterior dog ears as measured from photographs, a single observer computed the dimensions on 3 separate occasions, separated by at least 1 week between each calculation. Two-way ANOVAs (SAS, Cary, North Carolina, USA) were run for each dimension, with the condition of the cuff (0.5-cm tear, 1.0-cm tear, 1.5-cm tear, SutureBridge repair, SpeedBridge repair, and FiberLink repair) as one factor and measurement (first trial, second trial, third trial) as the second factor. Both factors were repeated on the 7 subjects that were studied. Student-Newman-Keuls multiple-comparison tests were run to discern differences between levels of a factor if significant differences were found (P < .05).

RESULTS

No statistical differences were found between trials for the anterior height (P = .9767), posterior height (P = .9035), anterior width (P = .7496), or posterior width (P = .2139). Repeatability was not affected by the condition of the rotator cuff for the anterior height (P = .9568), posterior height (P = .7030), anterior width (P = .6437), or posterior width (P = .0659). In terms of the precision of the length calculations, the standard deviation for all 3 trials was divided by the mean of all 3 trials for each condition for all subjects. The averages and standard deviations of these values, in percentage terms, were 3.46% ± 4.57% for anterior heights, 4.67% ± 6.56% for posterior heights, 3.47% ± 2.59% for anterior widths, and 6.29% ± 5.38% for posterior widths. Therefore, on average, the variation in all dimensions computed was within 6.5%.
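To illustrate the precision and repeatability computations just described: below is a minimal sketch of the per-condition coefficient of variation (standard deviation of the 3 trials divided by their mean) and a two-way repeated-measures ANOVA, using pandas/statsmodels as a stand-in for the SAS analysis actually used. All data values are randomly generated placeholders, not study data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Placeholder long-format data: 7 specimens x 6 cuff conditions x 3 trials.
rng = np.random.default_rng(0)
conditions = ["0.5cm", "1.0cm", "1.5cm", "SutureBridge", "SpeedBridge", "FiberLink"]
rows = [
    {"specimen": s, "condition": c, "trial": t,
     "height_mm": 10 + rng.normal(0, 0.3)}  # fake anterior dog ear heights
    for s in range(7) for c in conditions for t in range(3)
]
df = pd.DataFrame(rows)

# Precision: std of the 3 trials divided by their mean, per specimen/condition.
cv = df.groupby(["specimen", "condition"])["height_mm"].agg(
    lambda x: x.std() / x.mean())
print(f"mean coefficient of variation: {100 * cv.mean():.2f}%")

# Two-way repeated-measures ANOVA (condition x trial), both within-subject.
res = AnovaRM(df, depvar="height_mm", subject="specimen",
              within=["condition", "trial"]).fit()
print(res.anova_table)
```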
To determine the repeatability of the calculations used to compute the volumes of the anterior and posterior dog ears when a 1.6-cm-deep dog ear was selected, a single observer computed the volumes on 3 separate occasions, separated by at least 1 week between each calculation. Two-way ANOVAs (SAS) were run for each volume, with the condition of the cuff (0.5-cm tear, 1-cm tear, 1.5-cm tear, SutureBridge repair, SpeedBridge repair, and FiberLink repair) as one factor and measurement (first trial, second trial, third trial) as the second factor. Both factors were repeated on the 5 subjects that were studied. Student-Newman-Keuls multiple-comparison tests were run to discern differences between levels of a factor if significant differences were found (P < .05). No statistical differences were found between trials for the anterior dog ear volume (P = .2025) or posterior volume (P = .7103). Repeatability was not affected by the condition of the rotator cuff for either the anterior or the posterior volume measurement.

Tear Creation

Dog ear volumes and heights trended larger for the 1.5-cm tear, but there was no statistical difference between 0.5-cm, 1.0-cm, and 1.5-cm tears with any measurement modality (Figure 8, A-C).

Repair Methods of 1.5-cm Tear

Volume Measurements. When comparing volume measurement modalities, MicroScribe-derived volumes were significantly larger than injection volumes (P < .001). This result was expected, as the MicroScribe volume includes the thickness of the rotator cuff tissue, while the volume of injected acrylic resin only fills the potential space of the dog ear deformity. Anterior dog ear volume was significantly smaller for rotator cuff repair with peripheral FiberLink sutures compared with SutureBridge (P < .05) or SpeedBridge (P < .05) alone (Figure 9). This result was similar utilizing both MicroScribe- and injection-derived volumes for the anterior dog ear (P = .2851) and posterior dog ear (P = .1446). There was no statistically significant difference between repair methods for the posterior dog ear volume using any measurement modality.

Height and Width. Utilizing both MicroScribe- and calibrated photograph-derived dimensions, anterior dog ear height and width were significantly smaller for rotator cuff repair with peripheral FiberLink sutures compared with SutureBridge (P < .05) or SpeedBridge alone (P < .05). Similarly, posterior height and width were significantly smaller for rotator cuff repair with peripheral FiberLink sutures compared with SutureBridge alone (P < .05). When comparing measurement modalities, there was no statistical difference between calibrated photograph- and MicroScribe-derived data for anterior height (P = .2114), posterior height (P = .1753), anterior width (P = .6502), or posterior width (P = .1226).

DISCUSSION

Our study has shown reduced dog ear deformity volume for repair with peripheral looped FiberLink sutures compared with SutureBridge or SpeedBridge transosseous-equivalent techniques alone. The additional peripheral looped suture further compresses the rotator cuff tendon against the bone and can be placed at the apex of a dog ear deformity, maximizing tendon-bone contact. With this increased contact area at the greater tuberosity footprint, we would expect better healing, in part because of less fluid extravasation and thus less interference with healing.
1 While we were unable to determine a threshold tear size at which the surgeon should expect marginal dog ear deformities, with larger tears the surgeon should be prepared to address these deformities with either the modified suture bridge 11 or peripheral FiberLink sutures, as described here. The goals of a rotator cuff repair include restoration of the anatomic footprint without excessive tension, minimizing gap formation, and stable initial fixation that can be sustained until healing. 18 The transosseous-equivalent technique of rotator cuff repair achieves these goals in the majority of cases. However, the formation of peripheral dog ear deformities is not uncommon with this method. 10 A marginal dog ear deformity after repair may be considered similar to a bursal-sided partial-thickness rotator cuff tear, as it can result in the lack of contact between a substantial portion of the rotator cuff and the greater tuberosity. Beyond the understanding that dog ear deformities are more frequent in large tears than in medium tears, 10 knowledge of how these deformities are created and their impact on rotator cuff footprint restoration is limited. The ability to characterize, quantify, and manage these imperfections will prove useful to all who perform arthroscopic rotator cuff repairs. To our knowledge, this is the first quantitative study assessing the creation and management of dog ear deformities in rotator cuff repairs in a human cadaveric model.

As noted above, dog ear deformities arise from redundant tissue that accumulates, typically at the wound apices, thus creating the appearance of dog ears. Wounds on convex surfaces, similar to the rotator cuff footprint at the greater tuberosity, are more susceptible to this deformity. 21 Techniques used in plastic and dermatologic surgery to manage dog ear deformities, including fusiform excision, S-plasty, M-plasty, and V-excision, 2,[6][7][8]19,21,22 are not applicable to dog ear deformities in rotator cuff repairs. For large, retracted, U-shaped tears, margin convergence can be used to minimize formation of dog ear deformities. 3 Kandemir et al 9 introduced the concept of a repair ratio as a guide for application of margin convergence. The authors defined the repair ratio as the ratio of the length of the torn tendon edge to the length of the avulsed insertion site. For a repair ratio of 1, the entire avulsed tendon edge can be repaired to the greater tuberosity footprint; for a repair ratio of 2, only 50% of the tendon edge can be repaired to bone. The authors state that the remainder of the torn edge that cannot be repaired to bone should be repaired in a side-to-side fashion. Some tears, however, are not amenable to side-to-side repair and are destined to form dog ears at the periphery. While our data show that larger tears were more likely to result in larger dog ears, the study lacked the power to show this relationship to be statistically significant. Alternatively, Kim et al 11 reported a technique of using the suture tails from the lateral row anchors to create an additional site of compression in the marginal dog ear deformity, the so-called modified suture bridge. Other authors have proposed using additional anchors 11 or a triple-mattress repair with a single row of triple-loaded suture anchors 12,13 to manage these deformities. However, there is no literature quantifying the size of these dog ear deformities before and after these various repair techniques. This is the first study to quantify the height, width, and volume of dog ear deformities after various rotator cuff repair methods.
It was performed on human cadavers with ages consistent with the pathology of rotator cuff tears. Several limitations of this study should be considered. First, many variations in rotator cuff repair methods exist for the transosseous-equivalent repair. This study does not attempt to determine the dog ear deformity characteristics for all possible configurations, including the multitude of tear configurations, suture anchor designs, and suture materials, but rather provides a simple comparison between the suture bridge, the transosseous-equivalent repair with 2-mm fiber tape, and the transosseous-equivalent repair with 2-mm fiber tape and peripheral No. 2 nonabsorbable looped sutures incorporated into the lateral row anchors. We felt that these repair methods were representative of rotator cuff repair methods in clinical use. The dog ear volume determined by injection of the acrylic resin mixture may have been influenced by leakage into the glenohumeral joint space medially. An attempt was made to obstruct the escape of the mixture into the glenohumeral joint using tissue forceps compressing the cuff 1.6 cm medial to the repair edge (the mean medial to lateral width of the supraspinatus footprint). 5 The influence of this factor is negligible, given that there was no statistical difference between the injection- and MicroScribe-derived volumes. One goal of our study was to determine a threshold of tear retraction at which a sizeable dog ear deformity can be expected. We did not show a significant difference in dog ear sizes between tears with simulated retraction of 0.5, 1.0, and 1.5 cm. Tears of these sizes may behave differently in an in vivo model, as they are likely to retract over time, a phenomenon that we did not observe in our cadavers. Our study only examined 1 tear configuration, a 3-cm ellipse with a maximum 1.5-cm retraction. Further knowledge could be gained if this study were repeated on more cadavers with differing tear geometries, including large U-shaped tears. This is a potential limitation, since massive tears with further retraction to the glenoid were not assessed. Perhaps with larger medial to lateral tear sizes we would be able to determine a threshold tear that leads to sizeable dog ear deformities; this information would prove useful for preoperative planning purposes. While this study quantifies dog ears and demonstrates a reproducible method for their management, further studies are needed to assess the biomechanical and clinical impact such deformities have on healing and postoperative function.

CONCLUSION

In characterizing dog ear deformities and assessing the factors impacting their creation, this study helps the surgeon better understand how to avoid and manage these irregularities in rotator cuff repair. When combined with a transosseous-equivalent repair technique, peripheral looped FiberLink sutures significantly decreased the volume, height, and width of dog ear deformities, better restoring the anatomic footprint of the rotator cuff.
Post-synthetic Ti Exchanged UiO-66 Metal-Organic Frameworks that Deliver Exceptional Gas Permeability in Mixed Matrix Membranes

Gas separation membranes are one of the lowest energy technologies available for the separation of carbon dioxide from flue gas. Key to handling the immense scale of this separation is maximised membrane permeability at sufficient selectivity for CO2 over N2. For the first time it is revealed that metals can be post-synthetically exchanged in MOFs to drastically enhance gas transport performance in membranes. Ti-exchanged UiO-66 MOFs have been found to triple the gas permeability without a loss in selectivity, due to several effects that include increased affinity for CO2 and stronger interactions between the polymer matrix and the Ti-MOFs. As a result, it is also shown that MOFs optimized in previous works for batch-wise adsorption applications can be applied to membranes, which have lower demands on material quantities. These membranes exhibit exceptional CO2 permeability enhancement of as much as 153% when compared to the non-exchanged UiO-66 mixed-matrix controls, which places them well above the Robeson upper bound at just a 5 wt.% loading. The fact that maximum permeability enhancement occurs at such low loadings, significantly less than the optimum for other MMMs, is a major advantage in large-scale application due to the more attainable quantities of MOF needed.

The synthesis of the PIM-1 polymer is based on a rapid polycondensation reaction of TFTPN (12.5 mmol) and TTSBI (12.7 mmol) in the presence of excess K2CO3 (38.8 mmol), using a previously unreported solvent mixture of DMAc (25 mL) and CS2 (12.5 mL). S1-3 The reaction mixture (under inert atmosphere) was refluxed using a Dean-Stark trap at 165 °C. Precipitation of an orange solid occurred within the first 5 minutes; however, the reaction was continued for an hour to ensure high conversion of oligomers to polymer. The oligomer solution fraction was decanted to separate it from the solid polymer fraction, before dissolution in chloroform and recrystallization from methanol.

NMR

NMR spectra (13C and 1H) of the PIM-1 used in this study were recorded using a Bruker Av500 and a Bruker Av400X, respectively. Comparison of these spectra with those of our previous work and those previously reported S3 confirms that the synthesis solvent mixture used did not cause any reaction by-products.

FTIR

Fourier-transform infrared (FTIR) spectroscopy was undertaken with a Thermo Scientific NICOLET 6700 FT-IR on the product PIM-1 polymer, the synthesis reactants, and the range of TixUiO-66 samples used in the study. Polymer purity was confirmed by the absence of the characteristic high-intensity K2CO3 peaks in the PIM-1 spectra (Figures S3-S6). FTIR spectra of the prepared TixUiO-66 MOFs (Figures S8-S11) are nearly identical to one another, but differ significantly from that of the native UiO-66 (Figure S7), suggesting that Ti exchange had significant effects on the bonding conditions within the MOF structure. As observed in previous studies, S4,S5 extended exchange periods have only minor further effects on metal and ligand exchange. The decreases in the peaks at 1655 cm−1 and 2929 cm−1 are associated with the loss of terminal benzoic acid and of singly coordinated benzene dicarboxylic acid from the crystal surface of the MOF, respectively. These groups are the most labile, due to reduced chelation, and preferentially dissociate into the solvent during Post-Synthesis Exchange (PSE).
Ti-exchanged samples also exhibit a higher intensity than UiO-66 for the significant broad peak (3000-3600 cm−1) labelled as 'O-H stretch'. This is caused by the increased reactivity and exposure of the Ti-exchanged SBUs, which, due to the random nature of transmetallation, exhibit a range of hydroxylation bonding environments (TixZr6−xO6 to TixZr6−xO4(OH)4) after contact with atmospheric moisture.

Gel Permeation Chromatography

Gel permeation chromatography (GPC) of the PIM-1 polymer samples was performed on a Waters Alliance e2695 liquid chromatograph equipped with a Waters 2414 differential refractometer and 3 × mixed-C and 1 × mixed-E PLgel columns (each 300 mm × 7.5 mm) from Polymer Laboratories. The eluent was tetrahydrofuran (THF) at 30 °C (flow rate: 1 mL min−1). Number-average (Mn) and weight-average (Mw) molar masses were evaluated using Waters Empower Pro software. The GPC columns were calibrated with low-dispersity polystyrene (PSt) standards (Polymer Laboratories), and molar masses are reported as PSt equivalents. A third-order polynomial was used to fit the log Mp vs time calibration curve, which was linear across the molar mass range 2 × 10^2 to 2 × 10^6 g mol−1. The modified rapid polycondensation synthesis route used in this study generated a polymer with an average molecular weight comparable to previous literature, but with a higher number of short-chain polymer and oligomer species. S1,S3

UiO-66 was prepared as previously reported. S4,S6 Equimolar quantities (43 mmol) of zirconium tetrachloride and terephthalic acid were reacted in the presence of a large excess (684 mmol) of benzoic acid in a dimethylformamide:water (1650:83 mL) solvent. The resulting product was washed sequentially with DMF and MeOH before being dried under vacuum at 100 °C.

PXRD

Powder X-ray diffraction (PXRD) measurements were completed on a Bruker D8 Advance X-ray diffractometer, using Cu K-alpha radiation (40 kV, 40 mA), equipped with a LynxEye silicon strip detector. Samples were scanned over the 2θ range 5° to 85° with a step size of 0.02° 2θ and a count time of 0.4 seconds per step.

Figure S12: Powder X-ray diffraction patterns for the prepared native UiO-66 and TixUiO-66. Patterns are stacked for comparison. Inset shows peak broadening with Ti substitution.

Adsorption Isotherms

Adsorption isotherms for carbon dioxide (273 K, 298 K) and nitrogen (77 K, 273 K) of the prepared titanium-exchanged and native UiO-66 were measured using an ASAP 2420. Metal-organic framework samples were activated at 120 °C under vacuum overnight prior to analysis. Langmuir and BET surface areas were calculated from the nitrogen isotherms at 77 K.

Membrane Preparation

Membranes were cast from ~0.2 g/mL chloroform solutions in 75-mm-diameter PTFE dishes, covered in perforated aluminium foil, and vacuum dried (353 K) overnight before use. Single gas (N2, H2, CH4, and CO2) permeation measurements were undertaken in duplicate on the day following casting to maintain a consistent processing history. S7 Units: Barrer = 10−10 (cm3 STP) cm cm−2 s−1 cmHg−1.

Differential Scanning Calorimetry

DSC measurements were made using a Mettler Toledo differential scanning calorimeter. Samples were encapsulated in aluminium pans and heated from 25 °C to 500 °C at 10 °C/min.

Figure S17: Comparison of Ti5UiO-66 membranes against MOF loading DSC plots. Arrows highlight observed changes in peak decomposition temperature and endothermic peak area (Figures S18, S19).
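To illustrate how a BET surface area is extracted from the 77 K nitrogen isotherms mentioned above: a minimal sketch of the linearized BET analysis over the conventional 0.05-0.30 relative-pressure window. The isotherm points below are placeholders generated for the example, and the N2 cross-sectional area (0.162 nm²) is the standard assumed value; neither is taken from this study.

```python
import numpy as np

# Placeholder N2 isotherm at 77 K: relative pressure vs amount adsorbed (mmol/g).
p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
n_ads = np.array([8.85, 10.19, 11.13, 12.02, 12.94, 13.96])  # illustrative

# Linearized BET: (p/p0) / [n (1 - p/p0)] = 1/(nm*C) + [(C-1)/(nm*C)] (p/p0)
y = p_rel / (n_ads * (1.0 - p_rel))
slope, intercept = np.polyfit(p_rel, y, 1)
n_m = 1.0 / (slope + intercept)          # monolayer capacity, mmol/g

N_A = 6.022e23                           # Avogadro's number, 1/mol
sigma_N2 = 0.162e-18                     # N2 cross-sectional area, m^2
S_bet = n_m * 1e-3 * N_A * sigma_N2      # specific surface area, m^2/g
print(f"Monolayer capacity: {n_m:.2f} mmol/g, BET area: {S_bet:.0f} m2/g")
```

The slope and intercept of the linear fit give the monolayer capacity, which is converted to an area via the adsorbate cross-section; the same points would also serve for a Langmuir fit by linearizing the Langmuir isotherm instead.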
Pycnometry

Pycnometry measurements were made using an AccuPyc pycnometer (He) to determine the relative density and, by extension, the free volume present in the prepared TixUiO-66 PIM-1 membranes.

SEM

A Philips XL30 field emission scanning electron microscope (FESEM) with an accelerating voltage of 5 kV was used for imaging the cross-sectional surfaces of membrane samples (Figures S21-S25). Cross-section surfaces were prepared by fracturing membranes in liquid nitrogen and mounting them on SEM stubs using carbon tape, before sputter coating with platinum.

Membrane solubility coefficients were calculated from high-pressure sorption (Setaram PCT Pro) measurements. Dual-mode sorption parameters were calculated by curve fitting to equation 1, as reported. S7

Viscosity

Viscosity measurements were made using a SCHOTT AV350 viscometer (standard ASTM D445) with a 52610/I U-tube calibrated against a de-ionised water standard at 20 °C. Samples were prepared as ~0.2 g/mL PIM-1 membrane solutions with a TixUiO-66 loading of 5 wt.%, equivalent to the highest-performing membranes reported. Results are averaged from 10 duplicates, excluding outliers exceeding two standard deviations. Fresh samples were prepared without agitation, while aged samples were stirred for 24 h prior to testing to maximize polymer-MOF interaction. Units: centipoise, cP = 10−3 Pa·s; DI water standard (20 °C) = 1.0020 cP. Error is reported as one standard deviation of the final data.
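A minimal sketch of the dual-mode sorption fit referred to above, assuming "equation 1" is the standard dual-mode model C = kD·p + C'H·b·p/(1 + b·p) commonly used for glassy polymers such as PIM-1; both the model form and the data points here are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_mode(p, kD, CH, b):
    """Dual-mode sorption: Henry's-law dissolution plus Langmuir hole filling."""
    return kD * p + CH * b * p / (1.0 + b * p)

# Placeholder CO2 sorption data: pressure (atm) vs concentration (cm3 STP / cm3).
p = np.array([0.5, 1, 2, 4, 8, 12, 16, 20])
C = np.array([22, 37, 58, 83, 112, 134, 153, 171])  # illustrative

(kD, CH, b), _ = curve_fit(dual_mode, p, C, p0=[2.0, 80.0, 0.5])
print(f"kD = {kD:.2f}, C'H = {CH:.1f}, b = {b:.2f}")
```

The fitted Henry's-law coefficient kD, Langmuir capacity C'H, and affinity constant b together give the solubility coefficient S = C/p at any pressure, which is how solubility enters the permeability P = D·S decomposition used for membranes.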
Obstetric Outcomes after Perforation of Uterine Cavity

Abstract: Purpose: We aimed to evaluate the pregnancy characteristics and obstetric outcomes in patients after perforation of the uterus. Study design: A retrospective cohort study was conducted and included all patients who were diagnosed with uterine perforation and treated in a tertiary referral medical center between the years 1996 and 2018. Up to two deliveries after perforation were investigated. Results: During the study period, 51 women were diagnosed with uterine perforation during gynecological procedures, including intrauterine device (IUD) insertion. The mean age of patients at the time of diagnosis was 27.9 (±4.7) years. The majority, 76.5% (n = 39), experienced perforation during IUD insertion, and 23.5% (n = 12) of the patients experienced perforation during surgical procedures. Most of the patients were multiparous or grand multiparous, 45.8% (n = 22) and 39.6% (n = 19), respectively. An anteflexed uterus was found in 86.4% of the patients (n = 38). Five patients (9.8%) had pelvic abscesses after IUD insertion. A total of 50 patients had 71 deliveries subsequent to uterine perforation. One patient experienced intrauterine fetal death due to fetal malformations. One patient experienced uterine rupture. No other major obstetric complications were noted. Conclusions: Uterine perforation may be associated with adverse obstetric outcomes. The possibility of uterine rupture must be considered while managing the deliveries of patients after uterine perforation. Moreover, a larger cohort and further studies are needed to establish an association between uterine perforation and adverse outcomes of the subsequent deliveries.

Introduction

Perforation of the uterus is a complication that can result from any kind of uterine manipulation [1,2]. The incidence of perforation varies from 0.1 to 5%, depending on the procedure and the performer's skill level [3-6]. While these numbers are relatively small, the actual prevalence of perforation is thought to be much higher, as many perforations are unrecognized or underreported [7]. Typically, the damage occurs during dilatation of the cervix or the introduction of an operative instrument [8]. Common locations for uterine perforation are the uterine fundus, the uterine anterior wall, and the cervix [6,9]. Several risk factors have been identified for perforation during uterine procedures, including the following: a stenotic or scarred cervix (primigravida, cervix after a previous procedure or conization); altered position and direction of the uterus (retroflection, hyperanteflection, deformity after cesarean section, fibroids, or other uterine pathology); and reduced strength of the myometrium (pregnancy, multiparity, infection, the postpartum period, and lactation, especially for IUD insertion) [4,8-10]. Patient outcomes after uterine perforation are usually good, unless the complication is diagnosed late or there is intraabdominal organ damage [3]. The question arises, however, about the obstetric outcomes in these patients. Several case reports describe uterine rupture after perforation [11,12]. Some authors hypothesize that cases of uterine rupture in an unscarred uterus are due to undiagnosed perforation, a hypothesis supported by the fact that about 50% of patients with uterine rupture had a previous surgical intervention [13]. The aim of our study was to evaluate obstetric outcomes following uterine perforation.
Materials and Methods

A retrospective cohort study was conducted at Soroka University Medical Center (SUMC) on patients treated between the years 1996 and 2018. We included all patients with a confirmed diagnosis of uterine perforation who were treated in our hospital and had subsequent deliveries. Data including demographic characteristics, general health status, perforation management details, and surgical reports were collected from the patients' electronic medical records. Pregnancy and delivery characteristics and perinatal outcomes were gathered from the computerized obstetric database of the Obstetrics and Gynecology department. Up to two deliveries after perforation were included. Patients with missing data were excluded from the analysis. Informed consent was not obtained due to the retrospective study design; it was waived by the Institutional Review Board of Soroka University Medical Center (#SOR-0149-17, approved on 3 August 2017). Statistical analysis was performed with the SPSS package, version 20 (SPSS Inc, Chicago, IL, USA). Categorical variables were presented as percentages, and statistical significance was tested using the χ2 and Fisher's exact tests, as appropriate. Continuous variables were presented as mean and standard deviation, and Student's t-test was used for statistical analysis. The Institutional Review Board of Soroka University Medical Center approved the study, which was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments (#SOR-0149-17, approved on 3 August 2017). The study was designed according to the STROBE [14] statement checklist with items for cohort studies.

Results

During the study period, 51 women were identified with a diagnosis of uterine perforation and subsequent delivery. The mean age of patients at the time of diagnosis was 27.9 ± 4.7 years. The majority of the patients were multiparous or grand multiparous, 45.8% (n = 22) and 39.6% (n = 19), respectively. The demographic characteristics are presented in Table 1. The most common procedure causing the perforation was intrauterine device (IUD) insertion in outpatient clinics, accounting for 76.5% of the patients (n = 39); the remaining 23.5% (n = 12) experienced perforation during surgical procedures, predominantly during dilatation and curettage (Table 2). An anteflexed uterus was found in 86.4% of the patients (n = 38). The most frequent location of damage was the parametria (n = 17, 34%), probably after perforation of the cervix or uterine isthmus. Five patients (9.8%), who were referred for laparoscopy due to a lost IUD, were diagnosed with encapsulated pelvic abscesses. The abscess was asymptomatic or caused mild chronic abdominal pain; laparoscopy in all five patients revealed adhesions of the omentum and bowel around the region of the IUD in the abdomen. Background and uterine condition characteristics are presented in Table 3. The study group of 51 patients had 71 deliveries subsequent to the uterine perforation. The median time from perforation to first delivery was 36.50 (18.83-63.77) months. One patient experienced intrauterine fetal death due to fetal malformations. One patient experienced uterine rupture at 24 weeks' gestation, following fundal-posterior wall perforation in her previous pregnancy; the perforation had occurred during postpartum curettage for an adherent placenta.
Subsequent uterine rupture manifested with acute and significant abdominal pain at 24 weeks of gestation, and an urgent cesarean section was performed, during which a uterine defect was identified in the perforation region and sutured. This pregnancy resulted in neonatal death on day 3, due to prematurity complications. No other major complications were associated with any of the pregnancies. Most of the patients had a vaginal delivery, 84.5% (n = 60), with a mean gestational age of 38.29 ± 2.9 weeks. Placenta previa was diagnosed in one case and placental abruption in two cases; three patients had post-partum hemorrhage (PPH), and four patients underwent manual removal of the placenta (manualysis). Pregnancy course and obstetric outcomes are presented in Table 4.

[Table 4 fragment, flattened in extraction: 3270.9 ± 721.2 (likely birth weight, g); Apgar at 1 minute, 9 (9-9); Apgar at 5 minutes, 10 (10-10). Data are presented as number (percentage), mean ± standard deviation, and median (interquartile range). PROM, premature rupture of membranes.]

Discussion

We conducted a retrospective cohort study in a tertiary referral medical center and identified 51 patients who had deliveries following uterine perforation. Most of our patients were multiparous, a known risk factor for perforation [15]. The majority of our study population had an anteflexed uterus, in contrast to previous publications that described a retroverted uterus as a significant risk factor for perforation [16]. A possible explanation may be uterine hyperanteflexion in these patients, but unfortunately these data were not available. The prevalence of post-partum hemorrhage in our population was in line with that reported in previous studies [17]. We found an increased number of pathological placental conditions in our study group: manual removal of the placenta after delivery was documented in four patients (5.6%), while an adherent placenta complicates 1-3% of deliveries in the general population [18] and the reported incidence of manual removal of the placenta is 2.7% [19]. Placenta previa was also found in 1.4% of our patients, while the incidence in the general population is reported to be 0.3-0.5% [20]. Uterine rupture occurred in one patient, manifesting with abdominal pain in a non-laboring patient. Uterine rupture is a dramatic and rare complication, which mostly occurs after cesarean section. The incidence of uterine rupture after cesarean delivery is 5.3/10,000 births [21,22], whereas the incidence of uterine rupture in an unscarred uterus is estimated at 0.6/10,000 [23]. Higher rates of maternal and fetal mortality were found in cases of rupture of an unscarred uterus, possibly because this complication is unexpected [24]. Unrecognized uterine perforation from a previous uterine procedure may be the risk factor for uterine rupture in those cases [11-13,25]. In addition, a potential explanation for uterine rupture after previous perforation may be abnormal and poorly organized uterine activity, due to interruption of the circuit of normal muscular fibers. Our study has several limitations. This is a retrospective cohort study, limited by its sample size; the rare nature of the condition, specifically during the fertile years, together with underdiagnosis and underreporting, accounts for the small sample. There was also heterogeneity in perforation type and treatment. Our data suggest that previous uterine perforation may be followed by obstetric complications, although we cannot establish a significant correlation, due to the limitations of our study.
This supports the implementation of preventive measures during uterine procedures, such as cervical priming for all patients and the use of sonographic guidance, when appropriate [26]. Our findings emphasize the importance of a history of uterine manipulation or perforation in the management of a current pregnancy, and further studies are needed to establish appropriate recommendations. Conclusions Precautions should be taken during all intrauterine procedures, especially in multiparous women. Ultrasound guidance may be considered, depending on the circumstances. Uterine perforation may be associated with adverse obstetric outcomes, and the possibility of uterine rupture must be considered when managing the deliveries of patients with a previous uterine perforation. A larger cohort and further studies are needed to establish an association between uterine perforation and adverse outcomes of subsequent deliveries.
Development of flow control methods in free and impinging jets Submerged and impinging jets with harmonic perturbations added to the inlet velocity profile and with nozzle vibrations are simulated numerically at different Reynolds (Re) and Strouhal (St) numbers by solving the Navier–Stokes equations. The effects of Re, St and forcing amplitudes on flow behavior and jet splitting phenomena are studied. Introduction The study of jet flows is relevant to energy applications, such as flow control in burner nozzles, and to transport applications, where jets are used to reduce engine noise, aircraft drag and fuel consumption. The present computation of free and impinging round jets is performed with the OpenFOAM software using the rhoPimpleFoam solver. Two active flow control methods are explored: axial and helical harmonic oscillations (with frequencies 2f and f and amplitude A) of the inlet velocity distribution [1,2], u(x = 0, r < R) = u_p(r, θ, t) = U + A[sin(4πft) + sin(2πft + θ)·sin(2πr/R)], u(x = 0, r ≥ R) = 0, (1) and the transverse vibration of the nozzle (with A = 0, frequency f, transverse velocity v and amplitude Z), referred to below as forcing (2) [2], as well as their combinations. The axial oscillations promote the generation of ring vortices through the Kelvin–Helmholtz instability, and the helical oscillations direct the vortices away from the jet axis. Different Reynolds (Re = UD/ν) and Strouhal (St = fD/U) numbers are considered, where U is the mean inlet velocity, D = 2R is the nozzle diameter and ν is the molecular viscosity. Computation set-up To simulate a homogeneous submerged jet, the unsteady Navier–Stokes equations are solved over the wide range 50 ≤ Re ≤ 23,000. DNS is utilized at Re ≤ 3000, whereas LES with an eddy-viscosity subgrid-scale model and a subgrid-scale turbulent kinetic energy equation is used at Re > 3000. The jet enters the cylindrical or rectangular computational domain through a round orifice in the inlet wall. A top-hat inlet velocity profile, u(x = 0, t) = U, is chosen. Near the orifice a uniform fine mesh is arranged, which coarsens in the transverse and axial directions away from the nozzle. For the impinging jet, the round nozzle is replaced by a pipe segment, with the mesh resolution increased near the pipe wall as well as near the impingement wall. The domain sizes and mesh resolutions vary from case to case and were chosen based on a series of preliminary runs. Further details of the computational set-up are given in [2]. Results for free jets As in [3,4], jet branching is obtained over wide ranges of the oscillation amplitudes of the inlet velocity profile (0.01 ≤ A/U ≤ 0.2) and of the nozzle (0.05 ≤ Z/D ≤ 0.5), for Reynolds numbers Re > 50 and various Strouhal numbers St = fD/U (Figure 1). We note that computational results for excited jet splitting at Re < 500 have not previously been reported; only measurements exist [3], in which jet bifurcation caused by acoustic forcing was observed at Re ≥ 20. The nozzle vibration (2) has an effect similar to that of the transverse acoustic field in [3,4]. It was also found that at low Reynolds number (Re ~ 50) the perturbations introduced at the inlet section quickly decay downstream of the nozzle exit, so even high-amplitude nozzle vibration is not enough to split a round jet (Figure 2). The mechanism of vortex-structure interaction leading to jet splitting was studied in [2,4]. Estimates of the expansion angle in the bifurcation plane showed that it grows with Re [2].
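As a minimal sketch of the forced inlet condition, the following Python code evaluates the perturbed velocity profile of Eq. (1) on a set of radial positions; the parameter values (U, A, f, R) are illustrative placeholders, not those of any particular run in the paper.

```python
import numpy as np

def inlet_velocity(r, theta, t, U=1.0, A=0.1, f=0.5, R=0.5):
    """Forced inlet axial velocity of Eq. (1): an axial perturbation at
    frequency 2f plus a helical perturbation at frequency f, superposed
    on a top-hat profile; zero outside the nozzle radius R."""
    u = U + A * (np.sin(4 * np.pi * f * t)
                 + np.sin(2 * np.pi * f * t + theta) * np.sin(2 * np.pi * r / R))
    return np.where(r < R, u, 0.0)

# Sample the profile across the nozzle exit at one instant in time.
r = np.linspace(0.0, 0.6, 7)      # radial positions (same length unit as R)
theta = np.zeros_like(r)          # azimuthal angle of each sample point
print(inlet_velocity(r, theta, t=0.25))
```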
Calculations of free jets at 50 ≤ Re ≤ 3000 demonstrate that, in order to obtain and enhance the splitting effect, optimization of the forcing parameters (type, frequency and amplitude of the disturbances) is required, and the transverse mechanical oscillation (2) of the nozzle turns out to be a more effective method of flow control than the axial-helical excitation (1) of the inlet velocity profile. The mixing enhancement produced by jet excitation, which is important for applications, can be parameterized by the typical jet thickness d or the expansion angle α in the bifurcation plane, as well as by the centerline values of the mean velocity or the mean scalar [1,4–6]. Mixing efficiency increases with stronger jet expansion (larger d and α), which results in a larger spreading area and smaller centerline values of ⟨u⟩ and ⟨c⟩. Figure 3 demonstrates that, for the same nozzle vibration parameters (Z = 0.5D, St = 0.1), mixing is more efficient at the higher Reynolds number Re = 250 than at Re = 100. The stronger molecular-viscosity effects at Re = 100 lead to a faster decay of the perturbations downstream of the nozzle exit, giving a larger centerline mean velocity and a smaller expansion angle: α = 39° at Re = 100 versus α = 57° at Re = 250, estimated from the typical jet width d(x) in Figure 3(b) over 6 ≤ x/D ≤ 9, where d(x) is close to a linear function and where round-jet bifurcation usually occurs [1,2,4]. This trend is the same as that found earlier [2] for the wider range 50 ≤ Re < 5000. The larger values of the angle α compared with those reported in [2] for Re < 500 are related to the much larger excitation magnitude (Z = 0.5D) chosen in the present simulations at low Reynolds numbers. Results for impinging jets Effective flow and heat transfer control is also possible using passive methods, for example by introducing grids and screens near the nozzle exit [7], and by combining active and passive methods of forcing. The present LES calculations reproduce the conditions of a number of physical experiments [7–9] on a non-isothermal turbulent impinging jet flowing normally onto a plate placed at a distance 2 ≤ x/D ≤ 10 downstream of the inlet. Preliminary results on the dynamics of a passive scalar (temperature or concentration) show (Figure 4) the jet splitting into several branches, with more efficient mixing and intensification of heat and mass transfer. Conclusions The study of active flow control methods reveals jet splitting over wide ranges of Strouhal numbers and of helical and flapping oscillation amplitudes at Reynolds numbers Re > 50. The effect of transverse nozzle vibration is similar to that of acoustic forcing. Both approaches are shown to be effective for flow control, and lower-Re cases have smaller expansion and bifurcation angles. The results for passive scalar dynamics in the laterally excited turbulent impinging jet show more efficient mixing and intensified scalar transfer. This can be further used for heat transfer enhancement in cooling devices.
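To make the angle estimate described above concrete, the short sketch below fits a straight line to jet-width data d(x) over 6 ≤ x/D ≤ 9 (the range where d(x) is approximately linear) and converts the slope into an expansion angle. The data values are hypothetical, and the conversion α = 2·arctan(s/2) (each half of the jet spreading at half the full-width rate) is an assumed convention, not one stated in the paper.

```python
import numpy as np

# Hypothetical full-width jet thickness d(x)/D sampled over 6 <= x/D <= 9,
# where d(x) is close to linear (illustrative values only).
x_over_D = np.array([6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0])
d_over_D = np.array([4.1, 4.6, 5.2, 5.7, 6.3, 6.8, 7.4])

# Least-squares linear fit d = s*x + b; the slope s is the spreading rate.
s, b = np.polyfit(x_over_D, d_over_D, 1)

# Assumed convention: each half of the jet spreads at rate s/2, so the
# full expansion angle is alpha = 2*arctan(s/2).
alpha_deg = np.degrees(2.0 * np.arctan(s / 2.0))
print(f"spreading rate s = {s:.2f}, expansion angle alpha = {alpha_deg:.0f} deg")
```

With these illustrative numbers the fitted slope is about 1.1, giving α ≈ 57°, comparable in magnitude to the Re = 250 value quoted above.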
Drug Traceability in the Healthcare Supply Chain Using a Blockchain-Based Approach The supply chain for healthcare is an intricate web of interconnected businesses that includes manufacturers, pharmacies, hospitals, patients and suppliers of raw materials. Due to a number of issues, such as centralized control, conflicting stakeholder conduct and a lack of information, tracking supply across this network is difficult. In addition to creating inefficiencies, like those brought to light by the COVID-19 pandemic [1], this complexity makes it harder to mitigate the problem of counterfeit pharmaceuticals, because such products can quickly infiltrate the healthcare supply chain. Drugs that are intentionally manufactured fraudulently and/or mislabeled with regard to identity and/or source in order to pass for authentic goods are known as counterfeit drugs [2,3]. These may include products with no active pharmaceutical ingredient (API), the wrong dosage, the wrong type of API, contaminants, or repackaged goods that have expired. Certain fake drugs may also be manufactured under unsatisfactory conditions and with improper formulations [4]. INTRODUCTION The Health Research Funding Organisation estimates that up to 30% of medications supplied in developing nations are fake [5,6]. Furthermore, a recent study conducted by the World Health Organisation (WHO) revealed that fake pharmaceuticals are a leading cause of death in underdeveloped nations, with children being the majority of the victims [7,8]. Beyond their detrimental effects on human life, counterfeit pharmaceuticals cause enormous financial losses to the pharmaceutical sector: it is estimated that the US pharmaceutical business loses $200 billion annually as a result of counterfeit medications [9,10]. Figure 1 depicts a typical medicine supply chain distribution procedure. An API supplier is responsible for providing the raw materials needed to make medications that have been approved by regulatory bodies such as the US Food and Drug Administration (USFDA). The manufacturer sends the medications to a repackager or assembles them into lots. Upon receiving lots of the product, the primary distributor is in charge of distributing them to pharmacies in accordance with market demand. If there is a surplus of lots, secondary distributors may assist with the transfer of lots to pharmacies. Lastly, a pharmacy dispenses the medication to patients [11], usually in accordance with a doctor's prescription.
Figure 1: Stakeholders in the drug supply chain and their interactions. Drugs are typically transferred along the supply chain by independent logistics service companies such as UPS or FedEx, and distributors occasionally use their own fleets of vehicles to deliver the goods. The intricate framework of the healthcare supply chain is the main reason counterfeit medications find their way into the end-user marketplace: because of this intricacy, medication can pass through the distribution mechanism with little to no information trail or substantiating documentation [12]. Therefore, the key to preventing counterfeit goods in the healthcare supply chain is constant product monitoring, efficient management and tracking. Blockchain technology has brought about a new paradigm for developing applications, based largely on the data structure's successful implementation in the Bitcoin application. The basic idea of the blockchain data structure is comparable to that of a linked list; all nodes in the network share it, and each node maintains a local copy of every block (those linked to the longest chain) beginning with the genesis block [15]. A wide range of real-world applications has been developed recently, including applications for the Internet of Things [16], e-Government [17] and e-document management [18]. These applications take advantage of blockchain technology thanks to the self-validating cryptographic structure among transactions (via hashes) and the public availability of a distributed ledger of transaction records in a peer-to-peer network. One of the early attempts at blockchain-based pharmaceutical supply chain tracing is presented in [20]. In contrast, our solution takes a comprehensive approach to the pharmaceutical supply chain and provides an end-to-end solution for drug traceability, whereas [20] concentrated on only a portion of these issues. First, whereas [20] includes only the supplier, manufacturer and wholesaler as stakeholders, our method identifies and involves all significant players in the drug supply chain, including the FDA, supplier, manufacturer, distributor, pharmacy and patient. Second, in [20], unlike in a real drug supply chain, the pharmacies (chemists) are represented as an external entity. Third, we minimize human intervention, and hence undesirable delays, by using smart contract technology to offer real-time, seamless traceability with push notifications. To be more precise, every drug lot is given a distinct smart contract that, in the case of a change in ownership, creates an event and sends the DApp user a list of those events. By contrast, the smart contracts of [20] are configured for particular roles, such as manufacturer, distributor and supplier, necessitating manual confirmation from each party of which medications are received; this method may introduce errors and delays into the immutable data kept on the ledger. Lastly, to assess the effectiveness of the suggested solution, we carried out a cost and security analysis. Within the pharmaceutical industry, numerous attempts have been undertaken to address the well-established difficulty of achieving traceability to mitigate counterfeit pharmaceuticals. Nonetheless, a thorough examination of the literature reveals a number of shortcomings and opportunities for a comprehensive implementation of blockchain technology for medication
traceability. The main contributions of this paper in this context can be summarized as follows. We design a smart contract that can manage the transactions of the different stakeholders in the pharmaceutical supply chain. We introduce, implement and evaluate the smart contract that embodies the fundamentals of our proposed approach. We analyze cost and security to assess how well the suggested blockchain-based solution performs. RELATED WORK We present a critical overview of existing efforts aimed at addressing the issue of product traceability in the healthcare supply chain, emphasizing solutions proposed for anti-counterfeiting. We have included both blockchain- and non-blockchain-based approaches and categorized them accordingly. A. Usual Methods To Ensure Drug Traceability Traceability can be described as the capacity to obtain any or all information about an object during its lifecycle through documented identifications. Any traceable object in the supply chain is referred to as a Traceable Resource Unit (TRU). Traceability has two goals: monitoring the transaction history and the TRU's current location. In this setting, a traceability system needs to be able to obtain data about the medication (the TRU) in the supply chain in order to record it, identify it and distinguish it from other TRUs. This is done by employing several mechanisms: identification approaches, a mechanism for documenting the connections between TRUs, and a mechanism for recording the attributes of the TRUs [21]. Traditional supply chain management solutions have employed RFID tags and barcodes for identification, Wireless Sensor Networks (WSN) for data capture, and Electronic Product Codes (EPC) for product identification, capture and sharing to enable the tracking of goods through various stages [22]. In this vein, GS1-standard barcodes with unique serialized product identifiers, lot production dates and expiration dates are used by Smart-Track [23]. The GS1 barcode's data are collected during a number of supply chain operations and used to keep an ongoing record of ownership transfers. As each stakeholder records possession of the product, an end user (patient) can use a smartphone app to confirm authenticity through a central data repository maintained by the Global Data Synchronization Network (GDSN). Pharmacy and hospital units can scan the barcode at the warehouse in the downstream supply chain to confirm the product's characteristics. Similarly, the Data-Matrix tracking system [24] generates a Data-Matrix for every medication that contains the following information: the product and manufacturer IDs, the package's unique ID, the authentication code and any optional metadata. This enables the patient to use the attached Data-Matrix to confirm the drug's provenance. B.
DRUG TRACEABILITY SOLUTIONS BASED ON BLOCKCHAIN Conventional approaches to pharmaceutical supply chain traceability are usually centralized and lack openness among chain partners, allowing the central authority to change information without informing other stakeholders. A blockchain-based system, on the other hand, provides provenance, data security, transparency, immutability and authenticated transaction records. Blockchain is an immutable, decentralized shared ledger that can be used in many types of transaction-based business environments. Although the terms transparency and traceability are frequently used synonymously, they signify quite different ideas. Transparency typically describes high-level supply chain information (for instance, the names of suppliers, the locations of facilities and the components of the product), with the goal of mapping the entire supply chain. Traceability, by contrast, is tied to specifics: it involves picking a particular component to track, agreeing on mutual protocols for corresponding with partners, putting procedures in place to generate and collect precise data, choosing a platform on which to store traceability data, and determining how to distribute data on the platform. Although the two terms refer to distinct ideas, they are mutually dependent, since a thorough grasp of the supply chain is necessary to obtain detailed information. To this end, several current methods make use of the cryptographic features of blockchain to create a verifiable, decentralized track-and-trace system for prescription medications. Without providing any technical details or a particular application, Mettler [32] discussed using a blockchain-based solution to address a variety of healthcare-related concerns. The benefits of blockchain technology in the pharmaceutical supply chain were discussed by Kurki [33], but, much as in [32], only a conceptual discussion was offered. Muniandy and Ong Tze Ern [20] suggested utilizing Ethereum for a traceability system in order to combat counterfeiting. The suggested solution makes use of smart contracts, but it is neither implemented nor evaluated, which makes it difficult to fully assess its contribution. A drug traceability system called DrugLedger was proposed by Huang et al. [34]. It establishes both the authenticity and the privacy of stakeholders' traceability information without compromising the system's robustness, and it reflects the actual drug transaction logic in the supply chain. DrugLedger uses an extended UTXO data structure, particularly its package, repackage and unpackage operations, to complete its workflow. However, the low state-space utilization, high storage costs and limited programmability of UTXO data structures have raised concerns about their application, as evidenced by recent research such as [35]. A Hyperledger-based approach for medication traceability in the pharmaceutical supply chain was presented by Faisal et al.
[36]. The authors indicate that the suggested system performs better in terms of throughput and minimizes delay with reduced resource consumption; however, their solution was evaluated only in a small-scale network and was not thoroughly tested. This endeavour also brought to light the difficulty of achieving scalable solutions with blockchain technology, a topic that has recently drawn a lot of attention in publications such as [22]. Similar reservations apply to the strategy employed by Hulseapple [38], who created a private blockchain in tandem with Bitcoin and uses it as a ledger to hash certain data in order to safeguard chain transactions. On their blockchain, each item has a permanent record that makes it traceable. C. DRUG TRACEABILITY SYSTEM BASED ON BLOCKCHAIN FOR PHARMACEUTICAL SUPPLY CHAINS A high-level architecture for the suggested medication traceability system, together with the stakeholders and how they interact with the smart contract, is shown in Figure 2. The stakeholders are envisaged to access the smart contract, the decentralized storage system and the on-chain resources through software devices with a front-end layer, denoted a DApp (Decentralized Application), which is connected to these components by an application program interface (API) such as Infura, Web3 or JSON-RPC. FIGURE 2. A high-level architecture for the proposed blockchain-based system for the pharmaceutical supply chain. The stakeholders interact with the smart contract to initiate pre-authorized function calls, and with the decentralized storage system to access data files; their interaction with the on-chain resources serves to obtain information such as logs, IPFS hashes and transactions. More details on the system components are presented below. Stakeholders include regulatory organizations such as the FDA, producers, distributors, pharmacies and patients. Based on their position in the supply chain, these stakeholders participate in the smart contract and are given specialized roles. Additionally, in order to track supply chain transactions, they are granted access to on-chain resources, including history and log information. They also have permission to view data kept on the IPFS, including informational pamphlets and photos of drug lots. The IPFS (decentralized storage system) offers an inexpensive off-chain storage solution for data related to supply chain transactions, ensuring the integrity, dependability and accessibility of the stored data. Every file uploaded to the server generates a unique hash, which is used to maintain data integrity; the hashes of the various uploaded files are then stored on the blockchain and accessed via the smart contract. Any modification to an uploaded file is reflected in the corresponding hash, as sketched below.
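This content-addressing behavior can be illustrated with a minimal Python sketch. SHA-256 from the standard library is used here as a simplified stand-in for an IPFS content identifier (real IPFS uses a multihash/CID format), and the file contents and record keys are hypothetical.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of file contents: a simplified stand-in for the
    content identifier that IPFS would return for an uploaded file."""
    return hashlib.sha256(data).hexdigest()

# A drug-lot photo is uploaded and its hash recorded on-chain
# (a plain dict stands in for the smart contract's storage).
original = b"photo bytes of drug lot LOT-0042"
on_chain = {"LOT-0042/photo": content_hash(original)}

# Later, any stakeholder can verify the off-chain file against the record.
retrieved = b"photo bytes of drug lot LOT-0042"
assert content_hash(retrieved) == on_chain["LOT-0042/photo"]

# Any modification to the file yields a different hash, exposing tampering.
tampered = b"photo bytes of drug lot LOT-0042 (edited)"
print(content_hash(tampered) == on_chain["LOT-0042/photo"])  # False
```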
The Ethereum smart contract manages the implementation of the supply chain. The smart contract is vital to the process: it traces the history of transactions and handles the hashes from the decentralized storage server that give participants access to supply chain data. In addition, the smart contract defines the roles of the various supply chain players, and modifiers are used to grant authorized access to its functions. The logs and events produced by the smart contract, which enable track and trace, are kept in on-chain resources. Additionally, a decentralized registration and identity system is employed as an on-chain resource to link each participant's Ethereum address to a human-readable name. Since the DApp user only needs the proposed solution to confirm that the drug under consideration is not counterfeit and came from a reputable manufacturer, no real-time tracking is necessary: the system components are intended to work together seamlessly to trace the history of the drug in question and verify its authenticity. Various methods can nevertheless be used to track the real-time position of a drug lot. IoT-enabled smart containers, for instance, have sensors that follow and continually monitor the TRU from its origin to its destination; such an IoT sensor comprises a temperature sensor and a Global Positioning System (GPS) receiver to determine the TRU's location. Figure 3 illustrates the interaction among the different participants of the supply chain within the proposed system, which can be loosely divided into the three phases explained below. Manufacturing: To begin manufacturing a drug lot, a manufacturer first sends an approval request to the FDA. As soon as the FDA grants approval, the producer starts the manufacturing process and everyone involved is informed of the event. The manufacturer uploads photographs of the drug lot to the IPFS, which returns a hash to the smart contract so that authorized parties can view the photographs later. The drug lot is then shipped to the distributor, concluding the production phase. Distribution: The distributor packages the drug lot and uploads a picture of it to the IPFS, which provides a hash to the smart contract. The distribution phase ends when the drug lot packages are delivered to pharmacies. Purchase/Usage: The final stage in the sequence diagram concerns the contact between the pharmacist and the patient. The pharmacy sells drug lot boxes and notifies all supply chain participants of each sale. A picture of the sold drug package is then posted to the IPFS, which sends a hash to the smart contract. IMPLEMENTATION OF PROPOSED TRACEABILITY SYSTEM The proposed solution is developed on the Ethereum blockchain platform. Since Ethereum is a permissionless public blockchain, everyone can access it. The Remix IDE is used to compile and test the smart contract, which is written in Solidity. Remix is an online, web-based development environment that lets users create and run smart contract scripts as well as test and debug Solidity code. The entire code is publicly accessible.
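The three phases just described form an event-driven lifecycle for each drug lot. The following Python sketch is a hypothetical, off-chain model of the per-lot event log (the actual system uses a Solidity smart contract per lot); the stakeholder names and event strings are placeholders.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DrugLot:
    """Toy off-chain model of a per-lot contract and its event log."""
    lot_id: str
    owner: str                      # current custodian of the lot
    events: List[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        # Stands in for an event emitted by the lot's smart contract.
        self.events.append(f"{self.lot_id}: {event}")

    def transfer(self, new_owner: str) -> None:
        self.record(f"ownership changed: {self.owner} -> {new_owner}")
        self.owner = new_owner

# Manufacturing -> distribution -> purchase, mirroring the phases above.
lot = DrugLot(lot_id="LOT-0042", owner="Manufacturer")
lot.record("FDA approval granted")
lot.record("manufactured; lot photos hashed to IPFS")
lot.transfer("Distributor")
lot.record("packaged; package photo hashed to IPFS")
lot.transfer("Pharmacy")
lot.record("sold to patient; sale photo hashed to IPFS")

for event in lot.events:            # the DApp would display this history
    print(event)
```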
CONCLUSION In this paper, we investigated the challenge of drug traceability within pharmaceutical supply chains, highlighting its significance especially for protection against counterfeit drugs. We developed and evaluated a blockchain-based solution for the pharmaceutical supply chain to track and trace drugs in a decentralized manner. Specifically, our proposed solution leverages the cryptographic fundamentals underlying blockchain technology to achieve tamper-proof logs of events within the supply chain, and utilizes smart contracts on the Ethereum blockchain to achieve automated recording of events that are accessible to all participating stakeholders. We demonstrated that our proposed solution is cost-efficient in terms of the amount of gas spent in executing the different functions triggered within the smart contract. Moreover, the conducted security analysis showed that our proposed solution achieves protection against malicious attempts targeting the integrity, availability and non-repudiation of transaction data, which is critical in complex multi-party settings such as the pharmaceutical supply chain. As future work, we will continue our efforts to enhance the efficiency of pharmaceutical supply chains and plan to extend the proposed system to achieve end-to-end transparency and verifiability of drug use. Figure 4: System architecture modules (pharmacy seller).
Embedding some bordered Riemann surfaces in the affine plane We study the existence of proper holomorphic embeddings of bordered Riemann surfaces into the affine plane C^2. Denote by M(R) the moduli space consisting of all equivalence classes of complex structures J on a given smooth oriented bordered surface R. We introduce a class F(R) in M(R) with the following properties: (1) F(R) is nonempty and open (in a natural topology on M(R)); (2) the interior of any Riemann surface (R, J) in the class F(R) admits a proper holomorphic embedding in C^2; (3) if R is a finitely connected planar domain then F(R) = M(R); (4) each hyperelliptic bordered Riemann surface (R, J) belongs to the class F(R) and hence admits a proper holomorphic embedding in C^2. Part (3) above is equivalent to the theorem of Globevnik and Stensønes (Holomorphic embeddings of planar domains into C^2, Math. Ann. 303, 579–597, 1995). Our approach builds upon the earlier work of Černe and Globevnik (On holomorphic embedding of planar domains into C^2, J. d'Analyse Math. 8, 269–282, 2000). 1.1 Theorem. On each bordered surface R there exists a complex structure such that the open Riemann surface R\bR admits a proper holomorphic embedding in C^2. We give an elementary proof of theorem 1.1 in section 2. In fact we will show that there is a nonempty open set of non-equivalent complex structures on R for which the interior of R embeds into C^2 (theorem 1.5). The precise regularity of R up to the boundary is not important, since R is biholomorphically equivalent to a domain bounded by m ≥ 1 real-analytic curves in a compact Riemann surface R̃. The simplest way to obtain such an R̃ is to fill each hole of R by attaching a disc D_j, identifying bD_j with a boundary curve C_j ⊂ bR and extending the complex structure across C_j by reflection. Another possibility is to embed R in the (Schottky) double R̂, obtained by gluing two copies of R (with the opposite orientation) along bR. The genus of R̂ equals 2g_R + m − 1. The details of this 'doubling construction' can be found in [BS], [Spr, p. 217] or [SS]. Hence we shall assume from now on that R is smooth up to the boundary. We shall now give a more precise embedding theorem. For each k ≥ 0 we denote by A^k(R) the algebra of all C^k functions on R which are holomorphic in the interior R\bR. A nonconstant function f ∈ A(R) satisfying |f| = 1 on bR is called an inner function on R. The restriction of an inner function to the interior is a proper holomorphic map onto the disc U; conversely, any proper holomorphic map f: R\bR → U extends to an inner function on R. There is an integer d = deg(f) ∈ ℕ, called the degree (or multiplicity) of f, such that for all except finitely many points z ∈ U the fiber R_z = f^{−1}(z) consists of d distinct points, while the exceptional fibers consist of fewer than d points. Definition 1. A bordered Riemann surface R of genus g_R and with m boundary components is said to be of class F if it admits an injective immersion F = (f, g): R → Ū × C which is holomorphic in the interior and such that f is an inner function with deg(f) ≥ 2g_R + m − 1. Clearly this property is biholomorphically invariant. The reason for introducing this class is the following result, which is proved in section 3. 1.2 Theorem. If R is a bordered Riemann surface of class F then R\bR admits a proper holomorphic embedding in C^2. Examples: 1. On each smoothly bounded domain Ω ⊂⊂ C with m boundary components there exists an inner function f with deg(f) = m [Ahl].
The map F(x) = (f(x), x) ∈ C^2 for x ∈ Ω satisfies the hypothesis of theorem 1.2, and hence Ω embeds in C^2. This is the theorem of Globevnik and Stensønes [GS]. 2. A compact Riemann surface is called hyperelliptic if it is the normalization of a complex curve in CP^2 given by w^2 = ∏_{j=1}^{k} (z − z_j) for some choice of points z_1, …, z_k ∈ C [FK]. We shall call a bordered Riemann surface R hyperelliptic if its double R̂ is hyperelliptic. Such an R has either one or two boundary components, and it admits a pair of inner functions (f, g) which embed R in the closed polydisc Ū^2 such that bR is mapped to the torus (bU)^2; moreover, one of the two functions has degree 2g_R + m and the other has degree 2 (see [Ru1] and sect. 2 in [Gou]). Thus R is of class F and we get 1.3 Corollary. If R is a hyperelliptic bordered Riemann surface then R\bR admits a proper holomorphic embedding in C^2. In particular, each torus with one hole embeds properly holomorphically into C^2. Indeed, it is shown in [Gou] that the double R̂ of a hyperelliptic bordered Riemann surface R can be represented as a curve in CP^2 given by the equation y^2 = ∏_{j=1}^{ĝ+1} (x − α_j)(1 − α_j x) (1.1) for some choice of distinct points α_j ∈ U (1 ≤ j ≤ ĝ + 1), where ĝ = 2g_R + m − 1 is the genus of R̂, such that R = {(x, y) ∈ R̂: |x| ≤ 1}. The pair of inner functions on R given by f = y/∏_{j=1}^{ĝ+1} (1 − α_j x) and g = x provides an embedding F = (f, g): R → Ū^2. Clearly g has multiplicity 2 on R. From the relation f^2 = ∏_{j=1}^{ĝ+1} (g − α_j)/(1 − α_j g), which follows from (1.1), we see that f has multiplicity ĝ + 1 = 2g_R + m, and hence theorem 1.2 applies; indeed, each factor (g − α_j)/(1 − α_j g) has degree one in g and g has degree 2 on R, so f^2 has degree 2(ĝ + 1) and therefore deg(f) = ĝ + 1. Sikorav [Sik] gave a slightly different proof of corollary 1.3 for tori with one hole (unpublished); these are all hyperelliptic. Sketch of proof of theorem 1.2. Let P ⊂⊂ C^2 be a polydisc. According to [Glo] (see also [ČG] and [Stn]) there exist Fatou–Bieberbach domains Ω ⊂ C^2 such that bΩ ∩ P is an arbitrarily small perturbation of the cylinder (bU × C) ∩ P (proposition 2.1 below). Such domains Ω can be constructed using sequences of compositions of shears in the coordinate directions on C^2. Let φ: Ω → C^2 be a biholomorphic map onto C^2. If F = (f, g): R → Ū × C is as in theorem 1.2, we choose the polydisc P large enough to contain F(R) and solve a Riemann–Hilbert boundary value problem to find a new holomorphic embedding F̃ = (f̃, g̃): R → Ω̄ ∩ P such that F̃(bR) ⊂ bΩ. The map G = φ ∘ F̃: R\bR → C^2 is then a proper holomorphic embedding. The details are carried out in section 3. This approach was used in [ČG] for finitely connected planar domains to provide a different proof of the embedding theorem of Globevnik and Stensønes [GS]. The proof of theorem 1.1 (section 2) is similar but more elementary. ♠ The proof of theorem 1.1 shows that each bordered surface R carries a complex structure J such that (R, J) is of class F. We will show that the set of such complex structures on R is open. To be precise, fix a number α ∈ (0, 1), denote by End^α_ℝ(TR) the set of all endomorphisms of TR which are Hölder continuous of class C^α, endowed with the C^α topology, and let J^α_R = {J ∈ End^α_ℝ(TR): J^2 = −Id} denote the set of complex structures of class C^α on R. 1.4 Theorem. Let R be a smooth bordered surface and 0 < α < 1. The set F^α_R, consisting of all J ∈ J^α_R such that the Riemann surface (R, J) is of class F, is open in J^α_R. Theorem 1.4 is proved in section 4. The main point is that inner functions on (R, J) with multiplicity at least 2g_R + m − 1 are stable under small perturbations of the complex structure J (proposition 4.1).
For each α ∈ (0, 1) we can realize the moduli space M(R) as the quotient M(R) = J α R /∼ where J 0 , J 1 ∈ J α R satisfy J 0 ∼ J 1 if and only if there exists a diffeomorphism φ: R → R of class C 1,α with dφ • J 0 = J 1 • dφ. Let π: J α R → M(R) denote the quotient projection. We endow M(R) with the quotient topology. The set F (R) = π(F α R ) ⊂ M(R) consists of moduli of Riemann surface structures of class F ; this makes sense since the property F is biholomorphically invariant and therefore F α R = π −1 (F (R)). We can now summarize the above results as follows. The last statement above is the theorem of [GS]. The main question is whether F (R) = M(R) for every bordered surface R. The discussion in section 2 seems to indicate a negative answer; see proposition 2.2 and the remark following its proof. Comments regarding class F . It is proved in [Ahl,] that on every bordered Riemann surface R of genus g R with m boundary components there is an inner function f with multiplicity 2g R +m (although the so-called Ahlfors functions may have smaller multiplicity). A generic choice of g ∈ A 1 (R) gives an immersion F = (f, g): R → U × C with at most finitely many double points (normal crossings). The main difficulty is to find g such that F = (f, g) is injective on R. We do not know whether such g always exists as Oka's principle does not apply in this situation (proposition 2.2). ♠ It would be of interest to relax the condition in theorem 1.2 that one of the components be an inner function. In this direction we pose the following Problem: Let R be a bordered Riemann surface and let F = (f, g): R → C 2 be a holomorphic embedding whose image F (R) is polynomially convex in C 2 . Does R embed (properly holomorphically) in C 2 ? A possible approach would be to use sequences of automorphisms of C 2 to carry the boundary points of F (bR) towards infinity. The problem of holomorphic embeddability of a bordered Riemann surface R in C 2 is related to the question whether certain algebras of holomorphic functions on R are doubly generated. If F = (f, g): R → C 2 is an A kembedding whose image F (R) is polynomially convex then f and g generate a dense subalgebra of A k (R); in such case we say that the algebra is doubly generated. Conversely, if f and g generate a dense subalgebra in A k (R) then F = (f, g): R → C 2 is an injective immersion (not necessarily proper). The question whether A(R) is always doubly generated is to our knowledge still open. In 1978 Tsanov [Tsa] proved that for any bordered surface R there is a non-empty open set in the Teichmüller space T (R) (or in the reduced Teichmüller space T # (R)) consisting of Riemann surfaces for which the algebra A 0 (R) is doubly generated. In view of the above remark this also follows from theorem 1.5. Higher dimensional analogues of open Riemann surfaces are Stein manifolds. A complex manifold is Stein if it admits a proper holomorphic embedding in some complex Euclidean space C N [GR, Hör]. It is known that every Stein manifold of complex dimension n ≥ 2 admits a proper holomorphic embedding into C N with N = [3n/2] + 1 and this N is in general the smallest possible [EG, Sch]. For n = 1 this would give N = 2; however, the proof, which is based on the 'removal of singularities' method, does not apply for n = 1 since its main ingredient breaks down. 
This crucial ingredient is the homotopy principle, also called the Oka-Grauert principle, for sections of holomorphic vector bundles over Stein manifolds which avoid certain complex analytic subvariety of the total space [Gra,Gro2,FP1,FP2,FP3]. When n = 1, the subset to be avoided is a complex hypersurface and the Oka-Grauert principle fails in general (proposition 2.2). Acknowledgements. We wish to thank Josip Globevnik and Berit Stensønes for stimulating discussions on this topic. This research was supported in part by the Ministry of Science of the Republic of Slovenia. In this section we outline an approach to construct embeddings of bordered Riemann surfaces R in the tube U ×C by the 'elimination of singularities' method which has been used successfuly in higher dimensions [EG, Sch]. Although we cannot prove the existence of such an embedding for every complex structure on a given surface R, we obtain an elementary proof of theorem 1.1 based on the following result of Globevnik [Glo]. We quote the version proved in [ČG]. As before, U denotes the unit disc in C and rU = {|z| < r}. We denote the coordinates on C 2 by (z, w). Fix r > 0 and let P = (2U ) × (rU ) ⊂ C 2 . Remark. In fact there exist Fatou-Bieberbach domains Ω ⊂ C 2 with smooth boundary such that bΩ ∩ P is a small perturbation of (bU × C) ∩ P (Stensønes [Stn]; see also Globevnik [Glo] for the C 1 version). The weaker result quoted above will suffice for our purposes. Proof of theorem 1.1. We may assume that R is a compact domain with smooth real-analytic boundary in a Riemann surface R (sect. 1). We denote by O(R) the algebra of functions f holomorphic in a neighborhood V f of R in R. By Ahlfors [Ahl] there is an inner function f on R. Let Z = {z 1 , z 2 , . . . , z p } ⊂ U be the set of critical values of f . By the Hopf lemma we have df x = 0 for x ∈ bR and hence Z is contained in the open disc U . Choose a function g 1 ∈ O(R) such that (a) g 1 separates points on the (finitely many) fibers R z for z ∈ Z, and Clearly g 1 will separate points on all except perhaps finitely many fibers R z (z ∈ U ). Condition (b) insures that F 1 = (f, g 1 ): R → U × C is an immersion. A generic choice of g 1 also insures that F 1 has only finitely many double points in R and no double point on bR. Now choose g 2 ∈ O(R) which vanishes to second order at each point of f −1 (Z) and such that the pair (g 1 , g 2 ) separates points on all fibers R z for z ∈ U ; clearly such g 2 exists since we must satisfy the separation condition only at finitely many points. We wish to find g ∈ O(R) such that F = (f, g): R → U × C is an embedding. As in [EG] and [Sch] we seek g in the form where α: U → C is a holomorphic function to be selected. Since g 2 vanishes to second order at each point Our goal is to choose α such that F is injective globally on R. To formulate the relevant condition on α we fix a point z ∈ U and write f −1 (z) = {x 1 , . . . , x d } (distinct points!), where d = deg(f ) for all except finitely many z ∈ U . Denote by Σ z ⊂ C the (finite) set of solutions of the equations Equivalently, a number a ∈ C belongs to Σ z if it solves the equation for some i = j. By the choice of g 1 and g 2 at least one of the differences above is nonzero for each pair of indices i, j and hence each equation has either one solution or no solutions. The set Σ = ∪ z∈U {z} × Σ z ⊂ U × C is a closed one-dimensional complex analytic subset of U × C. 
The function g determined by α according to (2.1) separates the points on all fibers of f if and only if the graph of α avoids Σ, that is, if α(z) / ∈ Σ z for all z ∈ U . Choose a simple smooth arc C ⊂ U containing the set Z of critical values of f . By dimension reasons there is a smooth function α 0 : C → C whose graph over C avoids Σ. We can approximate α 0 uniformly on C by holomorphic polynomials α. If the approximation is sufficiently close, the graph of α will avoid Σ over an open set V ⊂⊂ U containing C. If g is the corresponding function (2.1) then F = (f, g) is a proper holomorphic embedding of Choose a simply connected closed domain D 0 ⊂⊂ V with real-analytic boundary and containing the arc C in its interior. There are a domain V 1 ⊂ V containing D 0 and an injective holomorphic map σ: V 1 → C which maps D 0 conformally onto U . The map F ′ = (σ • f, g): f −1 (V 1 ) → C 2 is a holomorphic embedding which maps the closed domain R 0 = f −1 (D 0 ) ⊂ R to U × C and it maps bR 0 to bU × C. Choose a number r > sup{|g(x)|: x ∈ R}. Let Ω be as in proposition 2.1 such that bΩ ∩ (2U × rU ) is a small C 3 -perturbation of the cylinder bU × rU . If the approximation is sufficiently good then bΩ intersects the image of F ′ transversely and the set To conclude the proof of theorem 1.1 it remains to show that R ′ is diffeomorphic to R. This can be seen as follows. Since D 0 is a closed simply connected domain with smooth boundary in U , there is a smooth function ρ: U → IR such that D 0 = {ρ ≤ 0}, ρ = 1 on bU , and ρ has no critical points in U \C (hence 0 < ρ(x) < 1 for x ∈ U \D 0 ). For 0 ≤ t ≤ 1 set D t = {x ∈ U : ρ(x) ≤ t}; thus D 0 is the given set and D 1 = U . Since f has no critical values in U \C, the function ρ • f : R → IR has no critical points in R\f −1 (C). By Morse theory the set R t = {x ∈ R: f (x) ∈ D t } = {x ∈ R: ρ(f (x)) ≤ t} is diffeomorphic to R = R 1 for each t ∈ [0, 1]. The same is true for R ′ which is a small C 3 -perturbation of R 0 . This completes the proof of theorem 1.1. ♠ Remarks. 1. In the above proof we could use the Fatou-Bieberbach domains constructed in [Glo] whose boundaries inside a polydisc are small C 1perturbations of the cylinder bU × U ; we shall need more smoothness in the proof of theorem 1.2. 2. To embed R holomorphically into U × C using this scheme we would have to find a function α ∈ O(U ) whose graph avoids the complex curve Σ ⊂ U × C constructed above. The fiber Σ z over most points z ∈ U consists of d 2 points, where d = deg(f ). In the special case when d = 2 we have d 2 = 1 and hence the Oka-principle [Gra,Gro2,FP1,FP2,FP3] applies to sections of (U × C)\Σ, so we obtain a desired holomorphic function α whose graph over U avoids Σ. Note that an inner function f of degree d = 2 exists if and only if R is hyperelliptic (since we obtain by reflection a degree two meromorphic function on the double R which implies that R is hyperelliptic). On the other hand, when d ≥ 3 the generic fiber of Σ contains at least three points and hence its complement is Kobayashi hyperbolic, so the Oka principle does not apply. The following result shows that there exist complex curves Σ ⊂ U × C (not necessarily arising from our construction) which cannot be avoided by holomorphic graphs. On the other hand it is always possible to avoid such a curve by graph of a smooth function; hence the Oka-Grauert principle fails for sections of (U × C)\Σ. Proposition. 
There exists a closed one-dimensional complex subvariety Σ ⊂ U × C which does not contain any line {z} × C and which has a nontrivial intersection with the graph of any holomorphic function on U . Proof. Denote the coordinates on C 2 by (z, w). Let Σ k ⊂ U × C be the union of the following complex curves, intersected with U × C: Assume that for each k ∈ IN there is a holomorphic function α k on U whose graph avoids Σ k . Then α k omits the values 0 and 1 and hence the sequence {α k } k∈IN is a normal family on U . Passing to a subsequence we may assume that α k converges, uniformly on compacts in U , to a holomorphic function α: U → C or to α = ∞. Consider the first case. Choose numbers 0 < r < 1 and k 0 ∈ IN such that k 0 r > max |z|=r α(z). (2.2) For each k ≥ k 0 the winding number of the function h k (z) = kz − α(z) on the circle |z| = r equals to that of kz which is one. This means that h k has a zero on the disc rU = {|z| < r}, i.e., the graph of α intersects the line w = kz and hence Σ k . The same argument holds for any function satisfying (2.2). Since for large k ∈ IN the function α k is close to α on rU , its graph also intersects Σ k , a contradiction. In the second case when α k → ∞ we can apply a similar argument in U ×C, where C = C ∪ {∞} is the Riemann sphere, to show that for all sufficiently large k the graph of α k intersects the hyperbola zw = 1 and hence Σ k , a contradiction. This proves proposition 2.2. ♠ Remark. The above approach to construct a function g separating points on the fibers of a given inner function f is not quite as ad-hoc as it may seem. Denote by O the sheaf of germs of holomorphic functions on R. According to Grauert [Gra] the push-forward f * O is a coherent analytic sheaf of O U -modules over the disc U = f (R). For each open set V ⊂ U we may view holomorphic functions on f −1 (V ) ⊂ R as sections of f * O over V . By Cartan's Theorem A [GR] the sheaf f * O is finitely generated over each compact K ⊂⊂ U , meaning that there exist functions g 1 , . . . , g n ∈ O(R) such that any g ∈ O(R) may be written in the form g(x) = n j=1 α j (f (x))· g j (x) (x ∈ f −1 (K)) for some holomorphic functions α 1 , . . . , α n defined in a neighborhood of K. Now g separates points on the fibers f −1 (z) for z ∈ K if and only if the graph of α = (α 1 , . . . , α n ): K → C n avoids a complex hypersurface Σ ⊂ U × C n constructed as above. Proposition 2.2 indicates that this may not be possible in general (although we do not have a specific counterexample). &3. Holomorphic perturbations of bordered Riemann surfaces. In this section we prove theorem 1.2. Let P = (2U ) × U ⊂ C 2 . For any sufficiently small perturbation S of the cylinder bU × C we denote by Ω S the connected domain in P bounded by S and containing the origin. 3.1 Proposition. Let R be a bordered Riemann surface of genus g R bounded by m R smooth curves and let F 0 = (f 0 , g 0 ): R → U × U be a map of class We emphasize that the maps F and F 0 only differ in the first component. If F 0 is an embedding, it follows that F is also an embedding provided that S is sufficiently C 3 -close to bU × U . Assuming proposition 3.1 for a moment we now prove theorem 1.2. Proof of theorem 1.2. Let F 0 = (f 0 , g 0 ): R → U × C satisfy the hypothesis of theorem 1.2. We may assume that R is a domain with smooth real-analytic boundary in a larger Riemann surface R (section 1). Since f 0 maps bR to the circle bU , it extends by reflection to a holomorphic function in a neighborhood of R. 
Furthermore, we can approximate g_0 in the C^1(R)-sense by a function (still denoted g_0) which is holomorphic in a neighborhood of R. If the approximation is sufficiently close, the new map F_0 is an embedding in a neighborhood of R. We may assume that ‖g_0‖_R < 1. Choose a hypersurface S close to S_0 = bU × U and the associated domain Ω = Ω_S ⊂ P as in proposition 2.1, with the corresponding injective holomorphic map φ: Ω → C^2 which maps sequences in Ω_S converging to S to sequences going to infinity. Let F = (f, g_0) be a map furnished by proposition 3.1 which is C^1-close to F_0. If the approximation is sufficiently close, then F is a holomorphic embedding which maps the interior of R into Ω and bR to bΩ. The map G = φ ∘ F is then a proper holomorphic embedding of the interior of R into C^2. This proves theorem 1.2, granted that proposition 3.1 is correct. ♠ In the proof of proposition 3.1 we shall use some results about the linear Riemann–Hilbert problem on bordered Riemann surfaces. Fix a number 0 < α < 1. Denote by A^{1,α}(bR) the Banach algebra of C^{1,α} functions on bR which extend holomorphically to R. (The C^{1,α}-norm on bR can be defined by choosing a smooth parametrization of each curve C ⊂ bR by the circle S^1 and pulling back functions on C to functions on S^1.) Given functions a: bR → C\{0} and c: bR → ℝ of class C^{1,α}, the corresponding Riemann–Hilbert problem is to find k ∈ A^{1,α}(bR) such that Re(a(x)·k(x)) = c(x), x ∈ bR. (3.1) The existence of solutions depends on the index κ(a), defined as the sum of the winding numbers of a over all m_R boundary components of R (the corresponding Maslov index is 2κ(a)). Here we equip R with the usual orientation induced by the complex structure and orient the boundary bR coherently. Note that when a is an inner function on R we have κ(a) = deg(a). The following is part of the theorem from [Kop, p. 30]; it corresponds to the case ν = 0 (since we are dealing with functions) and the trivial divisor δ with degree n_δ = 0. Notice also that, in [Kop], the surface has m + 1 holes. Theorem. (Koppelman [Kop]) Let R be a bordered Riemann surface of genus g_R with m_R boundary components and let a be a complex-valued Hölder continuous function on bR without zeros. If κ(a) ≥ 2g_R + m_R − 1 then the Riemann–Hilbert boundary value problem (3.1) is solvable for all Hölder continuous functions c on bR, and the corresponding homogeneous problem (c = 0) has 2κ(a) − (2g_R + m_R − 2) linearly independent solutions. Remarks. 1. Even though the theorem in [Kop] is stated in the C^α case (the functions a and c are assumed to be of class C^α and the solutions belong to A^α(bR)), the proof carries over to the case stated here. 2. This is essentially a result concerning the solutions of the operator L = ∂̄ acting on sections of the trivial line bundle E = R × C → R, with the Riemann–Hilbert boundary conditions described above. The operator L is elliptic and hence Fredholm, with (real) index Ind(L) = 2κ(a) − (2g_R + m_R − 2). Koppelman's theorem asserts that, when κ(a) ≥ 2g_R + m_R − 1, L is surjective and dim_ℝ(ker L) = Ind(L). For an extension to more general ∂̄-type operators we refer to [Gro1] and [HLS]. 3. There is a connection (by doubling of R) between Koppelman's theorem and the Riemann–Roch theorem [HLS].
The result may be viewed as a special case of the Atiyah-Singer index theorem [AS] which expresses the index of any elliptic linear differential operator, acting on sections of a complex vector bundle E → X over a compact manifold X, in terms of the Chern class of the bundle and the cohomology class in H * (X, C) determined by the principal symbol of L. ♠ Proof of proposition 3.1. Let F 0 = (f 0 , g 0 ) be as in the proposition. Assume that ||g 0 || R < 1 so that F 0 (R) ⊂ P = (2U ) × U . Denote the coordinates on C 2 by (z, w). Set ρ 0 (z, w) = |z| 2 − 1. Then {ρ 0 = 0} ∩ P = bU × U and ρ 0 (f 0 , g 0 ) = 0. For any function ρ ∈ C 3 (P ) sufficiently close to ρ 0 the set S ρ = {ρ = 0} ∩ P is a C 3 -hypersurface close to bU × U and vice versa, any small C 3 -perturbation of bU × C within P equals S ρ for some ρ ∈ C 3 (P ) close to ρ 0 . To solve the problem it suffices to find for each ρ ∈ C 3 (P ) close to ρ 0 a function f = f ρ ∈ A 1,α (bR) close to f 0 such that ρ(f (x), g 0 (x)) = 0 for all x ∈ bR. Such f extends from bR to a function f ∈ A 1,α (R). The corresponding map F = (f, g 0 ): R → P takes bR to S ρ = {ρ = 0} and it maps the interior R to the domain Ω ρ ⊂ P bounded by S ρ and by |w| = 1 (for the last statement we need f to be C 1 -close to f 0 ; see [ČG] for the details of this argument). &4. Families of inner functions on bordered Riemann surfaces. In this section we prove theorem 1.4. The essential ingredient is the following result which is possibly of independent interest. We use the notation established in section 1. If J is a complex structure of class C k−1,α on R, we denote by A k,α (R, J) the space of all J-holomorphic functions of order C k,α (that is, their derivatives of order k are Hölder continuous of order α). 4.1 Proposition. Let R be a smooth bordered surface of genus g R and with m ≥ 1 boundary components. Fix α ∈ (0, 1). Let J 0 ∈ J α R be a complex structure on R and let f 0 ∈ A 1,α (R, J 0 ) be an inner function on (R, J 0 ) with multiplicity ≥ 2g R + m − 1. Then for each J ∈ J α R sufficiently close to J 0 there is an inner function f J ∈ A 1,α (R, J) near f 0 , with f J depending continuously on J and f J 0 = f 0 . Remark. As already mentioned, Ahlfors [Ahl] constructed inner functions of multiplicity 2g R + m on any bordered Riemann surface. Proposition 4.1 shows that such functions are stable under small perturbations of the complex structure. On the other hand this need not be true for the Ahlfors function f p which maximizes the derivative at a given point p ∈ R since the degree of f p may depend on p. ♠ Assuming proposition 4.1 for a moment we can prove theorem 1.4 as follows. Fix a number 0 < α < 1 and let F 0 = (f 0 , g 0 ): R → U × C be an embedding of class C 1,α which is J 0 -holomorphic on R for some complex structure J 0 ∈ J α R . Thus f 0 is an inner function on (R, J 0 ) of multiplicity ≥ 2g R +m−1. For each J ∈ J α R sufficiently near J 0 proposition 4.1 provides an inner function f J on (R, J) which is C 1 -close to f 0 . We can also approximate g 0 in the C 1 -sense by J-holomorphic functions g J (this is trivial since there is no boundary condition on g J ). If the approximations are sufficiently close (which is the case when J is close enough to J 0 ), the J-holomorphic map F J = (f J , g J ): R → U × C is C 1 -close to F 0 and hence is an embedding. Hence the Riemann surface (R, J) is of class F for all J sufficiently close to J 0 . This proves theorem 1.4 provided that proposition 4.1 is correct. ♠ Proof of proposition 4.1. 
For each complex structure J ∈ J α R we denote by ∂ J the corresponding ∂-operator which maps C 1,α -functions on R to (0, 1)-forms of class C α according to the formula 2 ∂ J (f ) = df + idf • J. Denote the space of such forms by Ω α 0,1 (R, J). Consider the Banach manifolds W = {(f, J): f ∈ C 1,α (R), |f | = 1 on bR, J ∈ J α R }, W 0,1 = {(ω, J): ω ∈ Ω α 0,1 (R, J), J ∈ J α R }. consists of J-holomorphic inner functions on R for all complex structures J on R. Denote by π: W → J α R the projection onto the second factor. We claim that Φ is a C 1 -map of Banach manifolds which is a submersion with finite corank at each point (f, J) ∈ W h for which κ(f ) ≥ 2g R + m − 1. Once this is proved, the implicit function theorem [Ca] shows that W h is a Banach submanifold of W in a neighborhood of each such point (f, J) and the projection π: W h → J α R is locally near (f, J) a trivial Banach fibration with finite dimensional fibers. The proposition then follows immediately since it amounts to choosing a local section of this fibration passing through (f, J). To find the derivative DΦ(f, J)(g, K) in the direction of a tangent vector (g, K) to W at (f, J) we choose a local C 1 path (f t , J t ) ∈ W for |t| < ǫ, with f 0 = f , J 0 = J, d dt | t=0 f t = g and d dt | t=0 J t = K. Differentiating the equations |f t | 2 = 1 resp. J 2 t = −Id with respect to t at t = 0 we see that Re(gf) = 0 on bR and JK + KJ = 0. The derivative of Φ equals DΦ(f, J)(g, K) = d dt | t=0 Φ(f t , J t ) = dg + idgJ + idf K, K = 2∂ J g + idf K, K). From this formula we see immediately that DΦ(f, J) is continuous in (f, J) and hence Φ is of class C 1 . Moreover we see that DΦ(f, J) is surjective if and only if any ω ∈ Ω α 0,1 (R, J) equals ω = 2∂ J g + idf K for some g ∈ C 1,α (R) with Re(gf) = 0 on bR. Since (idf K)J = −idf JK = df K = −i(idf K) (here we used KJ = −JK and idf J = −df ), the form idf K is of type (0, 1) with respect to J. Hence it suffices to see that g → ∂ J g is surjective as a map {g ∈ C 1,α (R): Re(gf) = 0 on bR} → ∂ J g ∈ Ω α 0,1 (R, J). (4.1) Surjectivity of this map at points (f, J) ∈ W with κ(f ) ≥ 2g R + m − 1 is guaranteed by the theorem in [Kop,p. 33] together with Corollary II in [Kop, p.
Harnessing Wearable Devices for Emotional Intelligence: Therapeutic Applications in Digital Health

Emotional intelligence strives to bridge the gap between human and machine interactions. The application of such systems varies and is becoming more prominent as healthcare services seek to provide more efficient care by utilizing smart digital health apps. One application in digital health is the incorporation of emotion recognition systems as a tool for therapeutic interventions. To this end, a system is designed to collect and analyze physiological signal data, such as electrodermal activity (EDA) and electrocardiogram (ECG), from smart wearable devices. The data are collected from different subjects of varying ages taking part in a study on emotion induction methods. The obtained signals are processed to identify stimulus trigger instances and classify the different reaction stages, as well as arousal strength, using signal processing and machine learning techniques. The reaction stages are identified using a support vector machine algorithm, while the arousal strength is classified using the ResNet50 network architecture. The findings indicate that the EDA signal effectively identifies the emotional trigger, registering a root mean squared error (RMSE) of 0.9871. The features collected from the ECG signal show efficient emotion detection with 94.19% accuracy. However, arousal strength classification is only able to reach 60.37% accuracy on the given dataset. The proposed system effectively detects emotional reactions and can categorize their arousal strength in response to specific stimuli. Such a system could be integrated into therapeutic settings to monitor patients' emotional responses during therapy sessions. This real-time feedback can guide therapists in adjusting their strategies or interventions.

Introduction

The use of artificial intelligence (AI) in daily activities has become mainstream in recent years. Advances in technology have paved the way for computationally powerful machine learning models to cement the foundations for the future of the industrial and healthcare domains. The adoption of AI in the health sector holds a lot of potential, from patient diagnostics to health monitoring and, in some cases, treatment itself [1]. Emotional intelligence strives to bridge the gap between human and machine interactions. The application of such systems varies and is becoming more prominent as healthcare services work to provide more efficient care through the utilization of smart digital health apps. One application in digital health is the incorporation of emotion recognition systems as a tool for therapeutic interventions. Emotion classification is currently being developed as a component in a closed-loop system [2] designed to aid in the therapeutic intervention of people with autism spectrum disorder (ASD). ASD is a neuro-developmental condition that affects a person's social skills by impairing their interaction, communication, behaviors, and interests [1,3,4]. The condition often results in more health problems due to isolation and unemployment (or reduced employment), which can lead to depression and anxiety [4]. Estimates reveal that 1 out of 59 people is affected by ASD, comprising roughly 1-2% of the general population [4,5]. Emotions can be identified by three main components: 1-facial expressions; 2-speech and voice patterns; and 3-physiological signals. Emotion recognition perception is distributed as 55% facial, 35% speech, and 10% physiological signals [6].
Although facial expressions and speech patterns hold the majority for emotion determination, limited access to these data in real time in daily life makes them less convenient than physiological signals. Physiological signals can be accessed through electronic wearable devices (EWD), such as smart watches, which are increasingly prevalent and are directly associated with health management [7]. Equally, screen time, including smart phone, TV, and computer usage, stands at 28.5 ± 11.6 h a week [8]. Even if a small portion of screen time were allocated to using a health app, the data collected would still be less than the amount of data available from EWDs. Physiological signals often used to measure emotional and cognitive reactions include electrodermal activity (EDA) and electrocardiogram (ECG) [9][10][11]. Hence, physiological signals were selected for emotion detection in this study.

For electrodermal activity, the parameters of the frequency of non-specific skin conductance responses (NS.SCR) and the skin conductance level (SCL) are frequently used. This is one of the most common measures used in psychophysiology and includes a wide range of applications, such as emotional reactions, attention examination, and the processing of information. EDA is measured by applying a small current through a pair of electrodes that are placed on the surface of the skin [12]. Two mechanisms contribute to the EDA measurement: 1-sweat secretion and 2-selective membrane activity in the epidermis. The more sweat produced, the more conductive the path becomes; as a result, the resistance decreases and a change is therefore observed in the EDA.

ECG is one of the most widely used non-invasive clinical diagnostic tools, providing a clear observation of the heart's electrical behavior [13]. ECG records the electrical activity transmitted through the body by means of electrodes attached to the skin; another relatively simple recording option is the use of a chest belt. This electrical activity is the result of the heart's depolarization to induce contraction at each beat [14]. The measurements are analyzed through the QRS wave complex, and subsequently the heart rate (HR) is derived from the peak-to-peak (RR) intervals of the ECG recording across a specific time frame. The use of ECG monitoring has increased in recent years, thanks in part to the advancement of wearable devices, such as smart watch technology or fitness trackers, and people's often high adherence to their use for the monitoring of daily activity and workout routines in a lifestyle focused on well-being and healthy aging.

The data used in this article were collected from a separate collaborative study conducted on emotion induction methods' influence on recognition [15]. The ground truth, defined as the subjectively perceived valence and arousal of each emotional category, was assessed using the self-assessment manikin (SAM) [15,16]. The data were gathered from EDA and ECG sensors attached to the non-dominant hand (thenar and hypothenar) and chest, respectively. In this study, the EDA (more specifically, the SCL) and ECG signals (i.e., HR and heart rate variability, HRV) were analyzed for emotional stimulus trigger marks and assessed for the different emotional reaction stages and intensity of arousal using signal processing and machine learning techniques. Features of interest, required for the machine learning algorithm, were extracted from the data by applying different signal processing methods.
To evaluate the outcome of the predictions, different evaluation criteria were used. The aim of this study was to disclose the effectiveness of physiological signals (in this case, EDA and ECG) in characterizing emotional stimuli reactions and identifying their stages and arousal strength. The paper is organized with the following structure. Section 2 describes the methods used, data description, signal processes, network architecture, and analysis criteria. Key results are highlighted in Section 3, with their respective discussions rendered in Section 4. The conducted ablation studies are mentioned in Section 5, and a conclusion is drawn in Section 6.

Related Work

The challenges of detecting and recognizing human emotions have yielded different approaches and techniques, with a recent trend towards machine learning strategies to solve the problem. A recent search for "emotion recognition facial" and "emotion recognition physiological signal" on PubMed revealed the concentration of research works towards facial recognition (4825 articles), rather than physiological signals (191 articles), for emotion recognition, with a ratio of roughly 25:1 over the last 5 years [17]. In Kakuba S. et al. (2022) [18], an attention-based multi-learning model (ABMD) utilizing residual dilated causal convolution (RDCC) blocks and dilated convolution (DC) with multi-head attention is proposed for emotion recognition from speech patterns, achieving 95.83% on the EMODB dataset, with notable robustness in distinguishing the emotion of happiness. In Yan Y. et al. (2022) [19], an AA-CBGRU network model is proposed for speech emotion recognition that combines spectrogram derivatives, convolutional neural networks with residual blocks, and BGRU with attention layers, showing improved weighted and unweighted accuracy on the IEMOCAP sentiment corpus. In Khaireddin Y. et al. (2021) [20], a popular VGG network architecture was deployed with fine hyperparameter tuning to achieve state-of-the-art results on the FER2013 [21] dataset. A shallow dual network architecture was introduced in Mehendale N. (2020) [22], with one framework removing background noise while the second generated point landmark features, achieving recognition accuracies of up to 96% on a combined dataset. Zhao X. et al. (2017) [23] proposed a novel peak-piloted GoogleNet [24] network architecture in which the peak and non-peak emotional reaction was considered from an image sequence, with tests on the OULU-CASIA [13] database achieving up to 84.59% accuracy. In Kim Y. et al. (2021) [25], a facial image threshing (FIT) machine for autonomous vehicles' facial emotion recognition (FER) is introduced, utilizing advanced features from pre-trained facial recognition and the Xception algorithm, resulting in a 16.95% increase in validation accuracy and a 5% improvement in real-time testing with the FER2013 dataset compared to conventional methods. In Canal F. et al. (2022) [26], a survey was conducted that reviewed 94 methods from 51 papers on emotion expression recognition from facial images, categorizing them into classical approaches and neural networks, finding slightly better precision for the classical methods but with lesser generalization; this work also evaluated the strengths and weaknesses of popular datasets. In Karnati M. et al.
(2023) [27], a thorough survey of deep learning-based methods for facial expression recognition (FER) is provided, which discusses their components, performance, advantages, and limitations, while also examining relevant FER databases and pondering the field's future challenges and opportunities. Although facial features provide a more distinguishable analysis of a person's emotional response, the acquisition of the data is somewhat cumbersome. The relevant and appropriate feature extraction from facial expressions in images is also disputed; in particular, it is often not robust to differences in complexion, culture, and ethnicity. Physiological signals provide more continuous real-time monitoring compared to facial expressions. Comparable studies [28][29][30][31][32][33][34][35] highlight the impact of using physiological signals for emotion detection and subsequent recognition. Shukla J. et al. (2021) [28] assessed and evaluated different techniques for EDA signals and determined the optimal number of features required to yield high accuracy and real-time emotion recognition. A fine hyperparameter-tuned convolutional neural network was developed in Al Machot F. et al. (2019) [29] for use in assisted living environments, using EDA signals to recognize emotions. The designed model improved the robustness on two established datasets, achieving accuracies of 78% and 82% on the MAHNOB [36] and DEAP [37] datasets, respectively, for subject-independent recognition. In Veeranki Y. R. et al. (2021) [30], different time-frequency signal analysis methods are implemented on the EDA signal and combined with machine learning techniques for emotion recognition, reaching area under the curve (AUC) accuracies of 71.30% on the DEAP [37] database. In Wenqian L. et al. (2023) [38], a review was conducted on emotion recognition and judgment using physiological signals like EEGs, EDA, ECGs, and EMG, discussing their technological applications and the effects achieved, and providing a comparative analysis of different signal applications along with considerations for future research. Heart rate (HR) monitoring, using smart watches, is often applied when following up on pre-existing health conditions or tracking workout routines for athletes [7]. However, other applications, such as stress level detection and emotion recognition, are also studied [31,39]. In Shu L. et al. (2020) [31], HR signals recorded by a smart wearable device were assessed for the recognition of paired emotions using machine learning models. The approach achieved accuracy of 84% for three emotional states' classification, using a gradient boosted decision tree algorithm on the collected dataset. Zhang Z. et al. (2016) [35] took a different approach to recognizing emotions, using the accelerometer data from wearable devices. The results revealed accuracy of 81.2% in classifying three emotional categories, using a support vector machine (SVM) with a radial basis function (RBF) kernel as a classifier. A combination, more commonly known as fusion, of more than one signal for emotion recognition has also been studied, with promising results. Greco A. et al. (2019) explored the fusion of both EDA signals and speech patterns to improve arousal level recognition, yielding a classifier improvement of 11.64% using an SVM classifier with recursive feature elimination [32]. Du G. et al.
(2020) investigated the combination of facial expressions and HR for emotion recognition in gaming environments, increasing the recognition accuracy by 8.30% [33]. In Fernández-Aguilar L. et al. (2019) [34], the fusion of EDA signals and HR variability (HRV) was used for emotion classification, achieving 82.37% overall accuracy for both young and elderly age groups combined, for seven emotion classes, using an SVM classifier with a quadratic kernel. Hence, both EDA and ECG signals were used in the present study for emotion identification and its subsequent arousal level determination. This study was distinct from prior research in that it did not focus on identifying the relative emotional response but rather on the ability to identify the physiological reaction and its subsequent arousal intensity. This approach offers a more detailed understanding of an individual's level of engagement with the presented stimuli.

Database Description

The data used in this research were collected as part of a study on emotion induction techniques, under controlled laboratory conditions [15]. Physiological measurements of ECG and EDA were recorded, along with videos of the facial expressions. In total, 24 subjects (10 male, 14 female), from different age groups, volunteered. The experiment consisted of having the subjects sit and watch a slideshow recording containing 7 different image stimuli, comprising the six basic emotions of anger, disgust, fear, happiness, sadness, and surprise, and a seventh neutral category. Each stimulus was applied for 30 s, designed to induce an emotional reaction, followed by a rest time of 1 min between each stimulus. After the rest period, subjects were asked to reflect for a period of 30 s on a situation in their lives where such an emotional trigger had occurred (autobiographical recall), followed by another rest period of 1 min. Subjects also assessed each stimulus using the SAM [16], and this information was used as ground truth for system development. A more detailed description of the experiment can be found in Schmid et al. [15]. Physiological signals were recorded from two sensors, on the hand and chest. For the ECG, the "EcgMove4" sensor (Movisens GmbH, Karlsruhe, Germany) with a dry electrode chest belt was used. The "EcgMove4" records ECG signals at a rate of 1024 Hz and 12-bit resolution with an input range of 560 mV [40]. To measure EDA, the "EdaMove4" sensor (Movisens GmbH, Karlsruhe, Germany) was used. The "EdaMove4" sensor was attached to the subject's non-dominant wrist with the two electrodes placed on the palm (thenar and hypothenar), as depicted in Figure 1. The EDA sensor records at a sample rate of 32 Hz with a 14-bit resolution and an input range of 2 to 100 µS [41]. The collected dataset consisted of 24 ECG and EDA signals. For system development, the signal sequences were annotated for each subject and signal, based on the used emotional categories (anger, disgust, fear, happiness, neutral, sadness, and surprise) and the participants' assessment using the SAM [16]. The following measurement times (recording sequences) were used for each emotional category: (a) during image presentation (30 s), (b) rest period after image presentation (60 s), (c) during autobiographical recall (30 s), (d) rest period after autobiographical recall (60 s), and (e) a baseline measurement recorded at the beginning of the experiment. The arousal level was retrieved from the SAM assessments using a 9-point scale (from 1-low arousal to 9-high arousal) based on pictograms.
In this study, a two-class classification model was first established to classify the state of the signal as either an emotion or resting stage. Afterwards, a three-class classification model was developed to identify the arousal strength of the detected emotion. The 9-point arousal scale was converted to a three-class arousal strength by setting the values 1 to 3 as low, 4 to 6 as mid, and 7 to 9 as high. Table 1 represents the arousal scale conversion. The baseline and emotion classes consisted of recordings of 30 s, while the rest period had a 60 s duration.

System Methodology

The workflow of the proposed system in real-time applications is depicted in Figure 2. The physiological signal analysis was separated into two paths, one for EDA and another for ECG. The EDA data obtained from the experiments had to be pre-processed to address disturbances, such as invalid measurements and signal discontinuity, during data gathering and post-processing, which included skin conductance level (SCL) calculation. Signals were then processed to determine emotional stimulus trigger time stamps. This key information was used in conjunction with the ECG signal classification model.

(Figure 2 caption: Flow chart of the system workflow for EDA and ECG signal analysis. The EDA analysis path is used to detect the changes in signal activity. The trigger period is then used for the ECG signal path analysis and classification of the emotional state and arousal strength. The red font indicates a flow process that was rejected and removed from further processing, unless illustrated otherwise.)

The ECG signals collected were then separated into signal snippets based on the information from the EDA analysis. The ECG signal was first down-sampled and then standardized for a consistent stimulus activity period between the subjects. This processing was performed to address data synchronization issues. Outliers were then removed and heart rate variability (HRV) calculated using two different, time- and frequency-based, methods [42]. The HRV was then used as input to classification model 1, designed to find a pattern within the data and classify the two states of the subject, emotion and rest. Next, the emotion signal was passed through a continuous wavelet transform (CWT) to convert the signal into an image, and then passed through classification model 2, where the emotion signal arousal strength was classified.

EDA Signal Processing

Given the placement positions of the electrodes and sensor for EDA data collection, inconsistencies and noise were unavoidable. To counter these disturbances, the SCL output derived from the EDA signal underwent a pre-processing stage. During the pre-processing stage, the SCL signal was scanned for missing data, such as not-a-number (NaN) errors, for each subject. If a discontinuity was detected, piecewise cubic spline interpolation was used to fill the gap. After this, a threshold was set to change any non-physiological value below zero to zero, to counteract false measurements. Figure 3 shows an example before and after pre-processing. To detect emotional stimulus trigger marks from the SCL data, a second-order derivative was computed to determine the deflection points in the signal. The output was then used to extract the peaks, which represent the instances where a change in the EDA is observed. The time frame between two consecutive trigger marks was later used as the basis for the ECG signal snippet.
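A minimal sketch of this EDA pre-processing and trigger-mark detection, assuming a NumPy/SciPy environment. The gap filling, zero thresholding, and second-derivative peak picking follow the description above, while the concrete peak-picking parameter (the minimum peak spacing) is an illustrative assumption, not a value from the paper:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

FS_EDA = 32  # EDA sample rate in Hz, from the sensor description above

def preprocess_scl(t, scl):
    """Fill NaN gaps by piecewise cubic interpolation and clip negatives."""
    valid = ~np.isnan(scl)
    filled = CubicSpline(t[valid], scl[valid])(t)
    # non-physiological values below zero are set to zero
    return np.clip(filled, 0.0, None)

def detect_trigger_marks(t, scl, min_gap_s=10.0):
    """Locate stimulus trigger marks as peaks of the second derivative of SCL."""
    d2 = np.gradient(np.gradient(scl, t), t)  # second-order derivative
    peaks, _ = find_peaks(d2, distance=int(min_gap_s * FS_EDA))  # assumed spacing
    return t[peaks]
```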
ECG Signal Processing

The ECG signal was first down-sampled from 1024 to 256 Hz, and then subdivided into 29 shorter signals representing the stimulus reactions from the experiment: the 14 emotions (7 from visual stimulus and 7 from autobiographical recall), the 14 corresponding rest stages, and a baseline measurement at the beginning of the experiment. Next, outliers detected in the signals were removed by applying a 1 s sliding window with a stride of one second to extract the minimum (min) and maximum (max) values across each stimulus response. For each subject, the mean of the min and max values was calculated in the respective window frame and a threshold value set, so that any window whose min value fell below 2.5 times the mean min, or whose max value exceeded 2.5 times the mean max, was tagged for removal. The tagged signal was then replaced with either its predecessor or successor of the same length, depending on the position of the highlighted signal. The algorithm used for outlier removal is described in Appendix A. An example of the outlier removal algorithm applied to the baseline measurement is shown in Figure 4. After removing the outliers from the raw ECG signal, the RR intervals were calculated between the peaks of the QRS complex wave. When analyzing the output of the RR intervals, different outliers were observed. Therefore, a separate outlier removal algorithm was implemented on the RR intervals, using a generalized extreme Studentized deviate test [43] and a modified Akima cubic Hermite interpolation [44,45] to fill gaps caused by the discarded information. Outliers were removed to enhance the accuracy and robustness of the analysis. Outliers can distort underlying trends in the data, leading to potentially misleading results. By excluding these anomalies, the analysis benefits from a more consistent and representative dataset, thereby ensuring the validity of the conclusions drawn.

Feature Extraction

To achieve robust prediction, meaningful features need to be extracted. Since the ECG information was used to classify the different stages of the response, the heart rate variability (HRV) was selected as a relevant feature. The HRV can be calculated using time- or frequency-based techniques. In total, eight features were selected as input to the classifier, 4 time-based and 4 frequency-based. The time-based HRV features extracted comprised 1-the root mean square of successive differences between heartbeats (RMSSD), 2-the standard deviation of the RR intervals measured in ms (SDNN), 3-the mean of the RR intervals (RR_Avg), and 4-the heart rate (HR). The frequency-based HRV measures comprised 1-the high-frequency power (HF), 2-the low-frequency power (LF), 3-the very low-frequency power (VLF), and 4-the ratio of high-frequency to low-frequency power (HF2LF). These features were selected since HRV captures the variability between successive heartbeats and offers insights into the autonomic nervous system (ANS), which is integral to emotional processing. Time-based HRV features measure overall heart rate variability and its rapid changes, with alterations indicating different emotional responses. In the frequency-based HRV, the balance between low-frequency and high-frequency components can reflect shifts in emotional states, with specific patterns potentially distinguishing emotions like joy from sadness or anger. Overall, HRV serves as a valuable tool in deciphering the body's autonomic responses to emotions, aiding in understanding emotional regulation and processing.
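The RR-interval extraction and the HRV features named above can be sketched as follows (the exact formulas are given in the next subsections). This assumes a NumPy/SciPy environment; the R-peak detection parameter is an illustrative assumption rather than a value from the paper, and note that scipy.signal.lombscargle expects angular frequencies:

```python
import numpy as np
from scipy.signal import find_peaks, lombscargle

FS_ECG = 256  # Hz, after down-sampling

def rr_intervals_ms(ecg):
    """RR intervals (ms) from R-peak positions of the QRS complex."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS_ECG))  # assumed refractory gap
    return np.diff(peaks) * 1000.0 / FS_ECG

def time_domain_hrv(rr):
    """RMSSD, SDNN, mean RR, and HR over one 30 s or 60 s window."""
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    sdnn = np.std(rr, ddof=1)
    rr_avg = np.mean(rr)
    hr = 60000.0 / rr_avg  # equivalent to counting beats in a 60 s window
    return rmssd, sdnn, rr_avg, hr

def band_energy(t_beats, rr, f_lo, f_hi, n_freq=256):
    """Sum-square energy of the Lomb-Scargle periodogram in [f_lo, f_hi] Hz."""
    freqs = np.linspace(f_lo, f_hi, n_freq)
    pgram = lombscargle(t_beats, rr - np.mean(rr), 2 * np.pi * freqs)
    return np.sum(pgram ** 2)
```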
Time-Based HRV

The RMSSD is calculated from the successive differences between consecutive RR intervals, measured in milliseconds (ms), over a set period of time. In this study, 30 and 60 s time windows were chosen for the RMSSD for emotion and rest, respectively, as these perform as well as the 5 min period [42,46]. The computation of the RMSSD, where RR represents the time interval between R peaks and N is the total number of RR intervals, is defined as
$$\mathrm{RMSSD} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N-1}\left(RR_{i+1} - RR_i\right)^2}.$$
The SDNN is the standard deviation of the RR time intervals over the length of the signal and is defined as
$$\mathrm{SDNN} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(RR_i - \mu\right)^2},$$
where $\mu$ represents the mean of the RR intervals in ms. The RR_Avg feature is calculated as the mean of the RR intervals, and HR is calculated as the number of RR intervals in a 60 s time window.

Frequency-Based HRV

The frequency domain can be used to separate HRV into power in different frequency ranges [42]. In this study, the Lomb-Scargle power spectral density [47] was used to estimate the periodogram and frequencies of the given signal. Afterwards, the output was separated into the three frequency ranges of HF, LF, and VLF. The HF2LF is calculated as the ratio of HF to LF. The standard frequency limits [42] were used for the calculation (HF: 0.15-0.4 Hz, LF: 0.04-0.15 Hz, VLF: 0.0033-0.04 Hz). The sum square energy was calculated for each of the HF, LF, and VLF bands as
$$E = \sum_{f=n}^{m} P(f)^2,$$
where P represents the periodogram data, f the frequency, n the lower limit, and m the upper limit of the corresponding frequency range.

Continuous Wavelet Transform (CWT)

The CWT was used to extract features for the classification of the emotions' arousal strength. A sampling frequency of 256 Hz was used with a scale range of 1 to 512, a time bandwidth of 0.234, and a Morlet wavelet [48]. Figure 5 shows the output (Figure 5b) from the CWT with a given ECG signal snippet input (Figure 5a).

Emotion Detector

To distinguish a signal's emotion state, divided into either emotion or rest, from the gathered features, a machine learning algorithm was adopted. Different models were tested, the results of which are presented in the ablation study in Section 5.1, and the best-performing one was selected. The support vector machine (SVM) classification model was thus used to classify this two-class system. The SVM classifier has many strengths suitable for this task: SVMs are versatile, robust to overfitting, and effective in high-dimensional spaces [49,50]. The hyperparameters of the SVM were optimized using a Bayesian optimization function for 100 iterations with a 5-fold cross-validation scheme. The optimized and selected hyperparameters are described in Table 2. The model classified the signal as either emotion or rest based on the predicted probability. The input features were normalized to the range of 0 and 1 across each observation.

Arousal Strength Classifier

After identifying a signal as an emotion, it was passed through a CWT to convert the signal into an image before entering classification model 2, to determine the arousal strength of the given emotional response. To classify the image into one of the three arousal strength classes, deep learning convolutional neural network (CNN) models were utilized. Different CNN architectures were tested, the results of which are given in the ablation study in Section 5.2. The best-performing model was selected for the classification. The ResNet-50 [51] architecture with initial pre-trained weights, trained on the ImageNet dataset, was used for model training.
The last fully connected layer of the architecture was replaced such that the output size was set to 3, which represents the number of classes for classification. Weighted cross-entropy was used for the loss function:
$$L = -\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{K} w_i\, T_{ni} \ln Y_{ni}, \qquad w_i = \frac{N}{K\, m_i},$$
where N is the total number of observations, K is the total number of classes, $w_i$ is the weight of class i, $m_i$ is the number of observations for class i, and $T_{ni}$ and $Y_{ni}$ are the ground-truth indicator and the predicted value for observation n and class i. Table 3 shows the different training options used for model training.

Evaluation Criteria

To evaluate the performance of the different systems, different metrics were selected. To assess the trigger mark detection from the SCL signal, the root mean squared error (RMSE) was used:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(x_j - \hat{x}_j\right)^2},$$
where N represents the total number of trigger marks, x the annotated trigger, and $\hat{x}$ the predicted trigger at a certain time. The emotion detector and arousal strength classifier models were evaluated using a 5-fold Monte Carlo cross-validation scheme. Performance was based on the mean of the accuracy and F1-score over the 5 folds. The Fβ-score is calculated as
$$F_\beta = \frac{(1+\beta^2)\,\mathrm{TP}}{(1+\beta^2)\,\mathrm{TP} + \beta^2\,\mathrm{FN} + \mathrm{FP}},$$
where β is a coefficient used to weight the precision; in this work, β is set to 1 to have a weighted balance between precision and recall. Here, TP stands for the true positive, FP for the false positive, and FN for the false negative predictions. For the second classification model (arousal strength identification), the TP accuracy was used to assess the model performance.

Table 4 represents the original and selected datasets' class distribution. The different emotional classes of anger, disgust, fear, happiness, neutral, sadness, and surprise were combined to form one class under the representation of emotion. Therefore, the two-class system consisted of 266 observations for emotion and 266 observations for rest from the selected dataset.

Table 4. Class distribution of the original and selected datasets, per induction method (image stimulus / autobiographical recall).

Class        Original     Selected
Anger        24 / 24      19 / 19
Disgust      24 / 24      19 / 19
Fear         24 / 24      19 / 19
Happiness    24 / 24      19 / 19
Neutral      24 / 24      19 / 19
Sadness      24 / 24      19 / 19
Surprise     24 / 24      19 / 19
Rest         168 / 168    133 / 133
Total        336 / 336    266 / 266

Table 5 displays the distribution of the arousal levels from the SAM assessments. As described in Section 2.1, a three-class system was established from the nine-point SAM, and the distribution of the dataset was 84 for low, 121 for mid, and 61 for high arousal strength. The arousal strength labels were then randomly split into a training and testing set with a ratio of 90% training, with 240 observations, and 10% testing, with 26 observations, such that at least one observation from each nine-point SAM class was present in the testing set.

SCL Trigger Point Detection

The first phase of the system workflow demonstrated the efficient detection of the trigger marks from the SCL signal, as observed in Figure 6. The strategy and steps adopted were able to achieve an RMSE value of 0.9871 for all the trigger mark time stamps, for each stage of emotion and rest, at both emotion induction methods, for all subjects.

Emotion and Rest Detection

In Figure 7a, the average TP accuracy across both classes, as well as the average precision, recall, and F1-score accumulated over the five folds, are displayed. Figure 7b also shows the aggregated confusion matrix over all five folds for both the emotion and rest classes. The model achieved mean TP accuracy of 94.19% ± 2.50, with mean precision of 94.16% ± 2.87, a recall mean of 94.21% ± 3.00, and a mean of 94.16% ± 2.55 for the F1-score over all five folds and classes.
The confusion chart revealed that the model had a misclassification rate of 5.36% and 6.25% for the emotion and rest classes, respectively.

Arousal Detection

The results from the classification of the emotions' arousal strength are represented in Figure 8. The mean of the precision, recall, and F1-score over all five folds for each class is displayed in Figure 8a, along with the mean TP accuracy, whereas in Figure 8b the summed confusion matrix over the five folds is depicted. The proposed model showed some fluctuations in performance, reaching mean TP accuracy of 51.14% ± 5.58 over the five folds. The mid arousal strength class showed the best performance among the classes, achieving an F1-score of 60.31% ± 9.48, while the high arousal strength class performed the poorest, with an F1-score of 33.41% ± 18.77. The best-performing model out of the five trained models achieved mean TP accuracy of 60.37% over all the classes. The confusion chart shows that the majority of the misclassifications of the low and high arousal strengths were linked to the mid arousal strength class, with a rate of 50.81% and 50% for the high and low classes, respectively.

Discussion

As observed in Table 4, the selected dataset was smaller than the original, with a reduction of 20.83%. This reduction resulted from a first-stage signal analysis on the original ECG signal, where data from five subjects revealed inconsistencies in the recording. As a consequence, these samples were removed from further processing. The distribution in Table 4 also demonstrates that there was no bias towards a particular class in the two-class system; thus, there was equal representation during the training process. However, in Table 5, a bias in the data towards the class of mid arousal strength is revealed, having a rate of 45.49% of the total distribution, with 31.58% for low and 22.93% for high. This data imbalance was countered with a class-weighted loss function, as described in Section 2.5.2. This ensured the fair representation of each of the arousal strength classes during model training. The efficacy of the proposed model in distinguishing between the two classes of emotion and rest is highlighted in Figure 7. The results indicate that the selected features, and HRV specifically, carry suitable embedded information for the task of distinguishing between an emotional and a calm or resting state. The robustness of the model at this stage makes further processes throughout the workflow pipeline more efficient; thus, overall errors will be more sensitive to the model's capability in identifying the strength of a detected emotion's arousal. The results in Figure 8 reveal the difficulty in identifying the different arousal strengths from the given dataset. One contributing factor to the heightened performance of the mid arousal strength could be the inherent human uncertainty or variability surrounding the projection of mid-range arousals. Contrary to real-life scenarios, where extreme emotions tend to offer clearer cues, the model appears particularly adept at navigating the nuances of these intermediate arousal strengths, possibly because of the complexities and ambiguities that humans exhibit when expressing them. In addition, the use of deep learning models poses a high-dimensional problem and requires significantly large datasets. Another contributing factor to this low performance was linked to the data imbalance, as well as the limited number of total observations.
The data augmentation technique of signal oversampling was not adopted, as it would have led to the model overfitting on the data. The low representation of the high arousal strength class also indicates that the subjects were not strongly impacted by the experiment's stimuli; thus, no significant change in their ECG signal was present. Indeed, when examining the recorded videos, which were synchronized with the physiological signal measurements, minimal to no change in the person's facial expressions was observed. It is thus worth noting the need for potentially more extensive tests to ensure that this state is better represented in the data, if possible. Further, the dataset used in this study was composed of real human reactions to stimuli perceived to trigger the corresponding emotional response. As a result, the complexity of classification increased, since each person behaved differently towards the same stimuli. Equally, the physiological signals also differed from one person to the other depending on a wide range of factors, which in turn influenced the acquired features.

In the broader context of emotion recognition, this research underscores the potential of physiological signals, specifically electrodermal activity (EDA) and electrocardiogram (ECG) data, in accurately detecting emotions and assessing arousal strength. The notable emotion detection accuracy of 94.19%, achieved by emphasizing key descriptors from heart rate variability (HRV), signifies a substantial advancement in the utilization of these physiological markers. The proposed pipeline, with its real-time application capability, highlights the emerging role of wearable devices in advancing the realm of digital health therapeutics. Additionally, by incorporating a system that can be integrated into therapeutic settings, the research paves the way for more personalized and adaptive therapeutic interventions. The methodology, especially when compared to previous works, showcases the efficacy of combining multiple physiological markers. Thus, this study adds a pivotal dimension to the ongoing discourse in emotion recognition by emphasizing real-time, wearable-device-driven insights, bridging the gap between laboratory findings and real-world therapeutic applications.

As with any research, certain limitations of the study should be noted. The signal window length for HRV feature extraction was not optimized, which could have influenced the accuracy of the derived HRV features. Additionally, the absence of hyperparameter tuning for the continuous wavelet transform (CWT) means that the decomposition of the signal into its constituent frequencies might not have been optimal, potentially impacting the precision of the feature extraction. Furthermore, without a detailed explicability analysis, the underlying rationale behind the model's decisions remains challenging to decipher, which might limit its practical application. These factors collectively may constrain the generalizability of the findings. The focus of future work will be to tackle some of these limitations by performing an ablation study on the window length. An optimization function will be implemented to tune the CWT hyperparameters.
To evaluate the explicability of the model, different techniques will be employed and an evaluation metric established for a quantitative measurement.

Traditional Classifier Algorithm Selection

To assess the performance and impact of the classification model on the given dataset for emotion and rest classification, different traditional machine learning classifiers were tested. The tested models were trained using the same features, and their hyperparameters were optimized using the same strategy described in the Methods section, with a 5-fold cross-validation scheme. Table 6 represents the mean results over the 5 folds for each of the tested models over all the classes. As highlighted, the SVM model with optimized parameters performed the best overall. This indicates that it was able to create a more robustly separable feature space than the other tested models.

Network Architecture Influence

A convolutional neural network's architecture has a strong effect on the outcome of the model training process. In this study, five further architectures, AlexNet [52], VGG16 [53], GoogleNet [24], EfficientNetb0 [54], and SqueezeNet [55], with initial pre-trained weights trained on the ImageNet dataset, were trained and analyzed for arousal strength classification using the same training options defined in Section 2. Each architecture is unique and brings a key strength to the model training process. VGG16 demonstrated that stacking small filters can be as effective as having larger receptive fields, with fewer parameters. GoogleNet allows for efficient multi-scale processing by using filters of different sizes in parallel, capturing patterns at various scales. EfficientNetb0 scales all three dimensions of depth, width, and resolution together, in a balanced manner, resulting in efficient high-performing models. ResNet50 allows the network to skip certain layers and reduces the problem of gradient vanishing. SqueezeNet is lightweight and suitable for edge devices with limited computational power, being designed to reduce the number of parameters without a significant loss in accuracy. AlexNet allows the use of grouped convolutions to reduce the computational demand and promote diverse feature extraction. Table 7 showcases the mean TP accuracy results over all 5 folds and classes for each model architecture. As can be seen, the ResNet50 architecture achieved the best performance, highlighting its ability to learn relevant descriptive features for arousal strength classification.

Conclusions

This research used physiological signals for emotion detection and arousal strength identification, and a pipeline for real-time applications is proposed. The proposed workflow emphasizes the contributions of wearable devices in advancing digital health therapeutics. Such a system could be integrated into therapeutic settings to monitor patients' emotional responses during therapy sessions. This real-time feedback might be developed into a guide for therapists in adjusting their strategies or interventions. Changes in electrodermal activity (EDA) are first identified, and this information is used to reinforce data gathered from the electrocardiogram (ECG) to determine the state of the individual, differentiating between a neutral (calm or rest) state and an emotional state. Subsequently, the arousal strength of any detected emotional state is classified.
The proposed model pipeline was able to achieve emotion detection accuracy of 94.19%, with statistical relevance, by focusing on key descriptors from the heart rate variability (HRV) features extracted from the ECG signal. Classification accuracy of 51.14% was achieved for the arousal strength identification, which was impacted by significant variability within the mid-range arousal states. Given the complexity of identifying real reactions to emotional stimuli, coupled with the limited amount of data, the proposed approach achieved compelling results, particularly in comparison to prior works and research using more measured input signals. Further analysis and enhancements to the models are planned for future work, including the acquisition of a new dataset along with real-time tests.
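As a concrete illustration of the arousal-strength classifier used in this pipeline (ResNet-50 with a replaced 3-class head and class-weighted cross-entropy, as described in the methods), here is a minimal PyTorch sketch; the inverse-frequency weight formula is the standard form assumed in the loss reconstruction above, and the class counts come from Table 5:

```python
import torch
import torch.nn as nn
from torchvision import models

def build_arousal_classifier(class_counts=(84, 121, 61)):
    """ResNet-50 pretrained on ImageNet, final layer replaced for 3 classes,
    trained with class-weighted cross-entropy to counter data imbalance."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 3)  # low / mid / high arousal
    counts = torch.tensor(class_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)  # assumed inverse-frequency form
    loss_fn = nn.CrossEntropyLoss(weight=weights)
    return model, loss_fn
```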
GPU Usage in ATLAS Reconstruction and Analysis

With Graphical Processing Units (GPUs) and other kinds of accelerators becoming ever more accessible, and High Performance Computing Centres all around the world using them ever more, ATLAS has to find the best way of making use of such accelerators in much of its computing. Tests with GPUs (mainly with CUDA) have been performed in the past in the experiment. At that time the conclusion was that it was not advantageous for the ATLAS offline and trigger software to invest time and money into GPUs. However, as the usage of accelerators has become cheaper and simpler in recent years, their re-evaluation in ATLAS's offline software is warranted. We show new results of using GPU accelerated calculations in ATLAS's offline software environment using the ATLAS offline/analysis (xAOD) Event Data Model. We compare the performance and flexibility of a couple of the available GPU programming methods, and show how different memory management setups affect our ability to offload different types of calculations to a GPU efficiently.

Introduction

With the computing landscape rapidly changing in the last few years, the ATLAS Experiment [1] had to renew its investigations into the usage of non-CPU devices for its offline computing. Evaluations of using NVidia GPUs with the CUDA programming language [2] in the ATLAS High Level Trigger were done during Long Shutdown 1 of the LHC [3]. The conclusion for ATLAS's Run-2 was not to use GPUs as part of the trigger system, both due to the difficulty of re-writing the complex reconstruction code in a way that CUDA could understand at the time, and due to the high price of General Purpose GPUs (GPGPUs) of the era. With improvements in those areas, along with significant improvements in the ATLAS offline software infrastructure, the new investigation promises much more positive results for the experiment.

The ATLAS Experiment and its Software

ATLAS [1] is one of the general purpose physics experiments at the LHC [4]. Its subdetectors provide information about the LHC's proton-proton collisions in O(100M) readout channels. The raw information read out of the experiment during data taking needs to be interpreted by complex algorithms to find what particles were most likely created in any proton-proton collision event, and with what exact properties. The experiment's software also needs to be able to fully simulate how particles arising from different physics processes would interact with the detector, in order to understand the data collected. Most of the ATLAS software is built on top of the Gaudi [5] software framework. The organisation of the code into "algorithms", which can be executed in parallel on multiple threads using TBB [6], is orchestrated by this core framework code. The offline software of ATLAS is managed in a single Git repository [7]. All of the code is organised into "packages", which can be built in different combinations into different projects that have specific goals.
• The Athena project is used for the simulation, digitisation and data/simulation reconstruction tasks;
• The AthGeneration project is used for Monte-Carlo event generation;
• The AthAnalysis project is used for the physics analysis of the reconstruction data;
• etc.
The ATLAS offline software repository holds a total of O(4M) lines of C++ and O(2M) lines of Python code. Keeping and sharing all of the code in a central place allowed us to harmonise all aspects of the code between vastly different operations.
It shall also help us with writing "portable" code for different types of hardware in the same way for our simulation, reconstruction and analysis software.

Heterogeneous Computing

In recent years the importance of non-CPU-based computing has increased significantly. The latest US-based supercomputers, for instance, do or will all provide most of their computing power using GPGPUs.
• Frontier at Oak Ridge [8] will provide AMD CPUs and GPUs;
• Aurora at Argonne [9] will provide Intel CPUs and GPUs;
• The current largest HPC, Summit at Oak Ridge [10], is providing Power9 CPUs and NVidia GPUs.
To make use of these "new types of resources", multiple different approaches exist.
1. Program the specific target hardware in a hardware-specific low-level language, taking the strengths and weaknesses of the hardware into account;
2. Write the program in a standardised, higher level language, which hides some of the details of the underlying platform, but allows for fewer optimisations;
3. Make use of high level libraries that translate higher level, more abstract tasks into calculations on different, specific hardware.
While all of these have areas where they are the best possible choice, for writing reconstruction and simulation code for large HEP experiments like ATLAS, the second option seems to be the only feasible one. In "high level programming languages" one still has a number of choices, which all have upsides and downsides to them.
• OpenCL [11] is a cross-platform standard meant to provide a uniform programming interface to a wide range of accelerators. Unfortunately its support from AMD and NVidia is practically non-existent by now.
• CUDA [2] is the most "established" of the available languages. It is supported both by NVidia's own compiler and, to some degree, by LLVM/Clang, but only for NVidia hardware in all cases.
• ROCm/HIP provides a programming interface part way between OpenCL and CUDA. It is mainly meant as a way of programming AMD GPUs, but at the moment AMD also provides a way to generate code for NVidia devices from ROCm/HIP sources.
• SYCL is a pure C++ interface in the sense that it doesn't require any extensions to the C++ language for its syntax. Like OpenCL it is an open standard, in practice only being supported by Intel at the moment.
• OpenMP / OpenACC also allow "pure" C++ code to be compiled for accelerators. But while they proved very appropriate for many applications, they do not seem to scale to the ATLAS offline software's size.
In this most recent study we investigated the usage of OpenCL, CUDA and SYCL/oneAPI code with the ATLAS offline software. The following will describe our experience with these different programming methods.

TBB Based Multi-Threading in Gaudi/Athena

The ATLAS offline software is built on the Gaudi framework, which is implemented in a collaboration between ATLAS, LHCb and the CERN SFT group. Calculations in Gaudi/Athena are performed by "algorithms", which are classes that need to provide a function that gets called for every "event" being processed. In this function the algorithms can retrieve their necessary inputs using an object whiteboard, perform their operations, and then record new objects into the whiteboard before finishing. After LHC's Run-2, Gaudi was updated to execute algorithms using the Threading Building Blocks (TBB) library. In this setup all algorithms must explicitly declare what object(s) they require to run, and what object(s) they produce as an output.
Using this information, a component of Gaudi called the "scheduler" can figure out during a job how to feed TBB tasks to the TBB runtime system such that algorithms are executed with the highest efficiency.

(A)Synchronous Accelerator Usage

The most widely used method for running accelerated calculations is the following:
1. Allocate memory on the device for the proceeding calculation, and copy all input data needed for the calculation to the device;
2. Launch the calculation on the accelerator, and wait in the CPU thread until the calculation is finished;
3. Copy the results of the calculation from the device back to the host, and (possibly) free the memory that is no longer required.
This setup can prove perfectly appropriate for calculations that are primarily happening on the accelerator, and/or when the accelerated calculation takes much less time than the same calculation would on the CPU. In the case of HEP software, especially in the current "migration period" while we are working on establishing efficient programming methods for accelerators, we need to be smarter with how we schedule accelerated calculations. We need to make sure that CPU threads spend as little time as possible setting up accelerated calculations, and especially that they never have to wait for synchronisation points with the accelerator. Most current high-level languages provide ways of achieving such a behaviour when using multiple CPU threads. In all cases the accelerator programming languages provide ways of notifying the host code about the completion of selected operations on the accelerator, allowing the host code to schedule the retrieval of the result data from the device, and the launch of calculations dependent on that data, at the next convenient step in the TBB task execution. To make use of this, the TBB based code execution has to have a concept of the asynchronous execution happening on accelerators.

Asynchronous Gaudi

To evaluate different accelerator programming techniques in the same context, a dedicated software project was set up [14]. It provided a convenient way of experimenting with changes in Gaudi's TBB based scheduling system, independent of the full offline software build of ATLAS.

Code Organisation

The project was set up to build Gaudi, the core analysis Event Data Model code of ATLAS, and the code being tested as a single CMake project. The project uses a common algorithm base class (ASync::Algorithm) to allow implementing both "synchronous" and "asynchronous" algorithms. To do this, the base class splits the per-event execution into separate steps. In order for an algorithm to implement an asynchronous interface it needs to:
1. Implement the mainExecute(...) and postExecute(...) functions, which shall take care of launching an asynchronous calculation and collecting its results, respectively;
2. Make sure that the tbb::task object received through the AlgTaskPtr_t smart pointer object would get scheduled when the offloaded calculation is finished.
It was possible to implement test algorithms on top of this common base which would either run using only the CPU "synchronously", or use an accelerator for their calculation either "synchronously" or "asynchronously". The project also provides a custom "Gaudi scheduler" (ASync::SchedulerSvc) that orchestrates the execution of algorithms implementing the ASync::Algorithm or Gaudi::Algorithm interfaces, in a way that maximises the CPU usage of the application.
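The scheduling idea here, launching the offloaded work from mainExecute and letting a completion callback re-enqueue the algorithm's postExecute as a new task, can be sketched in a few lines. The following Python sketch only illustrates the control flow of the pattern with standard-library primitives; the actual implementation is C++ on top of TBB and Gaudi, and all names here are illustrative stand-ins:

```python
import threading
from queue import Queue

task_queue = Queue()  # stands in for the TBB task arena fed by the scheduler

def main_execute(offload, on_done):
    """Launch the offloaded calculation without blocking the worker thread.
    'offload' stands for an asynchronous device launch; 'on_done' plays the
    role of the device-side completion callback (cf. cudaLaunchHostFunc)."""
    def run():
        result = offload()
        on_done(result)
    threading.Thread(target=run, daemon=True).start()

def post_execute(result):
    """Collect results; scheduled as a new task once the device work is done."""
    print("device result ready:", result)

# The worker thread never waits on the device: the completion callback merely
# enqueues the post-step, which the scheduler runs at its next convenience.
main_execute(lambda: 42, on_done=lambda r: task_queue.put(lambda: post_execute(r)))
task_queue.get()()
```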
Asynchronous Execution With SYCL/OpenCL

In order to be able to make use of the NVidia GPUs available to us, we had to use at most OpenCL version 1.2 in all of our tests. This means that all of the accelerated code has to be handled separately from the rest of our C++ source code, and we have to ship the OpenCL sources along with our compiled binaries. Following the design of the user interface provided by tbb::flow::opencl_node (which we could not use directly due to an incompatibility between tbb::flow and Gaudi's TBB based scheduling system), we experimented with providing a variadic template based programming interface for executing "vanilla" OpenCL code from our algorithms. This effort was however abandoned before its conclusion, as by that time it was clear to us that OpenCL itself would not be an appropriate programming interface for ATLAS. Instead we started looking at SYCL, more specifically Intel's implementation as part of its oneAPI platform. To perform asynchronous calculations with SYCL, we extracted variables from ATLAS EDM objects individually using cl::sycl::buffer objects, and then used basic SYCL code to run calculations on these buffers. Unfortunately, in order to receive a callback about asynchronous calculations finishing, we had to rely on OpenCL's clSetEventCallback(...), as the SYCL API does not currently provide an interface for such a feature.

Asynchronous Execution with CUDA

To use NVidia GPUs to their full potential, tests specifically using CUDA were written. To simplify the memory handling between the host and the NVidia GPUs, a special class (AthCUDA::AuxStore) was written that allows xAOD style objects [15] to offload their data to the GPU, and to get the results of a calculation back, with just a few lines of user code. In order to perform GPU calculations with CUDA asynchronously, every memory copy operation and kernel launch was assigned to CUDA queues, and the TBB based scheduling was notified of the completion of GPU calculations using CUDA's cudaLaunchHostFunc(...) function. Since all CUDA memory creation/deletion operations are managed by a global mutex, extra care had to be taken to serialise the creation and deletion of memory areas both on the device and the host.

Tests

To test the performance of running calculations through CUDA and SYCL, the same ATLAS Monte-Carlo reconstruction "toy configuration" was used as for the development of Gaudi's TBB based scheduling. This configuration describes all algorithms taking part in the ATLAS reconstruction, with the time that each of them took on a reference CPU, and all the data dependencies and products that the algorithms require and produce. Using this information, "toy" algorithms were written that execute, on the CPU or GPU, the number of floating point operations corresponding to the time recorded in the previously described configuration for the test machine's CPU. This simplicity also meant that higher order effects that one usually runs into when writing real GPU code could not be properly emulated by this toy code. The algorithms running calculations on a GPU assumed that all calculations could be vectorised to 100 parallel threads, and were set up to allow us to run a configurable number of floating point operations with respect to the "ideal" value taken from the previously described calculation. The algorithms were also set up to transfer a small amount of memory to and from the GPU before and after the calculation.
However, no fine-tuning was done to adjust the memory amounts of the individual algorithms in any realistic way; they all copied the same small amount of data. Because of this setup the toy algorithms were not set up to be "chained" on the GPU: they were all set up to receive their input data from, and return their output data to, the host. The results of a number of representative jobs are shown in Table 1. Executing offloaded calculations from TBB in an asynchronous way has provided encouraging results in our tests so far. But the results also tell us that all offloaded code will have to be written very carefully to be able to achieve the best possible performance. Effort is currently ongoing to migrate a select list of ATLAS reconstruction algorithms to GPUs (chosen primarily based on coding considerations in this first round, not directly based on this study), which shall allow ATLAS to run much more elaborate tests with accelerated calculations in its offline software framework in the not too distant future.