Vibration and noise suppression method of transformer
To address the noise and vibration of the distribution transformer body and base connections under heavy load, this paper studies the corresponding relationship between a distribution transformer's vibration and noise, its load, and its power factor. To improve the operating reliability of distribution transformers, a noise and vibration suppression method based on load-scenario optimization is proposed. From the perspectives of design reliability, data detection, and noise-level correction, the typical relationships between the noise-level increase and DC current, and between the noise spectrum and DC-bias noise, are obtained, and DC-bias limiting measures and correction suggestions are given.
magnetic bias noise, and puts forward rectification suggestions, which can serve as a reference for follow-up research work.
2. Analysis of transformer vibration and noise data
The vibration noise under DC bias mainly comes from the vibration of the iron core and windings, and the vibration signal is transmitted to the shell in different ways. Core vibration is caused by the magnetostrictive effect of the silicon steel sheets and is transmitted to the oil tank mainly through two paths: through the pad feet, and through the insulating oil. Winding vibration is caused by electromagnetic force and is transmitted to the oil tank mainly through the insulating oil; the vibration of cooling devices such as fans can also be transmitted to the transformer tank through solid paths.
When the transformer operates normally, the excitation current and the AC excitation flux are symmetrical over the positive and negative half-cycles, so the magnetostrictive displacement is also symmetrical over one flux cycle. When a DC component flows through an AC transformer, the internal excitation current and excitation flux under the over-excitation and DC-bias states change as shown in Fig. 1. Under DC bias, the superposition effect makes the excitation current asymmetrical between the positive and negative half-waves: the half-cycle aligned with the bias direction increases greatly while the other half-cycle decreases, producing half-wave saturation, as shown in Fig. 2 (compared with Fig. 1). The magnetostrictive displacement is positively correlated with the square of the magnetic flux [7], and the flux is positively correlated with the exciting current. The magnetostrictive displacement of the core therefore becomes asymmetrical within a cycle and contains not only even harmonic components but also odd harmonic components.
When the transformer operates normally, the winding vibration is essentially proportional to the square of the current. The winding vibration is therefore mainly composed of the 100 Hz fundamental frequency and higher-order harmonics at integer multiples of it. Under DC bias, the distortion of the excitation current introduces a large number of odd and even harmonics into the winding vibration.
In summary, transformer vibration becomes more complex under DC bias and a series of high-order harmonics appears. The noise level increases markedly with increasing DC magnetic bias, and it is related to the ratio of the DC bias current to the no-load current.
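The harmonic picture above can be reproduced with a small numerical sketch. This is illustrative only: it assumes a unit-amplitude sinusoidal flux, an arbitrary bias level of 0.3, and the displacement-proportional-to-flux-squared relation cited from [7]; core saturation is ignored, so only the lowest extra harmonic appears.

```python
import numpy as np

fs = 10_000                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # 1 s record -> 1 Hz FFT bin spacing
f_grid = 50                    # power frequency, Hz

def vibration_spectrum(dc_bias):
    # magnetostrictive displacement is roughly proportional to flux squared [7]
    flux = np.sin(2 * np.pi * f_grid * t) + dc_bias
    disp = flux ** 2
    return np.abs(np.fft.rfft(disp)) / len(t)

no_bias = vibration_spectrum(0.0)
biased = vibration_spectrum(0.3)

def amp(spec, f):
    # with a 1 s record the bin index equals the frequency in Hz
    return spec[int(round(f))]

# without bias: vibration only at 100 Hz (twice the power frequency);
# with bias: an extra 50 Hz line appears, the start of the odd-harmonic family
print(amp(no_bias, 50), amp(biased, 50), amp(biased, 100))
```

Adding a saturation nonlinearity (e.g. clipping the flux) to this model would populate the higher odd and even harmonics the paper describes.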
3. Analysis of multi-scenario operating characteristics of the transformer and their correspondence with vibration and noise
Taking the No. 2 and No. 3 main transformers of a new substation as an example, the abnormal noise situation is analyzed. The transformer model is SFZ10-180000/220; the rated voltage is 230 ± 8 × 1.25% / 121 / 11 kV; the rated capacity is 180/180/90 MVA; the connection group is YNyn0d11; the main-tap impedances are 14% / 48% / 33%. The neutral of the No. 2 main transformer is grounded and that of the No. 3 main transformer is not.
After being put into operation, the noise of the No. 2 main transformer was obviously higher than that of the No. 3 main transformer.
The increase in noise level is computed as ΔL = α · 20 lg(A/A₀), where ΔL is the increase in noise level, α is the correction coefficient, A is the measured sound pressure, and A₀ is the reference sound pressure. The calculated noise-level increase is 10.4 dB; therefore, the noise level of the transformer without grounding is 68.6 dB.
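Assuming the standard sound-pressure-to-level relation ΔL = 20 lg(A/A₀) (the paper's correction coefficient α is omitted here), the reported 10.4 dB increase corresponds to a pressure ratio of about 3.3, as the following sketch shows:

```python
import math

def spl_increase_db(p_measured, p_reference):
    """Increase in sound pressure level, in dB, between two RMS sound pressures."""
    return 20 * math.log10(p_measured / p_reference)

# a pressure ratio of about 3.31 reproduces the ~10.4 dB increase
# reported for the DC-biased transformer
print(round(spl_increase_db(3.31, 1.0), 1))
```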
The two transformers in the station are both high-impedance transformers with a built-in reactor structure. The transformer manufacturer has used this built-in reactor structure in dozens of high-impedance transformers across 19 substations; field operation has been good, and the structure is mature and reliable, meeting the requirements of field operation. From the correction results in Section 2, the no-load noise of the main transformer is mainly at 100 Hz and its multiples, with the highest-amplitude component at 300 Hz.
The noise spectrum of the No. 3 main transformer consists of even and odd harmonics, mainly even-order. After DC bias enters the transformer, its excitation current saturates, so the vibration harmonics also contain odd-order components.
Using MATLAB simulation analysis, the noise spectrum of the No. 3 main transformer's no-load test is shown in Fig. 4(a), and the noise spectrum of the No. 3 main transformer under DC bias is shown in Fig. 4(b). (Figure 4: MATLAB simulation analysis.) DC bias is generally suppressed by adding a suppression device between the neutral point and ground. The device may be a DC generator that injects a reverse DC current into the substation grounding grid to reduce its potential, compensating the grounding DC so that less DC flows through the neutral point; this restrains the DC-bias effect without affecting relay protection or insulation levels. A current-limiting resistor or an isolating capacitor can also be used in the suppression device. In this case, however, the DC bias is slightly greater than 1 A, for which a small resistance or a DC generator is normally used; since installing a small resistance would affect the system structure to some degree, a DC generator is used to suppress the DC.
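The reverse-injection scheme amounts to measuring the neutral-point DC and driving the DC generator to oppose it. The sketch below is a hypothetical illustration only: the function name, the gain value, and the single-step plant model (each injection simply subtracts from the residual) are all invented for clarity and are not from the paper.

```python
def compensation_current(measured_neutral_dc_a, gain=0.9):
    """Reverse DC to inject: oppose the measured neutral-point DC.

    A gain below 1 leaves a margin so the loop does not overcompensate.
    (Hypothetical controller, not the paper's device.)
    """
    return -gain * measured_neutral_dc_a

# iterate: each injection reduces the residual neutral-point DC
residual = 1.2  # A, slightly above the 1 A level noted in the text
for _ in range(5):
    residual += compensation_current(residual)
print(round(residual, 4))
```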
Core vibration has two main transmission paths. In the first, the core transmits vibration to the bottom of the oil tank through the support bolts rigidly connected to it, causing bottom-up vibration of the tank; this "solid-solid" transmission shows the unidirectional attenuation characteristic of a single solid path. In the second, vibration passes through the insulating oil to the tank surface, causing overall vibration of the tank; this "solid-liquid-solid" transmission couples at the solid-liquid interface, where the vibration of the core moves the insulating oil and the movement of the oil in turn vibrates the tank. In addition, because the two paths differ in structure, material, and distance, their vibration components undergo different amplitude attenuation, resonance, and coupling during transmission, so the vibration distribution arriving at the tank shell differs completely from the core's own vibration characteristics; it is the combined result of the natural attenuation mechanism and the coupling attenuation mechanism acting from core to tank.
In addition, the vibration spectra of the transformer core, the oil tank, and near-field noise measurement points under different voltages are analyzed, and their peak frequencies are calculated. The peak frequencies of core vibration, tank vibration, and near-field noise are essentially the same under different no-load voltages, further indicating the significant correlation among them. Specifically, the peaks concentrate at 100 Hz, 200 Hz, and 300 Hz. Because the external power-frequency magnetic field is 50 Hz, the core's hysteresis and magnetostriction are nonlinear, and the flux paths along the inner and outer core frames differ in length, the waveform of the actual core flux density is not a standard sine wave; hence, in addition to the 100 Hz fundamental, the 200 Hz and 300 Hz harmonics are also prominent. The results show the spectral characteristics of core and tank vibration and near-field noise at 100% no-load voltage.
4. Conclusion
Starting from the vibration mechanism of transformer DC bias, the noise characteristics of magnetic-bias vibration are analyzed, and the abnormal vibration and noise of the main transformer in a newly commissioned substation are described in detail. Transformer vibration and noise are determined mainly by the vibration of the core and windings, with a fundamental at twice the power frequency (100 Hz) and higher harmonics at integer multiples of it. Under DC bias, the excitation current is distorted, and the corresponding noise signal acquires substantial odd-order components. When abnormal main-transformer noise is caused by DC magnetic bias, injecting reverse DC into the neutral point is an effective measure to limit the bias.
Distinct roles of mitogen-activated protein kinase pathways in GATA-4 transcription factor-mediated regulation of B-type natriuretic peptide gene.
The expression of cardiac hormones, atrial natriuretic peptide and B-type natriuretic peptide, is induced by cardiac wall stretch and responds to various hypertrophic agonists such as endothelin-1. In cardiac myocytes, endothelin-1 induces GATA-4 binding to the B-type natriuretic peptide gene, but the signaling pathways involved in endothelin-1-induced GATA-4 activation are unknown. Mitogen-activated protein kinase pathways are stimulated in response to various extracellular stimuli, and they modulate the function of several transcription activators. Here we show that inhibition of p38 kinase with SB203580 inhibited endothelin-1-induced GATA-4 binding to B-type natriuretic peptide gene and serine phosphorylation of GATA-4. Inhibition of extracellular signal-regulated protein kinase with MEK1 inhibitor PD98059 reduced basal and p38-induced GATA-4 binding activity, but it had no significant effect on endothelin-1-induced GATA-4 binding activity. Overexpression of p38 kinase pathway, but not extracellular signal-regulated kinase or c-Jun N-terminal protein kinase, activated GATA-4 binding to B-type natriuretic peptide gene and induced rat B-type natriuretic peptide promoter activity via proximal GATA binding sites. In conclusion, these findings demonstrate that activation of p38 kinase is necessary for hypertrophic agonist-induced GATA-4 binding to B-type natriuretic peptide gene and sufficient for GATA-dependent B-type natriuretic peptide gene expression.
angiotensin II, and α1-adrenergic agonists (2). At the genetic level, activation of a program of immediate early genes, such as c-fos, c-jun, and c-myc, is the first detectable response to hypertrophic stimuli. This is followed by alterations in contractile protein composition, including reactivation of the β-myosin heavy chain, skeletal α-actin, and myosin light chain-2 genes (3,4). Hypertrophy also results in induction of noncontractile protein genes such as atrial natriuretic peptide (ANP) and B-type natriuretic peptide (BNP), which are known members of the mammalian cardiac natriuretic peptide system (5-7). ANP and BNP defend against increased hemodynamic load by decreasing blood pressure, regulating fluid homeostasis by increasing salt and water excretion, and regulating several hormones, such as angiotensin II, ET-1, and vasopressin (5,8). In the normal adult heart, ANP is mainly synthesized in the atria, whereas BNP is abundant in cardiac atria and ventricles, where its gene expression is rapidly up-regulated in response to cardiac wall stretch. Indeed, the induction of BNP gene expression is one of the earliest myocyte-specific markers of the hemodynamic stress-induced hypertrophic response (5,9-11).
Several signaling pathways, including intracellular calcium, protein kinase C, nonreceptor protein tyrosine kinases, and calcineurin, are implicated in the initiation and maintenance of myocyte hypertrophy (12-14). There is also considerable evidence that activation of the mitogen-activated protein kinase (MAPK) cascades can lead to a hypertrophic response in myocytes. MAPK pathways can be divided into three subclasses: the extracellular signal-regulated protein kinase (ERK) pathway, the c-Jun N-terminal protein kinase (JNK) pathway, and the p38 kinase pathway (15). Each MAPK pathway consists of three or more levels and multiple isoforms, giving the signaling system the potential to distinguish different extracellular stimuli. The MAPKs ERK, JNK, and p38 have been shown to be inducible by a variety of hypertrophic stimuli, including mechanical stretch, ET-1, and other GPCR (G protein-coupled receptor) agonists (15,16). Cardiac-restricted overexpression of MEK1 (an upstream kinase of the ERK pathway) in vivo has been shown to lead to concentric hypertrophy in transgenic mice (17), and most studies have found that ERK is associated with ET-1-induced cardiomyocyte hypertrophy (15,16,18,19).
The p38 MAPK family consists of six isoforms, of which p38α and p38β are the predominant isoforms present in the heart (20). Activation of p38 has also been shown to lead to cardiomyocyte hypertrophy in vitro (21,22). Activated MAPKs phosphorylate a number of substrates, including nuclear transcription factors such as myocyte enhancer factor-2 (MEF2), activating transcription factor-2 (ATF2), and ATF6, and downstream kinases such as p38-regulated/activated kinase (23-26). However, the precise roles of the different MAPKs and their downstream targets in hypertrophic signaling are not known.
The GATA family of transcription factors contains six mammalian members (reviewed in refs. 27,28). GATA proteins, which contain a DNA binding domain composed of two evolutionarily conserved zinc fingers (N- and C-terminal), bind to the consensus sequence 5′-(A/T)GATA(A/G)-3′ and its variants (29). The cardiac transcription factor GATA-4 has been shown to play a nonredundant role in cardiac muscle development during embryogenesis (30,31). In postnatal cardiac myocytes, the expression of several cardiac genes, including α-myosin heavy chain (αMHC) and cardiac troponin C (cTnC), has been reported to be directed in cardiac myocytes via GATA-4 binding elements in the promoter region (32,33). Interestingly, analysis of the ANP and BNP promoter regions has also revealed binding sites for GATA-4 (34,35). There are data demonstrating possible involvement of GATA-4 in hypertrophic signaling in cardiac myocytes (36-40). Recently, we reported that pressure overload of the rat heart activates GATA-4 and that the activation is mediated by ET-1 (41). In the present study, to identify molecular mechanisms mediating ET-1-induced BNP gene expression and activation of GATA-4, we focused on the role of MAPK signaling in cultured neonatal rat cardiac myocytes.
Cell Culture and Transfection-Cells were prepared from 2- to 4-day-old Sprague-Dawley rats (42). Cells were plated at a density of 2 × 10⁵/cm² onto Falcon wells from 15 to 60 mm in diameter. Following a 16-h incubation, myocytes were subjected to liposome-mediated transfection with FuGENE 6 for 6 h. To control for transfection efficiency, reporter plasmids were cotransfected with RSV (Rous sarcoma virus) promoter-driven β-galactosidase gene plasmids (1 and 0.5 μg, respectively). In cotransfection experiments, 0.1 μg of expression plasmid was used to avoid quenching, whereas double this amount of expression plasmid was used in other experiments. After transfection, cells were washed twice with Dulbecco's modified Eagle's medium and cultured in complete serum-free medium (CSFM). When appropriate, 100 nM ET-1 (Sigma Chemical Co.) was added to the culture medium on the third day in culture. This concentration of ET-1 has previously been shown to induce cardiomyocyte hypertrophy in cell culture (16,18,19). On the fourth day, myocytes were lysed, and luciferase and β-galactosidase activity assays were performed using a Luminoscan (Labsystems). All experiments were repeated at least three times.
COS-1 cells were maintained in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum. Cells were plated onto 100-mm plates and transfected with 1 μg of GATA-4 expression plasmid and 0.1 μg of expression plasmids for p38α, MEK1, JNK1, and pUC19 using FuGENE 6 reagent. Forty-eight hours after transfection, cells were harvested and subjected to nuclear protein extraction. We thank Dr. Jukka Hakkola (Department of Pharmacology and Toxicology, University of Oulu) for the gift of COS-1 cells and for helpful advice on the project.
Kinase Assays-After treatment with the appropriate agonists, myocytes (~5 × 10⁶) were washed with phosphate-buffered saline at room temperature and collected by scraping into 500 μl of lysis buffer, which consisted of 20 mM Tris (pH 7.5), 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, 2.5 mM sodium pyrophosphate, 1 mM β-glycerophosphate, 1 mM Na₃VO₄, 1 μg/ml leupeptin, 1 μg/ml pepstatin, 1 μg/ml aprotinin, 2 mM benzamidine, 1 mM phenylmethylsulfonyl fluoride, 2 mM DTT, and 50 mM NaF. Extracts were further lysed by sonication, and the supernatant was collected after centrifugation. Western blot assays for p38 were performed using the PhosphoPlus p38 MAPK antibody kit. Samples (20-40 μg) were loaded onto SDS-PAGE and transferred to nitrocellulose filters. The membranes were blocked in 5% nonfat milk and then incubated with the indicated primary antibody overnight at 4°C. Phospho-p38 and total p38 were detected by enhanced chemiluminescence. For a second Western blot, the membrane was stripped for 30 min at 60°C in stripping buffer (62.5 mM Tris (pH 6.8), 2% SDS, 100 mM β-mercaptoethanol). For the immunocomplex kinase assay, endogenous p38 was immunoprecipitated with specific antibody at 4°C overnight, followed by protein G-Sepharose precipitation. Immunoprecipitates were washed three times with buffer containing 50 mM Tris (pH 7.4), 150 mM NaCl, 5 mM EDTA, 25 mM β-glycerophosphate, 25 mM NaF, and 1% Triton X-100. Lysates were washed once more with kinase buffer containing 25 mM Tris (pH 7.5), 5 mM β-glycerophosphate, 2 mM DTT, 1.0 mM Na₃VO₄, and 10 mM MgCl₂. The activity of the immunocomplex was assayed at 30°C for 15 min in 30 μl of kinase buffer in the presence of 2 μCi of [γ-³²P]ATP and 20 μg of MBP as substrate. The reactions were terminated, and the reaction contents were electrophoresed on 15% SDS-polyacrylamide gels, followed by PhosphorImager analysis to determine the phosphorylation level of MBP.
The effect of p38 inhibitor SB203580 on p38 activity was measured by in vivo kinase assay.
For ERK assays, cells were collected with buffer containing 10 mM Tris (pH 7.5), 150 mM NaCl, 2 mM EGTA, 2 mM DTT, 1 mM Na₃VO₄, 10 μg/ml leupeptin, 10 μg/ml aprotinin, 2 μg/ml pepstatin, and 5 mM benzamidine. Extracts were sonicated, and the supernatant was collected after centrifugation. 15 μl of protein extract was incubated at 30°C for 15 min with 10 μl of substrate buffer containing a specific ERK-substrate peptide in the presence of 1 μCi of [γ-³²P]ATP. Each reaction was terminated and blotted onto separate peptide-binding paper discs, which were washed repeatedly with 75 mM orthophosphoric acid. Incorporated radioactivity was measured with a scintillation counter (Rackbeta II, LKB Wallac).
Nuclear Protein Extraction and Electrophoretic Mobility Shift Assay-Nuclear extracts from myocytes were prepared as described previously (43). The protein concentration of each sample was determined using the Bradford assay (44) (Bio-Rad Laboratories). A double-stranded oligonucleotide corresponding to the GATA binding region (Δ-68/-97) of the rat BNP promoter was used for analysis of GATA DNA binding activity, and a previously described oligonucleotide was used for measurement of Octamer-1 (Oct-1) DNA binding activity (45). Both probes were sticky-end-labeled with [α-³²P]dCTP by Klenow enzyme. For each reaction mixture (20 μl), 6 μg of nuclear protein and 2 μg of poly(dI-dC) were used in a buffer containing 10 mM HEPES (pH 7.9), 1 mM MgCl₂, 50 mM KCl, 1 mM DTT, 1 mM EDTA, 10% glycerol, 0.025% Nonidet P-40, 0.25 mM phenylmethylsulfonyl fluoride, and 1 μg/ml each of leupeptin, pepstatin, and aprotinin. The protein phosphatase inhibitors NaF (50 mM) and Na₃VO₄ (1 mM) were also added to the mixture. Reaction mixtures were incubated with labeled probe for 20 min, followed by nondenaturing gel electrophoresis on a 5% polyacrylamide gel. Gels were then dried, exposed on a PhosphorImager screen, and analyzed with ImageQuaNT (Molecular Dynamics). To confirm the DNA sequence specificity of protein-DNA complex formation, competition experiments with 10-, 50-, and 100-fold molar excesses of nonradiolabeled oligonucleotides with intact or mutated binding sites were performed. For competition and supershift experiments, the appropriate oligodeoxynucleotides or antibodies were added to the reaction mixture 20 min before addition of the labeled probe.
GATA-4 Phosphorylation Analysis-To determine the GATA-4 phosphorylation state, GATA-4 was immunoprecipitated using a Seize X Protein G immunoprecipitation kit. GATA-4 antibody was first bound and immobilized to Protein G according to the manufacturer's instructions. Nuclear extracts were then applied to the immobilized antibody support, unbound proteins were washed out, and finally GATA-4 protein was eluted. Samples were loaded onto SDS-PAGE and subjected to Western blotting. The indicated primary antibody was incubated at 4°C overnight. Antibody binding was detected with a peroxidase-conjugated goat anti-rabbit or bovine anti-goat IgG and enhanced chemiluminescence.

FIG. 1. Activation of p38 MAPK by ET-1. A, effect of ET-1 on activation of p38 MAPK. Cardiac myocytes were treated with ET-1 at a concentration of 100 nM for 15 min at 37°C and 5% CO₂. After ET-1 exposure, cells were washed and lysed. The cell lysate was centrifuged, and supernatants were subjected to SDS-PAGE and immunoblotted with an antibody specific for phospho-p38 (Thr180/Tyr182) to detect activated p38 kinase. To quantitate the total amount of p38 kinase protein, samples were immunoblotted with an antibody specific for p38 kinase. Bars represent two separate experiments done in duplicate and are expressed as fold change versus untreated control. *, p < 0.05 compared with untreated control. B, effect of ET-1 on p38 kinase activity. Cardiac myocytes were cultured in 50-mm culture plates and treated with 100 nM ET-1 for 5-60 min at 37°C and 5% CO₂, followed by washing and lysing of the cells. Cell lysates were centrifuged, and the protein concentration of the supernatant was measured. 100 μg of cellular protein was subjected to immunoprecipitation with an antibody specific for p38 MAPK. After addition of 2 μCi of [γ-³²P]ATP, immunoprecipitated p38 was incubated with 20 μg of MBP in reaction buffer at 30°C for 15 min. After termination of the reaction, proteins were resolved on SDS-PAGE gels, followed by autoradiography and densitometric scanning for incorporated radioactivity. C, effect of ET-1 on p38 kinase substrate (ATF2) transactivation. Cardiac myocytes were cultured on 24-well cell culture plates and cotransfected with 0.1 μg of a p38 kinase pathway-specific transactivator vector fused to the tetracycline repressor protein (pTetR-ATF2), 0.9 μg of reporter vector containing the luciferase gene under the control of a tetracycline-responsive element, and 0.5 μg of RSV promoter-driven β-galactosidase plasmid. After transfection, cells were washed and incubated overnight with complete serum-free medium. The next day, ET-1, p38 inhibitor SB203580, and ERK inhibitor PD98059 were added.
Protein Synthesis-[³H]Leucine incorporation was measured as described previously (46). Briefly, cells were cultured in 24-well plates, and on the third day in culture the medium was replaced with CSFM supplemented with [³H]leucine (5 μCi/ml). When appropriate, ET-1 (100 nM), SB203580 (20 μM), and PD98059 (20 μM) were also added. After 24 h, cells were lysed and processed for measurement of incorporated [³H]leucine by liquid scintillation counting.
Statistics-Results are expressed as means ± S.E. Student's t test was used to compare statistical significance between two groups. Differences at the 95% level were considered statistically significant.
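The two-group comparison described here can be sketched as a plain equal-variance Student's t statistic. The data values below are invented for illustration; only the formula follows the stated method.

```python
import math
from statistics import mean, stdev

def students_t(sample_a, sample_b):
    """Two-sample Student's t statistic (unpaired, equal-variance pooling)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    # pool the sample variances weighted by degrees of freedom
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean(sample_a) - mean(sample_b)) / se

# hypothetical fold-change readings for a control vs. treated group
control = [1.0, 1.1, 0.9, 1.05]
treated = [2.4, 2.6, 2.5, 2.3]
print(round(students_t(treated, control), 2))
```

The resulting t value would then be compared against the t distribution with na + nb - 2 degrees of freedom at the 95% level.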
Activation of p38 MAPK and ERK by ET-1 in Cardiac Myocytes-p38 MAPK is activated in neonatal rat ventricular myocytes (referred to hereafter as myocytes) by various extracellular stimuli such as the pro-inflammatory cytokines interleukin-1α and tumor necrosis factor-α (47). It has also been shown that the hypertrophic agonists ET-1 and phenylephrine (PE) stimulate p38 activity in myocytes (18). To establish the activation of p38 by ET-1 in the present study, we used an antibody selective for the dually phosphorylated form of p38 for Western blot analysis. Phosphorylation of p38 was rapid and peaked at 15 min (Fig. 1A). The kinetics of p38 activation were measured by immunocomplex kinase assay. Endogenous p38 was immunoprecipitated with anti-p38 antibody, and its activity was measured using MBP as a substrate. As shown in Fig. 1B, ET-1 induced a rapid increase in p38 activity, which was maximal at 15-20 min. The pyridinyl imidazole SB203580 has been shown to be a potent inhibitor of the p38α and p38β MAPKs (48). To verify the inhibition of p38 by SB203580 in cardiac myocytes, we applied an in vivo kinase assay that uses ATF2 as a substrate. Treatment with ET-1 (100 nM) for 24 h increased p38 activity 3.4-fold, and this activity was totally inhibited by the p38 inhibitor SB203580, which also decreased basal p38 MAPK activity by 50% (Fig. 1C). In contrast, treatment of myocytes with the potent MEK1 inhibitor PD98059 increased basal p38 activity but had no effect on ET-1-induced p38 activity (Fig. 1C).
As noted previously, ERK is activated by several GPCR agonists in cardiac myocytes (49). To examine the regulation of ERK by ET-1, we applied an assay that measures transfer of a phosphate group to a peptide highly selective for ERK (p42/44 MAPK). As reported previously (12,16), ET-1 at a concentration of 100 nM was a strong activator of p42/44. This response was maximal at 5 min and declined to almost basal level within 35 min (Fig. 2). The MEK1 inhibitor PD98059 (20 μM) reduced ET-1-induced ERK activation by 80%, measured at 5 min (data not shown).
Effect of ERK and p38 Inhibition on ET-1-induced Protein Synthesis-Activation of de novo protein synthesis, a major hallmark of cardiomyocyte hypertrophy, is strongly induced by ET-1 (15). To examine whether blockade of p38 MAPK or ERK with specific inhibitors is sufficient to attenuate this hypertrophic response, we examined incorporation of ³H-labeled leucine in cardiac myocytes. Treatment of myocytes with SB203580 or PD98059 at a dose of 20 μM had no effect on basal protein synthesis (Fig. 3). The ET-1-induced 2.5-fold increase in [³H]leucine incorporation was totally abolished by p38 MAPK inhibition with SB203580 (20 μM), whereas ERK inhibition with PD98059 (20 μM) had no effect.

(Fig. 1 legend, continued.) ...inhibitor SB203580, and ERK inhibitor PD98059 (final concentrations of 100 nM, 20 μM, and 20 μM, respectively) were added, and cells were incubated with or without 2 μg/ml tetracycline hydrochloride at 37°C and 5% CO₂ for 24 h, followed by luciferase and β-galactosidase assays. Reporter activity obtained in the presence of tetracycline was subtracted from the luciferase activity of cells without tetracycline treatment to confirm the specificity of the TetR-ATF2-dependent transactivation. Each bar represents results of 4-6 separate experiments obtained from three independent cell cultures. *, p < 0.05 compared with untreated control cells. **, p < 0.01 compared with untreated control cells. ##, p < 0.001 compared with ET-1 treated cells.
ET-1-induced GATA-4 DNA Binding Is Regulated by p38 MAPK-We have recently reported (41) that in vivo pressure overload activates GATA-4 binding to the BNP gene via ET-1 in the rat heart and that in vitro ET-1 treatment of cultured cardiac myocytes is sufficient to stimulate GATA-4 binding to the BNP gene. Therefore, we tested the hypothesis that one of the MAPKs (p38, ERK, or JNK) regulates ET-1-induced GATA-4 binding to the BNP gene. Like p38 activation, GATA activation occurred rapidly in response to ET-1 (100 nM): ET-1-stimulated GATA-4 binding to the BNP gene was detectable within 15 min and maximal at 60 min (Fig. 4A). GATA-4 binding remained upregulated for 3 h, and according to the supershift analysis GATA-4 was the major cardiac nuclear factor binding to the BNP GATA site (Fig. 4B). Mutation of either GATA binding site showed that GATA-4 binds equally well to both sites, but binding activity is reduced to about half of that observed with a probe having both sites intact (data not shown). These data agree with previous findings by Thuerauf et al. (50) indicating that at least one of the GATA sites is required to confer full GATA-4-inducible transcription.
We next pretreated the myocytes with SB203580 or PD98059, or transfected the cells with the dominant negative form of JNK, and then subjected the cells to ET-1 treatment. The induction of GATA-4 binding to the BNP gene was completely inhibited by the p38 inhibitor SB203580, and this inhibition was dose-dependent (Fig. 5A). Inhibition of the ERK pathway with PD98059 had no effect on the ET-1-induced increase in GATA-4 DNA binding, but basal GATA-4 binding activity was significantly decreased (Fig. 5B). Inhibition of the JNK pathway with the dominant negative form of JNK had no effect on basal or ET-1-induced GATA-4 DNA binding (data not shown). GATA-4 mRNA levels did not change in neonatal cardiac myocytes treated with ET-1 for 4 h (41), suggesting that the increase in GATA binding activity was due to posttranscriptional mechanisms.
p38 MAPK Increases DNA Binding Activity and Phosphorylation of GATA-4-To further elucidate the role of p38 MAPK in the induction of GATA-4 DNA binding, p38 protein levels were increased by transfecting the myocytes with a cytomegalovirus (CMV) promoter-driven plasmid overexpressing p38α. Similarly, the ERK and JNK pathways were studied using CMV promoter-driven plasmids overexpressing MEK1 and MEKK1. Myocytes transfected with pUC-19 were used as the control. p38 overexpression substantially evoked GATA-4 binding to the BNP gene compared with the control plasmid, and this was abolished by the p38 inhibitor SB203580 (Fig. 6A). ERK inhibition with PD98059 (20 μM) slightly decreased p38-induced GATA-4 binding to the BNP gene. MEK1 or MEKK1 overexpression had no effect on GATA-4 DNA binding (Fig. 6A).

(Fig. 4 legend, in part.) ...and 180 min at 37°C and 5% CO₂ and subjected to nuclear protein extraction and EMSA. ³²P-labeled double-stranded oligonucleotide corresponding to (Δ-68/-97) of the rat BNP promoter was used as a GATA factor-binding probe. The specificity of the effect of ET-1 on GATA factor DNA binding activity was confirmed by measuring Octamer-1 (Oct-1) DNA binding activity. In parallel with GATA binding, the same nuclear extracts were incubated with ³²P-labeled Oct-1 probe prior to EMSA. B, supershift (SS) analysis of the (Δ-68/-97) rat BNP promoter-binding GATA factor. Cardiac myocytes were incubated with 100 nM ET-1 for 60 min at 37°C and 5% CO₂, and nuclear proteins were extracted prior to EMSA. Supershift reactions were performed by incubating reaction mixtures with 1 μg of antibodies specific for GATA-4, GATA-5, and GATA-6, followed by addition of ³²P-labeled double-stranded oligonucleotide corresponding to (Δ-68/-97) of the rat BNP promoter. Similar results were obtained in three independent experiments.
It has recently been shown that serine residues of GATA-4 are phosphorylated in response to PE and that the phosphorylation is ERK-dependent (39). We examined whether the p38-induced increase in GATA-4 DNA binding activity was also due to changes in phosphorylation of GATA-4. Myocytes were transfected with plasmids overexpressing p38α, MEK1, MEKK1, or pUC19 (control). Subsequently, GATA-4 was immunoprecipitated from nuclear extracts, and Western blot analysis was performed. Immunoblotting with GATA-4 antibody showed that GATA-4 protein levels were unaffected (Fig. 6B). Overexpression of MEK1 and p38α exhibited a marked increase in serine phosphorylation of GATA-4, whereas overexpression of the JNK pathway (MEKK1) had no effect. p38-induced serine phosphorylation of GATA-4 was inhibited by the p38 inhibitor SB203580 and also by the ERK inhibitor PD98059, consistent with the finding that p38α-induced GATA-4 binding was also depressed with PD98059. It is, therefore, likely that various serine residues of GATA-4 are differently phosphorylated by ERK and p38 MAPK. Forced expression of p38, MEK1, or MEKK1 did not induce threonine phosphorylation (five different antibodies used) or tyrosine phosphorylation of GATA-4. These results indicate that in cardiac myocytes p38 MAPK and ERK preferentially activate serine phosphorylation of GATA-4.
MAPK Regulation of GATA-4 DNA Binding in COS-1 Cells - To further investigate the role of MAPKs in the regulation of GATA-4, we used COS-1 cells transiently expressing GATA-4 and cotransfected the cells with plasmids overexpressing p38α, MEK1, or JNK1. Control cells, cotransfected with pUC19, showed modest GATA-4 binding activity to the BNP promoter (Fig. 7). p38α overexpression resulted in a 4-fold increase in GATA-4 binding activity, whereas MEK1 or JNK1 overexpression had no effect on GATA-4 binding to the BNP gene promoter. Oct-1 binding activity was not affected by transient expression of the different plasmids.
p38 MAPK Regulation of a GATA-dependent Promoter - Because BNP expression is an important genetic marker of myocyte hypertrophy, we tested whether p38 overexpression would be sufficient to stimulate BNP promoter activity. Myocytes were cotransfected with (Δ-534bp/+4bp) BNP promoter plasmids and p38α expression plasmid or pUC19 plasmid (control). p38α overexpression stimulated a 4-fold increase in promoter activity (Fig. 8). The mutation of two proximal GATA binding sites at −91 and −80 bp of the (Δ-534bp/+4bp) BNP promoter abolished the p38-induced increase in promoter activity. Cotransfection with a plasmid expressing either MEK1 or MEKK1 induced both the BNP and the mutated constructs similarly (data not shown).

[Interleaved figure legend fragment: "After pretreatment with inhibitors, 100 nM ET-1 was added for 60 min, nuclear protein extraction was performed, and samples were subjected to EMSA. 32P-labeled double-stranded oligonucleotide corresponding to (Δ-68/-97) of rat BNP promoter was used as a GATA binding probe. In parallel, the same nuclear extracts were incubated with Oct-1 binding probe to confirm the specificity of the effects on GATA binding activity. Similar results were obtained in three independent experiments. **, p < 0.001 compared with untreated control cells. *, p < 0.05 compared with untreated control cells. #, p < 0.05 compared with ET-1-treated cells. ##, p < 0.001 compared with ET-1-treated cells."]

DISCUSSION

MAPKs, ERK, JNK, and p38, regulate a broad range of biological functions in response to extracellular stimuli. Each MAPK pathway is a complex formation, which provides multiple alternatives to distinguish between different signals. On the other hand, cross-talk between MAPK pathways is known to exist at several levels, i.e. MEKK1 (an upstream kinase of the JNK pathway) activating both ERK and p38 MAPK pathways (21,51), therefore influencing the interpretation of the results when studying the specific cellular roles of MAPKs. In the present study, we investigated the role of MAPK signaling in hypertrophic gene expression induced by ET-1 in cardiac myocytes. ET-1 rapidly activated p38 MAPK, in agreement with several previous papers suggesting involvement of p38 MAPK in the ET-1-induced hypertrophic response (16,18). We also found that ET-1-induced de novo protein synthesis of neonatal rat ventricular myocytes was inhibited by pharmacological blockade of p38 MAPK (SB203580), but not with blockade of ERK signaling (PD98059). This finding disagrees with the previous results showing that SB203580, which blocks the activity of p38 by binding to the ATP binding site of p38 MAPK (52), had no effect on ET-1-induced protein synthesis or sarcomere organization (19). The reason for these discrepant findings remains to be established but may be related to differences under "Experimental Procedures," such as the duration of experiments and inhibitor concentration.
As reported previously (12,16), p42/44 MAPK was also rapidly activated by ET-1. Inhibition of the ERK pathway with PD98059 has been proposed to inhibit also p38 to some extent (18), but we found no inhibition of ET-1-induced p38 activity by PD98059 at the concentration of 20 μM (Fig. 1C). On the other hand, ERK inhibition induced basal p38 activation about 2-fold, but it had no additional effect on ET-1-induced p38 activity. Previously, a higher dose of PD98059 (50 μM) has been shown to increase basal levels of phosphorylated p38 MAPK (18). Furthermore, in a recent study constitutively active MEK1 (an upstream kinase of the ERK pathway) was shown to inhibit p38 MAPK activity and p38-induced phosphorylation of TATA-binding protein (53). This inhibitory response was suggested to be mediated by MAPK phosphatase-1 (MKP-1), which has been shown to block ET-1-induced activation of the MAPKs (53,54). Studies using a MEK1/2 inhibitor or overexpression of a dominant negative form of MEK1 have shown that ERK is necessary for the stimulation of MKP-1 mRNA expression (55). Therefore, blockade of ERK for 24 h in the present experiments is likely to inhibit MKP-1 expression and thus result in increased p38 activity. On the other hand, hypertrophic agonists have been shown to activate MKP-1 through mechanisms involving Ca2+, protein kinase C, and diacylglycerol (56,57). Therefore, the lack of an additive effect of PD98059 on ET-1-induced p38 activity is likely to result from ET-1-induced activation of MKP-1. Another mechanism involved may be the substrate specificity of MKP-1, because it has been shown to preferentially block the activation of p38 MAPK (58).

[Interleaved figure legend fragment: "... and JNK1 (pCMV-JNK1) using FuGENE 6 reagent. Control cells were transfected similarly with GATA-4 expression plasmid and pUC19 plasmid. After 48 h nuclear extraction was performed, and samples were subjected to EMSA. 32P-labeled double-stranded oligonucleotide corresponding to (Δ-68/-97) of rat BNP promoter was used as a GATA binding probe. In parallel, the same nuclear extracts were incubated with Oct-1 binding probe to confirm the specificity of the effects on GATA binding activity. Similar results were obtained in three independent experiments. **, p < 0.001 compared with control cells."]

FIG. 8. The effect of forced expression of p38 MAPK on GATA-dependent rat BNP promoter activation. Cardiac myocytes were transfected with p38α MAPK overexpression plasmid (pCMV-p38α-HA) or pUC19 plasmid (final concentrations of 0.1 μg/ml), rat (Δ-534/+4) BNP promoter or Gmut-(Δ-534/+4) BNP promoter linked to luciferase expression plasmid (final concentrations of 0.9 μg/ml), and 0.5 μg/ml RSV-promoter-driven β-galactosidase plasmid. After 48 h of incubation at 37°C and 5% CO2, cells were lysed and cell lysates were subjected to luciferase and β-galactosidase assays. Each bar represents 12 separate experiments from three independent cell cultures. *, p < 0.05 compared with basal promoter activity. #, p < 0.05 compared with p38-induced (Δ-534/+4) BNP promoter activity.
A large number of transcription factors, including GATA-1-4 (39,59-61), have been shown to exist within cells as phosphoproteins. The GATA-4 protein has at least seven potential sites for serine phosphorylation by MAPKs, and the phosphorylation was increased after α1-agonist stimulation via the ERK pathway (39). A novel finding in our studies is the differential regulation of GATA-4 binding activity by MAPKs. The present results indicate that p38 MAPK and ERK are involved in the regulation of GATA-4 binding activity. Blockade of the ERK pathway, although increasing p38 MAPK activity, led to decreased phosphorylation of serine residues in GATA-4 and decreased basal binding activity, but it had no effect on the ET-1-induced increase in GATA-4 DNA binding. ERK overexpression led to phosphorylation of the serine residues of GATA-4 protein, but it was not sufficient to increase GATA-4 binding to the BNP gene. Blockade of the p38 pathway similarly decreased phosphorylation of serine residues in GATA-4 and, in contrast to ERK inhibition, totally abolished ET-1-induced GATA-4 binding to the BNP gene. It is remarkable that p38 overexpression not only phosphorylated serine residues in GATA-4 protein but also increased GATA-4 binding to the BNP promoter. Interestingly, the p38-induced increase, but not the ET-1-induced increase, in GATA-4 DNA binding activity was partially inhibited by the MEK1 inhibitor PD98059. This is likely to result from other mechanisms induced by ET-1, such as other kinases or transcription factors, which can compensate for the inhibited ERK pathway. Studies on MAPKs in COS-1 cells transiently expressing GATA-4 further supported the essential role of p38 MAPK in the regulation of GATA-4 DNA binding activity. Together, our findings indicate preferential but distinct roles of the ERK and p38 MAPK signaling pathways in the regulation of GATA-4 transcription factor binding activity.
The present results show that blockade of p38 MAPK pathway abolishes hypertrophic agonist-induced GATA-4 binding to BNP gene, whereas inhibition of ERK pathway only disrupts GATA-4 binding activity in nonstimulated myocytes.
In addition to the increase in DNA binding activity, the functional consequences of GATA-4 phosphorylation may include changes in cellular localization and transcriptional activation. To define the role of GATA-4 binding in BNP gene expression, we introduced site-directed mutations into two adjacent GATA sites at −91 and −80 bp of the proximal BNP promoter (Δ-534bp/+4bp). Previously, these GATA binding sites have been shown to direct cardiac myocyte-specific expression of the rat BNP promoter and regulate basal promoter activity (34,50). We found that p38α overexpression was potent in activating the proximal BNP promoter, but the mutation of the GATA sites abolished p38-induced promoter activity. In contrast, overexpression of either MEK1 or MEKK1 activated both the proximal BNP promoter and the mutated promoter. These results demonstrate that, in the context of the proximal rat BNP promoter, p38- but not ERK-induced transcription is dependent upon a GATA binding site in the promoter.
The precise role of the third member of the MAPK family, JNK, in the hypertrophic response is even more controversial because of the lack of a specific inhibitor for JNK. In cardiac myocytes, a dominant-negative JNK construct has been shown to inhibit PE-induced ANP expression, and some studies also find a functional JNK pathway essential for the hypertrophic response to ET-1 (19,51). In our studies we found that inhibition of the JNK pathway with a dominant negative mutant of JNK1 had no effect on basal or hypertrophic agonist-induced GATA-4 DNA binding ability. On the other hand, overexpression of MEKK1, an upstream kinase of JNK, induced proximal BNP promoter activity, but the induction was independent of GATA-4 binding in the promoter.
The mechanisms involved in GATA-4-induced tissue-specific gene expression are not well understood but may involve interactions between GATA-4 and other cell-restricted transcription factors (27,35,62). The ANP promoter is a known downstream target for the cardiac-specific transcription factor GATA-4 and for Nkx-2.5, which bind to adjacent sites in the ANP promoter and synergistically activate the ANP gene (35,63). There is also evidence from an earlier study (64) that MEF2 proteins are recruited by GATA-4 to synergistically activate the ANP and αMHC genes. Interaction of friend of GATA-2 (FOG-2) with GATA-4 has also been confirmed (65,66): FOG-2 repressed activation of several GATA-4-dependent promoters, including ANP, BNP, and cTnC (65,67). Nkx-2.5 and NF-AT3 (nuclear factor of activated T lymphocytes), in turn, bind to the C-terminal zinc finger of GATA-4, resulting in synergistic transcriptional activation (35,38). The mechanisms by which GATA-4 increases or represses transcriptional activity with its cofactors remain unclear, but site-specific phosphorylation of GATA-4 protein by MAPKs may affect the interactions of GATA-4 with its cofactors in cardiac myocytes. The GATA-4 protein harbors a strong MAPK recognition sequence (Pro-Val-Ser-Pro) at residues 102-105 and multiple Ser-Pro sequences (68). Given that p38-mediated Ser phosphorylation of GATA-4 is followed by increased DNA binding activity, mapping of these Ser phosphorylation sites of GATA-4 will be necessary to fully understand the regulation of GATA-4-mediated gene expression.
The heart adapts to increased demands for cardiac work by increasing muscle mass through the initiation of a hypertrophic response. Hypertrophic stimuli reach the nucleus via multiple signaling pathways within cardiac myocytes and elicit changes in gene expression. p38 MAPK has been implicated in cardiomyocyte hypertrophy, but the exact mechanisms are not well understood (18,21,22). The finding that both activators of p38 MAPK, MKK3 and MKK6, are present in the nucleus (69) supports the role of p38 in the regulation of transcription factors. Our present findings (summarized in Fig. 9) demonstrate that activation of p38 MAPK is necessary for hypertrophic agonist-induced GATA-4 binding to the BNP gene and sufficient for GATA-dependent BNP gene expression. Several studies have shown an interaction between GATA-4 and other transcription factors and their ability to direct cardiac gene expression. It will be interesting to determine whether the p38 MAPK-induced phosphorylation of GATA-4 will affect specific interactions with other transcription factors or cofactors. Modulations such as these could profoundly alter the cellular transcriptional program elicited by GATA factors and thus ultimately regulate the myocyte hypertrophic response.
Does Dexmedetomidine as a Neuraxial Adjuvant Facilitate Better Anesthesia and Analgesia? A Systematic Review and Meta-Analysis
Background Neuraxial application of dexmedetomidine (DEX) as an adjuvant analgesic has been investigated in several randomized controlled trials (RCTs) but has not been approved, because the efficacy and safety findings of these RCTs are inconsistent. We performed this meta-analysis to assess the efficacy and safety of neuraxial DEX as a local anesthetic (LA) adjuvant. Methods We searched the PubMed, PsycINFO, Scopus, EMBASE, and CENTRAL databases from inception to June 2013 for RCTs that investigated the analgesic efficacy and safety of neuraxial DEX as an LA adjuvant. Effects were summarized using standardized mean differences (SMDs), weighted mean differences (WMDs), or odds ratios (ORs) with a suitable effect model. The primary outcomes were postoperative pain intensity and analgesic duration, bradycardia, and hypotension. Results Sixteen RCTs involving 1092 participants were included. Neuraxial DEX significantly decreased postoperative pain intensity (SMD, −1.29; 95% confidence interval (CI), −1.70 to −0.89; P<0.00001), prolonged analgesic duration (WMD, 6.93 hours; 95% CI, 5.23 to 8.62; P<0.00001), and increased the risk of bradycardia (OR, 2.68; 95% CI, 1.18 to 6.10; P = 0.02). No evidence showed that neuraxial DEX increased the risk of other adverse events, such as hypotension (OR, 1.54; 95% CI, 0.83 to 2.85; P = 0.17). Additionally, neuraxial DEX was associated with beneficial alterations in postoperative sedation scores and number of analgesic requirements, sensory and motor block characteristics, and intra-operative hemodynamics. Conclusion Neuraxial DEX is a favorable LA adjuvant with better and longer analgesia. The greatest concern is bradycardia. Further large-sample trials with strict design and a focus on long-term outcomes are needed.
Introduction
Neuraxial anesthesia and analgesia provide a solid analgesic effect by inhibiting nociceptive transmission from the peripheral to the central nervous system [1,2]. However, their analgesic advantages may be limited by the short duration of action of current local anesthetics (LAs) and, especially, may be weakened during postoperative pain control [3]. The analgesic duration can be prolonged by increasing the dose of LA; however, this also increases the risk of systemic and potential neurotoxicity. An adjuvant analgesic strategy is therefore an alternative that prolongs the analgesic duration and decreases the potential risk of side effects by reducing the dose of the individual LA. Recently, several neuraxial adjuvants, including clonidine [4], opioids [5-7], dexamethasone [8], ketamine [9], magnesium [10], and midazolam [11], have demonstrated a synergistic analgesic effect with LAs with varying degrees of success.
Dexmedetomidine (DEX) is a clinically used anesthetic and a highly selective α2-adrenergic receptor (α2AR) agonist. Intravenous DEX exhibits synergism with regional anesthesia, facilitates postoperative pain control [12,13], and has been accepted as a clinical anesthetic strategy. However, DEX has not been approved by the US Food and Drug Administration (FDA) for neuraxial administration. Preclinical evidence shows that neuraxial DEX produces antinociception by inhibiting the activation of spinal microglia and astrocytes [14,15], decreasing the noxious stimuli-evoked release of nociceptive substances [16], and further interrupting spinal neuron-glia cross talk and regulating nociceptive transmission under chronic pain conditions [17]. Thus, DEX might be an interesting adjuvant for neuraxial anesthesia and analgesia to decrease intra- and postoperative anesthetic consumption and prolong the postoperative analgesic duration, but the potentially increased risk of bradycardia, hypotension, and neurotoxicity should be taken into consideration in clinical settings. One recent meta-analysis reported the facilitatory effects of perineural DEX on neuraxial and peripheral nerve block [18], and another suggested beneficial effects of intravenous and intrathecal DEX in spinal anesthesia [19]. However, the results from these two meta-analyses might be biased, because (1) the pooled results were not based on all the currently available RCTs on neuraxial DEX; (2) only the primary outcomes of the sensory and motor block durations were pooled for neuraxial DEX; (3) the analgesic and side effects of neuraxial DEX as an adjunct to LA have not been carefully investigated; and (4) no effort was made to explore the significant heterogeneity within the RCTs.
Thus, we performed the current systematic review and meta-analysis focusing on postoperative pain outcomes (pain intensity and analgesic duration) and major adverse events (bradycardia and hypotension) of neuraxial DEX as an adjuvant compared with LA alone.
Methods
We performed the current meta-analysis based on the QUOROM (Quality of Reporting of Meta-analyses) guidelines [20] and the recommendations of the Cochrane Collaboration [21].
Literature Search
The electronic databases screened were MEDLINE (1990 to June 2013), PsycINFO (1990
Study Selection
Selected studies met the following criteria: 1. Any randomized controlled trial (RCT), controlled clinical trial, or open label trial (OLT) designed with at least two groups, one control group receiving pharmacological placebo (saline) in combination with one LA and the other group receiving DEX in combination with the same LA; 2. Neuraxial DEX was delivered via any intravertebral route, such as the epidural, intrathecal, or caudal route, in adults and children of any sex undergoing elective surgical procedures; 3. The trial reported at least one of the primary or secondary outcomes mentioned below.
Outcome Measurement
Primary outcomes were postoperative pain intensity within 24 hours, postoperative analgesia duration ("time to first analgesic requirement", in hours), and major adverse events, including bradycardia and hypotension. Postoperative pain scores measured by the verbal rating scale (VRS), visual analog scale (VAS), pediatric observational face, legs, activity, cry, consolability (FLACC) pain scale, or children's and infants' postoperative pain scale (CHIPPS) were pooled to evaluate postoperative pain intensity. Four postoperative time points were pooled to assess pain: 2 to 4 h, 6 to 8 h, 12 h, and 24 h. Different dosages of DEX from the included studies were pooled, and the dose effect was not stratified in the current study.
Data Extraction
Characteristics of patients (number of patients, American Society of Anesthesiologists (ASA) rating, age, gender, type of surgery and anesthesia, body mass index (BMI)) and trial design (intervention, follow-up time, completion rate, and reported outcomes) were also recorded. If the data mentioned above were unavailable in the article, the corresponding authors were contacted for the missing information. If the outcomes in the published studies were presented only graphically without any description of absolute values, ImageJ software Version 2.1.4.7 (National Institutes of Health, USA, http://imagej.nih.gov) was used to recover the related data when we could not obtain the original data from the authors.
All data were independently extracted using a standard data collection form by 2 reviewers (HH Wu and HT Wang), and the collected data were then checked and entered into Review Manager analysis software (RevMan) Version 5.2.7 using the double-entry system by the other 2 reviewers (JJ Jin and GB Cui). All discrepancies were rechecked, and consensus was reached by discussion with a third author (KC Zhou). A record of reasons for excluding studies was kept. Cohen's kappa was applied for calculating inter-rater agreement.
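The Cohen's kappa mentioned above corrects the observed agreement between the two extractors for the agreement expected by chance. A minimal sketch in Python (the agreement counts below are illustrative, not taken from this review):

```python
def cohens_kappa(matrix):
    """Cohen's kappa for two raters from a square agreement matrix.

    matrix[i][j] = number of items that rater A put in category i
    and rater B put in category j.
    """
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    # Observed agreement: proportion of items on the diagonal.
    p_o = sum(matrix[i][i] for i in range(k)) / n
    # Chance agreement expected from the marginal totals.
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions by two reviewers: 20 joint
# inclusions, 15 joint exclusions, 15 disagreements -> kappa = 0.4.
m = [[20, 5], [10, 15]]
print(round(cohens_kappa(m), 2))
```

A kappa of 0.79, as reported here, corresponds to substantial inter-rater agreement on most conventional interpretation scales.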
Assessment of Study Quality
A critical evaluation of the quality of the included studies was performed by 2 reviewers (KC Zhou and Y Chen) using the 5-point Jadad scale [22]. The scale consists of the following 5 items: "Was the study described as randomized? (1)", "Was the method used to generate the sequence of randomization described and appropriate (random numbers, computer-generated, etc)? (1)", "Was the study described as double-blind? (1)", "Was the method of double-blinding described and appropriate (identical placebo, active placebo, dummy, etc)? (1)", and "Was there a description of withdrawals and drop-outs? (1)". A score of 4 to 5 was considered to indicate high methodological quality.
Assessment of Risk of Bias
Two reviewers (JJ Jin and GB Cui) independently evaluated the risk of bias according to the recommendations from the Cochrane collaboration [19,23]. The main categories consisted of random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, and selective reporting. Each domain was assessed to ''high risk'', ''low risk'', or ''unclear''.
Assessment of Heterogeneity and Publication Bias
We pooled all studies reporting the same primary or secondary outcome. Study heterogeneity at the overall level was then investigated using a χ2 test and by calculating the I2 statistic [19,24]. When I2 was 50% or lower, heterogeneity was rated as low and the data were pooled with a fixed effect model. When I2 was over 50%, heterogeneity was rated as significant and the data were pooled with a random effects model [24].
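The I2 decision rule described above is derived from Cochran's Q, the weighted sum of squared deviations of the study estimates from the fixed-effect pooled estimate. A minimal sketch of the calculation (the study estimates and standard errors below are illustrative, not from the included trials):

```python
def heterogeneity(effects, ses):
    """Cochran's Q and the I^2 statistic for a set of study estimates.

    effects : per-study effect estimates (e.g. mean differences)
    ses     : their standard errors
    """
    w = [1.0 / se ** 2 for se in ses]  # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # I^2: proportion of total variation beyond chance, floored at 0.
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([1.0, 1.2, 0.8], [0.1, 0.1, 0.1])
model = "random effects" if i2 > 50 else "fixed effect"
print(q, i2, model)  # Q = 8.0, I^2 = 75.0 -> random effects
```

With I2 = 75% the sketch picks a random effects model, mirroring the 50% threshold used in this review.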
Subgroup analyses were used to explore the significant heterogeneity according to different routes of DEX delivery (epidural, intrathecal, and caudal route), different doses of DEX (≤5 μg and >5 μg), and different time points after DEX administration (2-4 h, 6-8 h, 12 h, and 24 h; or ≤60 min and >60 min). Furthermore, meta-regression was used to identify the origin of heterogeneity, such as the different routes, doses, time points, and study qualities.
We performed sensitivity analyses on the primary outcomes by excluding studies of low quality or high risk of bias, and investigated potential publication bias using a graphical method (Begg's funnel plot) [25] and a statistical test (Egger's test) [26].
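Egger's test regresses the standardized effect (estimate/SE) on precision (1/SE); an intercept far from zero suggests funnel-plot asymmetry and possible small-study bias. A minimal sketch of the regression step only (the full test also requires a t-test on the intercept, omitted here; the data are illustrative):

```python
def egger_intercept(effects, ses):
    """Intercept and slope of the Egger regression:
    effect/SE ~ a + b * (1/SE).
    A non-zero intercept a indicates small-study (publication) bias;
    the slope b estimates the underlying effect size.
    """
    x = [1.0 / se for se in ses]                  # precision
    y = [e / se for e, se in zip(effects, ses)]   # standardized effect
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope

# Perfectly symmetric case: every study estimates the same effect 0.5,
# so the intercept is 0 and the slope recovers the effect size.
a, b = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
print(a, b)
```

In practice the review ran this test in standard software rather than by hand; the sketch only shows where the "intercept" that Egger's test inspects comes from.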
Statistical Analysis
Continuous variables were pooled using either the standardized (SMD) or weighted mean difference (WMD) with their 95% confidence intervals (CIs). If the 95% CI covered the value of 0, we considered the difference between the DEX and placebo groups not statistically significant. SMD was calculated for postoperative pain intensity and postoperative sedation scores, because they were measured with different scales. WMD was calculated for postoperative analgesia duration, sensory and motor block characteristics, and intra-operative hemodynamics, because they were measured on the same scale. If these continuous data were reported only as mean or median with standard error or range, we converted them into mean with standard deviation (SD) as previously reported [27]. Binary variables (the number of postoperative analgesic requirements, and the primary and secondary adverse events) were pooled using odds ratios (ORs) with 95% CIs. If the 95% CI covered the value of 1, we considered the difference between the DEX and placebo groups not statistically significant. For the number of postoperative analgesic requirements and the adverse events with statistically significant differences between the DEX and placebo groups, the number needed to treat (NNT) or number needed to harm (NNH) was further calculated. The meta-analyses were performed with RevMan 5.2.7 according to the Cochrane Handbook for Systematic Reviews of Interventions [24] and further confirmed using Stata 12.0 software (Stata Corporation, USA).
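For the binary outcomes, pooling is done on the log-OR scale, and converting a pooled OR into an NNT/NNH requires an assumed control event rate. A hedged sketch of both steps (fixed-effect inverse-variance pooling; the study ORs, CIs, and control event rate below are illustrative, not taken from the included trials):

```python
import math

Z = 1.96  # two-sided 95% normal quantile

def pool_or(ors, ci_low, ci_high):
    """Fixed-effect inverse-variance pooling of odds ratios.

    The SE of each log-OR is recovered from its 95% CI:
    se = (ln(hi) - ln(lo)) / (2 * 1.96).
    Returns the pooled OR with its 95% CI.
    """
    logs = [math.log(o) for o in ors]
    ses = [(math.log(h) - math.log(l)) / (2 * Z)
           for l, h in zip(ci_low, ci_high)]
    w = [1.0 / se ** 2 for se in ses]
    pooled = sum(wi * lg for wi, lg in zip(w, logs)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return (math.exp(pooled),
            math.exp(pooled - Z * se_pooled),
            math.exp(pooled + Z * se_pooled))

def nnh_from_or(or_pooled, cer):
    """Number needed to harm from a pooled OR and an assumed
    control event rate (CER)."""
    eer = or_pooled * cer / (1 - cer + or_pooled * cer)
    return 1.0 / abs(eer - cer)

# Two identical hypothetical studies pool back to themselves.
o, lo, hi = pool_or([2.0, 2.0], [1.0, 1.0], [4.0, 4.0])
print(o)  # 2.0

# With the review's bradycardia OR of 2.68 and an assumed 5% control
# event rate, the NNH works out to roughly 14 patients.
print(nnh_from_or(2.68, 0.05))
```

The 5% control event rate is an assumption for illustration only; the NNH reported by such a review depends on the baseline risk chosen.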
Search Results
The literature search yielded 253 citations. Initially, 46 records were removed as duplicate publications. On more detailed review, an additional 174 papers were excluded for the following reasons: pre-clinical experiments, comments, editorials, case reports, reviews, and unavailable data. Seventeen more papers were further excluded because DEX was given via a systemic or nasal route, a parallel placebo control was lacking, or the research was retrospective. Finally, the remaining 16 studies [28-43] with available data met our selection criteria and were included in the meta-analysis. The flow diagram of the search strategy and study selection is presented in Figure 1.
Characteristics of the Included Studies
All 16 included studies [28-43] were designed as prospective, randomized, double-blinded, and placebo-controlled trials, and their main characteristics are presented in Table S1. Patients investigated in 4 trials [29,31,35,41] were children, in 1 trial [38] were full-term parturients, and in 11 trials [28,30,32-34,36,37,39,40,42,43] were adults of any sex. DEX delivery via the epidural route was reported in 4 trials [30,36-38], via the intrathecal route in 8 trials [28,32-34,39,40,42,43], and via the caudal route in 4 trials [29,31,35,41]. The doses of DEX varied from 1 to 2 μg/kg via the epidural and caudal routes, and from 3 to 15 μg via the intrathecal route. In total, 470 patients were randomly assigned to receive neuraxial administration of DEX combined with bupivacaine or ropivacaine, and 401 patients were assigned to placebo groups receiving neuraxial administration of saline and bupivacaine or ropivacaine.
Methodological Quality and Risk of Bias
The Jadad score of each included study is presented in Table S1, and the median quality score was 4 (range, 3 to 5). Inter-rater reliability for this assessment was κ = 0.79.
Meta-Analyses of Primary Outcomes
Postoperative pain intensity. Results are presented in Figure 2 and Table 1. Postoperative pain intensity within 24 hours was investigated in 6 trials [29-31,34,41,42]. The pooled analysis revealed that neuraxial DEX was associated with a significant reduction of postoperative pain intensity within 24 hours compared with the placebo group (SMD, −1.29; 95% CI, −1.70 to −0.89; P < 0.00001). The I2 value of 92% indicated significant heterogeneity.
Further subgroup analyses according to different routes and doses of neuraxial DEX, as well as time periods during postoperative care did not affect the pooled results, and all of these analyses were also influenced by heterogeneity.
Further subgroup analyses according to different routes and doses of neuraxial DEX did not affect the pooled results, and all of these analyses were also influenced by heterogeneity.
Meta-Analyses of Secondary Outcomes
The number of postoperative analgesic requirements. Results are presented in Figure 5. The number of postoperative analgesic requirements was investigated in 5 trials [28,35,36,41,42]. The pooled analysis revealed that neuraxial DEX was associated with a significant reduction in the number of postoperative analgesic requirements compared with the placebo group (OR, 0.13; 95% CI, 0.07 to 0.26; P < 0.00001) without any heterogeneity (I2 = 0%).
Postoperative sedation scores. Results are presented in Table 2. Postoperative sedation scores within 24 hours were investigated in 3 trials [30,35,41]. The pooled analysis revealed that neuraxial DEX was associated with a significant increase of postoperative sedation within 24 hours compared with the placebo group (SMD, 0.96; 95% CI, 0.16 to 1.76; P = 0.02). The I2 value of 94% indicated significant heterogeneity.
Further subgroup analysis investigating different routes of neuraxial DEX revealed that there was no difference between epidural DEX and placebo group in postoperative sedation scores. In contrast, a significantly increased postoperative sedation level was associated with caudal DEX. Another subgroup analysis investigating different time periods showed that there was no difference between neuraxial DEX and placebo group in postoperative sedation scores, and all of these analyses were also influenced by heterogeneity.
Further subgroup analysis investigating different routes of neuraxial DEX revealed that there was no difference between epidural DEX and the placebo group in onset of sensory block. In contrast, a significantly faster onset of sensory block was associated with intrathecal DEX. Another subgroup analysis investigating different doses did not affect the pooled results, and all of these analyses were also influenced by heterogeneity. All subgroup analyses investigating different routes and doses of neuraxial DEX, as well as regression dermatomes of sensory block, did not affect the pooled results for duration of sensory block, and all of these analyses were also influenced by heterogeneity.
Motor block characteristics. Results are presented in Table 4. The onset and duration of motor block were investigated in 3 [28,39,40] and 7 trials [28,33,35,36,39,40,43], respectively. The pooled analysis revealed no difference between neuraxial DEX and the placebo group in onset of motor block (WMD, −3.26 minutes; 95% CI, −6.35 to 0.02; P = 0.05), and a significantly prolonged duration of motor block with neuraxial DEX (WMD, 103.37 minutes; 95% CI, 57.03 to 149.71; P < 0.0001). The I2 values of 95% and 97% indicated significant heterogeneity in onset and duration of motor block, respectively. All further subgroup analyses investigating different routes and doses of neuraxial DEX did not affect the pooled results for onset of motor block.
Further subgroup analysis investigating different routes of neuraxial DEX revealed no difference between caudal DEX and the placebo group in duration of motor block, whereas a significantly prolonged duration of motor block was associated with both epidural and intrathecal DEX. Another subgroup analysis investigating different doses did not affect the pooled results, and all of these analyses were also influenced by heterogeneity.
Intra-operative hemodynamics. Results were presented in Table 5. The intra-operative HR and MAP were investigated in 7 [28,29,31,33–35,42] and 6 trials [28,29,31,33,35,42], respectively. The pooled analysis revealed that neuraxial DEX was associated with a significantly increased HR (WMD, 1.39 bpm; 95% CI, 0.29 to 2.49; P = 0.01) and decreased MAP (WMD, −1.93 mmHg; 95% CI, −3.23 to −0.64; P = 0.004) compared with the placebo group. The I² values of 94% and 68% indicated significant heterogeneity in HR and MAP, respectively. Further subgroup analysis investigating different time periods of neuraxial DEX revealed that a significantly increased HR and decreased MAP were associated with neuraxial DEX within 60 minutes. In contrast, a significantly decreased HR and MAP were associated with neuraxial DEX beyond 60 minutes. Subgroup analysis investigating different routes of neuraxial DEX revealed that a significantly increased HR and decreased MAP were associated with intrathecal DEX. In contrast, no difference between caudal DEX and the placebo group was detected in either HR or MAP. Subgroup analysis investigating different doses of neuraxial DEX revealed that a slight change in HR and a significantly decreased MAP were associated with a small dose of DEX (≤5 μg). In contrast, a significantly increased HR and a slight change in MAP were associated with a high dose of DEX (>5 μg).
Test of Heterogeneity
Significant heterogeneity was carefully considered in 2 primary outcomes (postoperative pain intensity and analgesia duration) and 4 secondary outcomes.
Subgroup analyses. Subgroup analyses were performed to identify the potential clinical heterogeneity according to different routes (epidural, intrathecal, and caudal) and doses (≤5 μg and >5 μg) of neuraxial DEX, different time periods after neuraxial DEX administration (2–4 h, 4–6 h, 12 h, and 24 h, or ≤60 min and >60 min), and different regression dermatomes of block level (≤2 dermatomes and >2 dermatomes). However, the potential clinical heterogeneity failed to explain the study heterogeneity.
Meta-regression. Meta-regression was performed for postoperative pain intensity and analgesia duration to identify the potential sources of methodological and clinical heterogeneity. Evidence showed that neither route (adjusted P = 0.97 for postoperative pain intensity and 0.91 for postoperative analgesia duration), dose (adjusted P = 0.95 and 0.96, respectively), time period (adjusted P = 0.41 for postoperative pain intensity), nor quality (adjusted P = 0.28 and 0.71, respectively) contributed to the study heterogeneity.
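As a compact illustration of the heterogeneity statistic reported throughout these results, the snippet below computes a fixed-effect inverse-variance pooled estimate, Cochran's Q, and Higgins' I². The study effects and variances are hypothetical numbers chosen only to show the arithmetic; they are not the review's data.

```python
# Minimal sketch of inverse-variance pooling and Higgins' I^2.
# The four study effects/variances below are hypothetical, for illustration.
effects = [0.8, 1.2, 0.4, 1.6]        # per-study effect estimates (e.g., SMDs)
variances = [0.04, 0.05, 0.03, 0.06]  # per-study sampling variances

weights = [1.0 / v for v in variances]            # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of study effects from the pooled one
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # I^2 as a percentage
```

With these toy inputs I² comes out near 84%, i.e. the kind of "significant heterogeneity" flagged above whenever I² exceeds roughly 50%.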
Sensitivity Analyses
Sensitivity analyses were performed for 4 primary outcomes by excluding studies with low quality or high risk of bias. None of the meta-analysis results were affected by the low quality or high risk of bias of the studies (Figure S1).
Publication Bias
Publication bias was found in one primary outcome (postoperative pain intensity) according to both Begg's funnel plot (Figure S2) and Egger's test (Table S3).
Discussion
The current systematic review and meta-analysis indicated that DEX as a neuraxial adjuvant was associated with a reduction in postoperative pain intensity within 24 hours. The mean duration of postoperative analgesia was prolonged by approximately 7 hours. Additionally, neuraxial DEX was associated with a significantly quicker onset of sensory block and prolonged duration of sensory and motor block. Intra-operative HR and MAP were also significantly affected by neuraxial DEX. No evidence showed that neuraxial DEX significantly increased the risk of drug-related adverse events, such as hypotension, nausea, and vomiting, except for bradycardia.
Several reviews highlight the potential role of α2-adrenoceptor (α2AR) agonists for postoperative pain control [44–46]. DEX, with its more favorable pharmacokinetics and pharmacodynamics compared with clonidine [47], might be an interesting option for neuraxial anesthesia and analgesia [48]. Administered as an adjuvant, the synergistic analgesic effect of neuraxial DEX might be attributed to its highly selective affinity for the spinal α2AR, which is approximately 8–10 times higher than that of clonidine [47]. No study has compared the dose equivalence and peri-operative cost between DEX and clonidine, but previous studies have stated that the dose of clonidine is 1.5–4 times greater than that of DEX when delivered via the epidural route [37,49]. Since the analgesic effect of DEX is mainly mediated via the α2AR [50], cardiorespiratory adverse events mediated via the α1AR might be minimized. Several included studies reported that neuraxial DEX was associated with a lower inspired inhalation anesthetic concentration [31,35,36] and bispectral index (BIS) [30] compared with the placebo group, indicating that synergism between neuraxial DEX and LAs yielded anesthetic sparing and improved anesthesia.
The pooled results from our meta-analysis showed that adjunct neuraxial DEX was associated with significantly lower pain intensity within 24 hours postoperatively compared with the placebo group. The average decrease in pain intensity was approximately 1.3 on a VRS, VAS, FLACC, or CHIPPS scale, indicating mild to moderate postoperative pain relief. Previous meta-analyses [18,19] did not pool this part of the results because of the limited number of included studies and significant clinical heterogeneity (DEX via neuraxial vs. peripheral, or intravenous vs. intrathecal). We also demonstrated that the duration of postoperative analgesia in the neuraxial DEX group was prolonged by approximately 7 hours, longer than in previous studies (approximately 4 and 5 hours, respectively) [18,19]. This discrepancy derived from clinical heterogeneity. Our further subgroup analysis revealed a similarly prolonged duration of postoperative analgesia in the intrathecal DEX group (approximately 4.2 hours). Caudal DEX tended to prolong analgesic duration more than epidural or intrathecal DEX (10 vs. 2 vs. 4 hours); however, these pooled results might be weakened by clinical heterogeneity, since caudal anesthesia in the 4 included studies was performed exclusively in children. The pooled results from our meta-analysis also showed that adjunct neuraxial DEX was associated with a significantly quicker onset of sensory and motor block and a prolonged duration of sensory block compared with the placebo group, similar to previous reports [18,19]. Subgroup analyses revealed that clinical heterogeneity, such as the different routes and doses of DEX, might influence the results. Although most of the subgroup results did not reach a statistically significant difference compared with the pooled ones, the prolonged block times might be considered clinically different (e.g., an average prolongation of duration of sensory block of approximately 43 minutes for ≤5 μg DEX vs. 102 minutes for >5 μg DEX; an average prolongation of duration of motor block of approximately 90 minutes for epidural DEX vs. 120 minutes for intrathecal DEX vs. 8 minutes for caudal DEX).
The pooled results from our meta-analysis showed that adjunct neuraxial DEX was associated with a significant change in intra-operative hemodynamics compared with the placebo group. However, an increase of approximately 1.4 bpm in HR and a decrease of 2 mmHg in MAP were considered clinically insignificant. Eight trials [28,29,31–33,35–37] recorded the intra-operative ephedrine or atropine consumption, and no difference was detected between the neuraxial DEX and placebo groups, suggesting overall stable hemodynamics and that these changes were easily reversed.
The pooled results from our meta-analysis showed that adjunct neuraxial DEX was associated with a significantly higher incidence of bradycardia (NNH = 14) compared with the placebo group, in agreement with previous reports [18,19]. No evidence showed any increased risk of other adverse events, such as hypotension, nausea, and vomiting. Six trials [28,32,33,39,42,43] reported that no patient suffered from neurological impairment within 1 to 2 weeks of follow-up. However, our results might be weakened by several limitations. First, there was high heterogeneity in 2 primary outcomes (postoperative pain intensity and analgesia duration) and 4 secondary outcomes, since our analyses pooled different routes and doses of neuraxial DEX, different types of anesthesia, surgical procedures, and LAs, different postoperative time periods, and different ages and genders. Although a series of subgroup analyses and meta-regressions were performed to identify the potential clinical and methodological heterogeneity, we failed to attribute the significant heterogeneity to any single cause. Thus, we used a random-effects model to account for the potential influence of heterogeneity on result validity, at the cost of wide 95% CIs. Second, the limited number of included studies with varied clinical heterogeneity did not allow us to perform a detailed meta-regression including all possible predictors. Third, one primary outcome (postoperative pain intensity) might be influenced by publication bias, as indicated by Begg's funnel plot and Egger's test, since positive results are more frequently published than negative ones. A sensitivity analysis excluding studies with low quality or high risk of bias revealed that the model and statistical assumptions did not influence our pooled results [51]. Fourth, six included studies with low Jadad scores [32–34,36,37,41] and 1 study with high risk of attrition bias [38] might influence our pooled results.
Finally, although we have confirmed the favorable short-term safety profile of neuraxial DEX, long-term outcomes concerning potential neurotoxicity and delayed neurological impairment are lacking.
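The NNH of 14 cited above for bradycardia is simply the reciprocal of the absolute risk increase. The event rates in the sketch below are hypothetical values chosen only to reproduce an NNH of about 14; they are not the review's pooled data.

```python
import math

# NNH (number needed to harm) = 1 / absolute risk increase.
# The bradycardia rates below are assumed, for illustration only.
risk_dex = 0.100       # assumed event rate with neuraxial DEX
risk_placebo = 0.028   # assumed event rate with placebo

absolute_risk_increase = risk_dex - risk_placebo
nnh = 1.0 / absolute_risk_increase   # ~13.9
nnh_rounded = math.ceil(nnh)         # NNH is conventionally rounded up
```

So under these assumed rates, roughly 1 extra bradycardia event would occur for every 14 patients given neuraxial DEX.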
Conclusion
Our evidence demonstrated that neuraxial DEX is a favorable LA adjuvant with decreased postoperative pain intensity, prolonged analgesic duration, and improved neuraxial anesthesia. The greatest concern is bradycardia. Since DEX has not yet been approved in most countries for neuraxial use, caution regarding its neuraxial use is urged in medical practice. Further trials with strict design and a focus on long-term outcomes are warranted.
"year": 2014,
"sha1": "c2fcd925e090f113e9be6aa7e0d527fd63b3540f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0093114&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2fcd925e090f113e9be6aa7e0d527fd63b3540f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Analysis of the Influence of Excellent Service Training on Inpatient Satisfaction in RSIA Puri Bunda Malang
Increasing the number of patients and improving hospital performance should be accompanied by improved quality of service. With good quality of care, patient satisfaction will increase and patients will return to use the service. Preliminary studies indicated a decline in the level of patient satisfaction in the inpatient units of RSIA Puri Bunda. The purpose of this study was to determine differences in patient satisfaction before and after excellent service training, and to explore the service-quality factors that influence patient satisfaction. The method of this research was a pre-post quasi-experimental design. Respondents were patients in the inpatient units of RSIA Puri Bunda who underwent treatment for at least 2 days; there were 181 respondents in total. The results showed differences in average satisfaction: 3.86 in P0, 4.29 in P1, and 4.06 in P2. The Mann-Whitney test found significant differences between groups P1 and P2 in the product quality variable (sig. 0.010) and the emotional factor variable (sig. 0.005). The advice to hospital management is to conduct continuous training in order to achieve the expected results.
The implementation of health care in a hospital is essentially the fulfillment of patients' demands to resolve their health problems; patients expect medical service that is of high quality, responds quickly to complaints, and provides comfort (Ristrini, 2005). The provision of health service that meets or even exceeds expectations can lead to patient satisfaction. Patient satisfaction is one of the conditions that must be fulfilled in order to attract patients and retain them as users of the health care service. The possibility of re-using the same service is greater if the patient is satisfied; if not, the patient is likely to move to another health care provider and will very likely tell others about the experience, which can damage the provider's image in the eyes of customers. The important thing to consider in patient satisfaction-oriented health service is determining the patient's perception of quality, including the facility and the roles of the doctor, medical personnel, and nurses (Supriyanto and Soesanto, 2012).
The instrument used for data collection in this study was a questionnaire filled in by respondents. Validity was tested using the Pearson correlation test, while the consistency of the instrument was tested using the alpha test. Descriptive analysis was then performed to draw general conclusions. The analysis technique used for non-normally distributed data was non-parametric: to determine the differences between treatment groups, the Mann-Whitney test was used, while the instrument used to determine which service-quality variable most influences patient satisfaction was a logistic regression test.
Respondent Characteristics
There were 61 respondents filling in the questionnaire in group P0, 65 in group P1, and 63 in group P2; because some questionnaires were not fully completed, 60 respondents were retained in group P1 and 60 in group P2. From the table, in group P0 there were more female respondents (50.8%), while in P1 and P2 there were more male respondents (71.7% in P1 and 76.7% in P2). This was because most respondents filling in the questionnaire in group P0 were the patient or the patient's mother, while most respondents in groups P1 and P2 were the patient's husband. Most of the respondents in all treatment groups were of productive age, which is appropriate since most cases were childbirth cases. The education level of most respondents in groups P0, P1, and P2 was senior high school (43.3% to 55%); 37.7% to 51.7% of them worked as private employees. The most common income of respondents in all treatment groups was less than 2,500,000 rupiahs, with an average of 52.5% to 55%. Most respondents (above 90%) in all treatment groups received delivery care service, and most (65% to 76.7% on average) paid their bills using the BPJS facility.
RSIA Puri Bunda is a hospital providing health care, especially maternal and child health care, whose services have been expanding. Along with the additional services provided in RSIA Puri Bunda, the number of patient visits has also increased. The increase in the number of patients and in hospital performance should be followed by improvement in the quality of service, but a survey of patient satisfaction over the last 3 months shows a decrease in patient satisfaction. One example is doctor service, for which dissatisfaction increased from 0.7% to 2.3%. Dissatisfaction with the service speed of medical personnel also increased from 3% to 5.8%. The attentiveness of medical personnel is also still lacking, with dissatisfaction increasing from 1.5% to 15.29%. Staff also became less friendly, with dissatisfaction rising from 0% to 5.8%.
The poor behavior of medical personnel in giving treatment and helping patients can be caused by insufficient understanding or knowledge of the importance of the medical personnel's role for patients, inadequate skills, or a shortage of labor.
One of the solutions offered is providing training on excellent service. According to Dessler (2009), training is an activity to educate employees, both new and old, in particular skills so that they can work properly.
With the background above, the authors conducted a study analyzing the influence of excellent service training on patient satisfaction in RSIA Puri Bunda Malang.
METHOD
The design used in this research was pre-post quasi-experimental. The research was conducted from May to June 2015 in the inpatient unit of RSIA Puri Bunda. Sampling was done 2 weeks before the training, in the first 2 weeks after training, and in the second 2 weeks after training. The respondents were hospitalized patients undergoing treatment for at least 2 days. The population in this study was 181 people, with 61 respondents in P0, 60 in P1, and 60 in P2.

Table 1 illustrates that overall, the value of patient satisfaction before the intervention was already high, with an average of 3.86. Among the factors affecting patient satisfaction, the lowest-scoring factor in group P0 was price.
The data illustrate an improvement in patient satisfaction in the first 2 weeks after the intervention: average patient satisfaction increased from 3.86 in group P0 to 4.29 in group P1. In group P2 there was a decrease, from 4.29 in P1 to 4.06 in P2. However, average patient satisfaction in group P2 was still higher than in group P0 (3.86).
From the illustration in Table 2, it can be concluded that all variables have significance values < 0.05, which means that H0 is rejected: every variable shows a significant difference between groups P0 and P1.
From the illustration in Table 3, it can be concluded that the product quality and emotional factor variables have significance values < 0.05, so H0 is rejected for these variables. This means that the only variables showing a significant difference between groups P1 and P2 are product quality and the emotional factor. The illustration in Table 4 shows that the significance level of reliability, 0.047, is smaller than 0.05 and well below 0.1 (the 10% significance level). This means that reliability has a significant effect on patient satisfaction in the product quality variable (H0 is rejected). The regression coefficient of 1.253 shows a one-direction influence of reliability on patient satisfaction in the product quality variable, meaning that increased reliability can improve patient satisfaction in terms of product quality.
The significance level of 0.006 is far below the 5% significance level, so the responsiveness variable has a very significant influence on patient satisfaction in the product quality variable. The regression coefficient of 1.615 shows a one-direction influence of responsiveness on patient satisfaction in the product quality variable, meaning that increased responsiveness can improve patient satisfaction in terms of product quality.
The regression coefficient of 1.925 shows a one-direction influence of empathy on patient satisfaction in the emotional factor variable, meaning that improved empathy can improve patient satisfaction in terms of the emotional factor.
Respondent characteristics
In this study, most of the respondents were young adults: 93.5% in group P0, 93.2% in group P1, and 88.1% in group P2. At this life stage, most are new parents, at the peak of their income and expenditure, and respondents in this age range emphasize the importance of health, sport, and education. Gender has no relationship with patient satisfaction; males and females feel satisfaction relatively similarly.
Education has no effect on the level of patient satisfaction; what does have an effect is knowledge, which influences decisions about a service product. Work is closely related to income: the higher the income of patients or their families, the higher their demands on the ability of health workers. The proportion of respondents in this study whose income was less than 2,500,000 rupiahs was 52.5% in group P0, 63.3% in group P1, and 66.7% in group P2. Given this, the high level of patient satisfaction found is appropriate.
The most common financing in treatment groups P0, P1, and P2 was BPJS insurance: 70.5% in P0, 76.7% in P1, and 65% in P2. Dewi (2010) states that patients' perception of the quality of health service paid for by insurance is lower than that of health service paid for independently.
Illustration of patient satisfaction in the pre-intervention stage
Answers stating that respondents were very satisfied (value of 5) were most common in the responsiveness variable, at 21.3%, while the lowest average satisfaction value was related to price, at 3.68. Based on the research conducted by Lubis (2010), the price variable has a proven significant effect on customer satisfaction; in that study, the price variable was even more dominant than the service quality variable. In general, the value of patient satisfaction in this group was already good, with an average of 3.86.

Table 5 illustrates a significance level for assurance of 0.001, far below the 5% significance level. Therefore, the assurance variable significantly affects patient satisfaction in the emotional factor variable. The regression coefficient of 2.461 shows a one-direction influence of assurance on patient satisfaction in the emotional factor variable, meaning that increased assurance can improve patient satisfaction in terms of the emotional factor. The significance level of empathy, 0.003, is also far below the 5% significance level, so the empathy variable significantly influences patient satisfaction in the emotional factor variable.
Illustration of patient satisfaction in the post-intervention stage
There is a difference in averages between groups P0 and P1. Respondents in group P1 were measured during the first 2 weeks after the training. There was a shift in responses: answers of neutral, satisfied, and very satisfied shifted toward satisfied and very satisfied, and satisfaction increased from an initial average of 3.86 to 4.29. The highest proportion of "very satisfied" answers was in the emotional factor, while the lowest average was in price.
Respondents in group P2 were measured in the second 2 weeks after the training; their average satisfaction decreased by 0.23 compared with group P1. There was a shift in assessment from very satisfied and satisfied toward satisfied and neutral. The greatest decreases were in product quality and the emotional factor.
The group averages show that the values of patient satisfaction in all three groups were already good, and that average patient satisfaction increased after the intervention in the form of training was given. The provision of education and training significantly improves employee performance, and research conducted by Kaihatu (2012) states that service quality positively influences patient satisfaction.
Results of Mann-Whitney Test
The results of the Mann-Whitney test for groups P0 and P1 gave values < 0.05 for all tested variables. This shows a significant difference in all variables between groups P0 and P1, meaning that the intervention in the form of training had a positive influence on patient satisfaction. For groups P1 and P2, significance values < 0.05 were found only for the product quality and emotional factor variables, while the other variables had significance values > 0.05. From the descriptive analysis, the difference between groups P0 and P1 was an increase in patient satisfaction after the training, seen in the increase in the total average, while between groups P1 and P2 the difference was a decrease in patient satisfaction, seen in the decrease in the total average.
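As a rough illustration of the test used above, the snippet below runs a two-sided Mann-Whitney U test on two hypothetical 1-5 satisfaction-score samples, standing in for the pre-training (P0) and post-training (P1) groups. The scores and group sizes are invented for illustration and are not the study's data; SciPy's implementation is used.

```python
from scipy.stats import mannwhitneyu

# Hypothetical satisfaction scores (1-5 scale) for two independent groups.
p0_scores = [3, 3, 3, 3, 4, 3, 4, 3]   # invented "before training" sample
p1_scores = [4, 5, 4, 5, 4, 5, 4, 5]   # invented "after training" sample

stat, p_value = mannwhitneyu(p0_scores, p1_scores, alternative="two-sided")
significant = p_value < 0.05   # reject H0 (equal distributions) if True
```

Because the test is rank-based, it needs no normality assumption, which is why it suits the non-normally distributed satisfaction data described in the Methods.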
This study shows that training has a positive impact on patient satisfaction. It indicates that nurses who have participated in excellent service training can improve the quality of their health service, which in turn improves patient satisfaction. Training has a positive and significant influence on productivity, and there is a positive correlation between performance before and after training.
Several things may explain the decrease in patient satisfaction in group P2 and should be evaluated. Training evaluation should be done to ensure the success of training in improving the potential of employees. According to Kirkpatrick, there are 4 levels in evaluating a training: evaluating the reaction of trainees, evaluating learning, evaluating behavior change, and evaluating results.
Level 1 evaluates the reaction of the participants. At this stage, the aspects assessed are the teacher, the material, the method of delivering the material, and the means used during the training. Sudarman (2008) states that a collaborative learning method can help trainees become actively involved in building knowledge, so that deep learning is achieved.
Level 2 is learning evaluation, which measures how well trainees understand the concepts, theories, policies, ideas, and facts presented. Level 3 is behavior evaluation, which measures the change in trainees' work behavior in accordance with the targets of the training material.
Level 4 is result evaluation, which assesses the benefits obtained by the hospital after holding the training, including improvement in patient satisfaction, an increase in patient visits, and others. Training is expected to change behavior and thereby improve performance. Performance change requires the awareness and motivation of participants to change and, in some respects, the support of supervisors and the work environment. Motivation has a very strong effect on performance.
One of the organizational or leadership functions related to post-training follow-up is supervision. Previous research conducted by Nur, Q.M. et al. (2013) and Mulyono, M.H. et al. (2013) concluded that supervision has a significant and dominant influence on nurse performance. According to Suarli and Bachtiar (2009), good supervision requires attention to 2 things: direct observation should be educational and supportive rather than a show of power or authority, and cooperation should be built with subordinates to create good communication.
Results of Logistic Regression Test
The logistic regression test on the product quality variable found that reliability and responsiveness have a significant influence on patient satisfaction. The test results also show a one-direction influence, meaning that increases in reliability and responsiveness will improve patient satisfaction. In the logistic regression test on the emotional factor variable, the variables providing significant results were assurance and empathy. Both also have a one-direction effect, meaning that increases in assurance and empathy can improve patient satisfaction in terms of the emotional factor. These results show that there is in fact a strong relationship between the variables: patients will be happy if the quality of service provided is good.
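A minimal sketch of the kind of "one-direction influence" described above: fitting a logistic regression of a binary satisfied/unsatisfied outcome on a single service-quality rating and checking the sign of the coefficient. The data are synthetic and the scikit-learn setup is an assumption for illustration, not the authors' actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: one service-quality rating (1-5) per respondent and a
# binary satisfied (1) / unsatisfied (0) outcome; invented for illustration.
X = np.array([[1], [2], [3], [4], [5], [1], [2], [3], [4], [5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)
coef = model.coef_[0][0]   # a positive sign indicates one-direction influence
```

Here the fitted coefficient is positive, so higher ratings raise the predicted probability of satisfaction, mirroring the interpretation of the positive coefficients (1.253, 1.615, 2.461, 1.925) reported for reliability, responsiveness, assurance, and empathy.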
CONCLUSIONS AND SUGGESTIONS
Training has a positive impact on the change in patient satisfaction, but the expected results of training do not last long: in the first 2 weeks after training there was a significant positive influence, while in the second 2 weeks patient satisfaction started to decrease. This can be caused by many other factors that affect nurse performance, which in turn affects patient satisfaction.
Training should be done continuously to maintain and improve nurse performance. Training activities also require continuous evaluation and mentoring by management to optimize the impact of the training itself.
Table 2. Results of Mann-Whitney test in treatment group P0-P1. Source: Processed data in 2015.
Table 3. Results of Mann-Whitney test in treatment group P1-P2. Source: Processed data in 2015.
Table 5. Significance and Regression Coefficient of Emotional Factor.
Source: Processed data in 2015.
"year": 2017,
"sha1": "3d9cc8b74c3daac28d812c9b2c04e86de4b60261",
"oa_license": "CCBYSA",
"oa_url": "https://jurnaljam.ub.ac.id/index.php/jam/article/download/1095/919",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3d9cc8b74c3daac28d812c9b2c04e86de4b60261",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
In vivo kinematics of a unique posterior-stabilized knee implant during a stepping exercise
Background: Stair-stepping motion is important in daily living, as is gait, and knee prostheses need even greater performance and stability in stair stepping than in gait. The purpose of this analysis was to estimate in vivo knee motion during stair stepping and determine whether this unique knee prosthesis functions as designed. Methods: A total of 20 patients with Bi-Surface posterior-stabilizing (PS) implants were assessed. The Bi-Surface PS knee is a posterior cruciate-substituting prosthesis with a unique ball-and-socket joint in the mid-posterior portion of the femoral and tibial components. Patients were examined during stair-stepping motion using a 2-dimensional to 3-dimensional registration technique. Results: The kinematic pattern in step up was a medial pivot, in which the amount of anteroposterior translation was very small. In step down, the kinematic pattern was neither a pivot shift nor a rollback; from minimum to maximum flexion, slight anterior femoral translation occurred. Conclusions: In this study, this unique implant showed good joint stability during stair stepping. The joint's stability during stair stepping was affected by the design of the femorotibial joint rather than by post/cam engagement or the ball-and-socket joint.
Background
Clinical and radiographic examinations are commonly used to evaluate the postoperative outcomes of total knee arthroplasty (TKA). During such assessments, kinetic and gait analyses are considered to be essential for determining the detailed effects of TKA. In particular, fluoroscopic in vivo kinematic studies performed during knee flexion have been demonstrated to be useful for assessing the postoperative outcomes of TKA [1][2][3][4][5]. Knee motion patterns have been examined in various studies of gait, step, stair, or deep bending-based activities. During daily activities, knee implants partially replicate the intrinsic constraints of the original joint. Many different types of knee implants have been developed. The Bi-Surface posterior-stabilizing (PS) knee implant (Kyocera) was designed to improve the range of deep flexion and stability, and its mid-posterior portion contains a ball-and-socket joint that links its femoral and tibial components. This characteristic structure allows a larger contact area between the femoral and tibial articular surfaces and reduces the stress placed on the tibial plate. In addition, the articular surface of the tibial plate is asymmetric; it is concave on the medial side and flat on the lateral side. The post/cam mechanism of the Bi-Surface PS implant is designed to enable it to function from 45° to 60° of knee flexion during stair stepping, which allows the femoral component to roll back early (Fig. 1).
Among daily activities, the ability to use the stairs is very important, as is gait. It is more important for knee prostheses to exhibit good performance and stability during stair stepping than during walking. Therefore, it is important to understand the relationship between implant design and functional knee motion during stair stepping. The goal of this analysis was to assess in vivo knee motion during a stepping exercise and to determine the motion pattern in patients with the Bi-Surface PS knee implant.
The 3-dimensional (3D) positioning and orientation of the implant components were determined using a 2D/ 3D registration technique involving previously reported methods, manual matching, and image space optimization [1][2][3]6]. Using this approach, we performed an in vivo kinematic analysis of stepping activity in patients that had been implanted with the Bi-Surface PS knee prosthesis.
Methods
Twenty subjects that underwent TKA involving a Kyocera Bi-Surface PS knee prosthesis (Kyocera, Japan) were assessed in this study. The patients had undergone clinically successful TKA and were willing to participate in this study. The patients were followed up for more than 6 months before being assessed and included 18 females and 2 males. All of the patients had been diagnosed with osteoarthritis. Their mean age was 74.7 years (range 64-83). All of the TKA procedures were performed by the same surgeon, and a parapatellar approach was used in all cases. The patella was not resurfaced, and all of the implants were fixed in place with cement. At the time of the analysis, the mean duration of the postoperative follow-up period was 7.1 ± 1.2 months (range 6-11). Clinical evaluations were performed according to the knee rating scale of the Hospital for Special Surgery (HSS) after arthroplasty. The mean postoperative HSS score was 91.9 ± 3.3 (range 86-97). This study was conducted in accordance with the Declaration of Helsinki and with approval from the Ethics Committee of Kanmon Medical Center (Shimonoseki, Japan). Written informed consent was obtained from all participants or their guardians.
Each patient was examined under fluoroscopic surveillance in the sagittal plane whilst stepping onto and off a 10-cm-high step. During the examinations, the patients stood with their feet in neutral rotation. Then, they stepped onto and off the step. Both of these movements were performed using a single leg. The patients began by placing their ipsilateral foot onto the 10-cm-high step. They were then instructed to step up onto the step, before swinging their other leg through and onto the step. When stepping down, they were instructed to step off the step with their opposite leg and stand with the ipsilateral foot remaining on the 10-cm-high step. Three successful sets of movements were recorded, and the best recording was used for the analysis. Successive knee motions were recorded as serial digital x-ray images (2048 × 1536 × 14 bits/pixel, 194-μm serial spot images, saved as DICOM files) using a 40 cm × 30 cm flat panel detector system (DHF-155H3, Hitachi, Japan) and 1.2- to 2.0-ms pulsed x-ray beams. The 3D in vivo positions of the Bi-Surface prosthesis were computed at 10° intervals using a 2D/3D registration technique. The digital fluoroscopic images were undistorted using a custom MATLAB program. The optical geometry of the fluoroscopic system (principal distance, principal point) was determined based on images of a calibration target [3,4]. An implant surface model was projected onto the geometry-corrected fluoroscopic images, and its 3D position was iteratively adjusted so that its silhouette matched with that of the knee prosthesis using custom software (JointTrack, University of Florida, FL). After the matching procedure had been completed, videos of the movements of the bone model and the 6 degrees of freedom kinematics of the implant components were acquired and subjected to quantitative analysis (3D-JointManager, GLAB Inc., Hiroshima, Japan).
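The optical geometry described above (principal distance and principal point) defines a simple pinhole model for projecting the implant model onto the image plane. A minimal sketch of that projection step only — the numbers are hypothetical, and `project_point` is an illustrative helper, not part of the study's JointTrack/MATLAB pipeline:

```python
def project_point(point_3d, principal_distance, principal_point):
    """Pinhole projection of a 3D point (camera frame, mm) onto the
    image plane, given principal distance f and principal point (u0, v0)."""
    x, y, z = point_3d
    u0, v0 = principal_point
    u = principal_distance * x / z + u0
    v = principal_distance * y / z + v0
    return u, v

# Hypothetical geometry (NOT the study's calibration values):
# f = 1000 mm, principal point at the image center (0, 0)
u, v = project_point((10.0, -5.0, 800.0), 1000.0, (0.0, 0.0))
```

In a full 2D/3D registration, each candidate 6-degree-of-freedom pose would project the whole surface model this way, and the pose is iterated until the projected silhouette matches the fluoroscopic one.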
The matching procedure exhibited standard errors of approximately 0.5° to 1.0° for rotations and 0.5 to 1.0 mm for translations in the sagittal plane [4]. The relative movements of the femoral and tibial components were determined based on the 3D positions of the knee prosthesis using the projection coordinate system proposed by Andriacchi [7].
We evaluated the flexion angle, the axial rotation angle, anteroposterior translation, the valgus/varus angle, and post/cam engagement between the femoral and tibial components during stepping up and down movements. In patients with fixed-bearing knee prostheses, the 3D position of the radiolucent tibial polyethylene insert could be determined based on the estimated position of the tibial component. The anteroposterior translation of the points on the femoral component that were nearest to the tibial polyethylene insert (and vice versa) on the medial and lateral sides was also evaluated. External and internal axial femoral rotations were defined as positive and negative, respectively. The points on the medial and lateral sides of the femoral component that were nearest to the tibial polyethylene insert (as the center of quasi-contact) were determined by calculating the distances between the surfaces of the femoral and tibial components using CAD models. Regarding the anteroposterior positioning of the femoral component, positions anterior to the tibial insert were denoted as positive, and positions posterior to the tibial insert were regarded as negative. Valgus/varus angles (varus angles were considered to be positive) were also evaluated. We defined post/cam engagement as when the distance between the post and cam was less than 1 mm. All data are expressed as mean ± SD values. Welch's t test was used for comparisons of the degree of anteroposterior displacement of the medial and lateral condyles or the valgus/varus angle. Values of P < 0.05 were considered to be statistically significant.
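The comparison above uses Welch's t test, which does not assume equal variances between the medial and lateral samples. A sketch from the textbook definition — `welch_t` is a hypothetical helper and the sample values below are illustrative only, not the study data:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Illustrative condylar anteroposterior translations (mm); NOT the study data
medial = [2.7, 3.1, 2.2, 1.8, 3.5, 2.9, 2.4, 3.0]
lateral = [2.6, 3.3, 2.0, 1.9, 3.8, 2.7, 2.5, 3.1]
t, df = welch_t(medial, lateral)
```

The resulting t and df would then be compared against the t distribution to obtain the P value judged against the 0.05 threshold.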
Results
The minimum flexion angle between the femoral and tibial components was 5. during stepping up movements and 3.2° ± 2.1° (0.6°-8.2°) during stepping down movements. The mean axial rotation of the femoral component exhibited gradual external rotation during the transition from 30° knee flexion to maximum flexion when the patients were performing stepping up movements (Fig. 2).
During the transition from minimum flexion to maximum flexion, medial anteroposterior translation of 2.7 ± 1.2 mm (1.0-6.2 mm) and 3.2 ± 0.9 mm (1.8-5.6 mm) was seen during stepping up and stepping down movements, respectively. In addition, lateral anteroposterior translation of 2.7 ± 1.4 mm (0.7-5.9 mm) and 3.1 ± 1.3 mm (1.5-6.1 mm) was observed during stepping up and stepping down movements, respectively. In the range from 30° knee flexion to maximum flexion, the lateral condyle exhibited slightly greater posterior rollback than the medial condyle during stepping up movements. No posterior rollback of the medial or lateral condyle occurred during stepping down movements. Slight anterior femoral translation was noted during the transition from minimum to maximum flexion (Fig. 3).
The kinematic patterns of the patient's prostheses were determined based on the positions of the medial and lateral condyles at each flexion angle. During the transition from 30° flexion to maximum flexion, a medial pivot-type kinematic pattern involving very little anteroposterior translation was observed during stepping up movements. During stepping down movements, neither a pivot-shift-type nor a rollback-type kinematic pattern was seen. Slight anterior femoral translation occurred during the transition from minimum to maximum flexion. The total valgus/varus angles for each knee were 0.1° ± 0.6° (−1.7° to 1.4°) during stepping up movements and 0° ± 1.0° (−1.6° to 2.6°) during stepping down movements. No significant differences in the valgus angle were detected between the two motions (Fig. 4).
Post/cam engagement was considered to have occurred in one case during stepping up movements. The minimum flexion angle seen during stepping up movements was 55.1°.
Discussion
TKA has been demonstrated to achieve successful clinical outcomes in patients with osteoarthritis of the knee. Knee implants partially replicate the intrinsic constraints of the lost joint. However, they do not necessarily restore normal joint stability and motion, so it is necessary to understand in vivo knee motion during daily activities in patients with knee prostheses.
In normal knees, the femur exhibits a medial pivot motion relative to the tibia during deep knee flexion [8,9]. However, such movements are not always seen after TKA [10][11][12][13][14]. For example, Dennis reported that both medial pivot-type and lateral pivot-type patterns were seen in patients that had undergone TKA [14]. Banks found that in patients that undergo successful TKA, knee motion is directly related to the constraints of the implant [15]. On the other hand, while the center of rotation is predominantly on the lateral side of the knee during walking, the normal function of the knee during walking is associated with lateral and medial pivoting [16].
Among daily activities, the ability to use stairs is very important, as is gait. It is more important that knee prostheses exhibit good performance and stability during stair stepping than during walking. Banks reported that most patients that underwent PS TKA exhibited medial central rotation, which was indicative of posterior femoral translation and flexion, during stair stepping [17]. In our study, the subjects displayed a medial pivot kinematic pattern involving very little anteroposterior translation during stepping up movements. During stepping down movements, neither a pivot-shift-type nor a rollback-type kinematic pattern was seen. Only slight anterior femoral translation occurred during the transition from minimum to maximum flexion. The motion pattern may be caused by the tibial plate which is concave on the medial side and flat on the lateral side. The Bi-Surface PS demonstrated good joint stability during the stepping exercise. Thus, there are clear discrepancies between the kinematic patterns detected in our study and those described in Banks' report. There were some differences in the step height and stepping method between our study and that conducted by Banks; however, we consider that the main reason for the abovementioned differences in the kinematic patterns is the unique design of the Bi-Surface PS; i.e., it is a posterior-cruciate ligament-substituting prosthesis with a characteristic ball-and-socket joint that links its femoral and tibial components.
The post/cam mechanism of the Bi-Surface PS-type implant is designed to function from 45° to 60° of knee flexion, and the ball-and-socket joint functions as the main load supporting surface from 90° of flexion. Post/cam engagement was considered to have occurred in one case during stepping up movements. The minimum flexion angle was 55.1° during stepping up movements. Furthermore, the ball-and-socket joint did not function in any case. Thus, the joint stability of the Bi-Surface PS implant during step ascent/descent is affected by the design of the femorotibial joint rather than post/cam engagement or the ball-and-socket joint. In situations involving steps that are higher than 10 cm, maximum knee flexion might increase, and greater post/cam engagement and ball-and-socket joint loading might occur during knee flexion. We consider that these kinematic patterns could affect the long-term outcomes of TKA procedures involving the Bi-Surface PS. Therefore, the relationship between these kinematic patterns and clinical outcomes should be assessed in further studies involving long-term follow-up.
Conclusions
In summary, in patients that had undergone TKA procedures involving the Bi-Surface PS, a medial pivot kinematic pattern involving very little anteroposterior translation was seen during stepping up movements. During stepping down movements, neither a pivot-shift-type nor a rollbacktype kinematic pattern was seen. Slight anterior femoral translation occurred during the transition from minimum to maximum flexion. The Bi-Surface PS demonstrated good joint stability during a stepping exercise. The joint's stability was affected by the design of the femorotibial joint rather than post/cam engagement or the function of the ball-and-socket joint. | 2018-03-05T18:09:47.362Z | 2016-02-01T00:00:00.000 | {
"year": 2016,
"sha1": "3094a9525132594472fd62ffaf275c9595847c3c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13018-016-0354-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3094a9525132594472fd62ffaf275c9595847c3c",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235372713 | pes2o/s2orc | v3-fos-license | Personalized medicine for Hodgkin lymphoma: Mitigating toxicity while preserving cure
The treatment of classical Hodgkin lymphoma in young patients is one of the success stories of modern medicine. The use of risk‐ and response‐adapted approaches to guide treatment decisions has led to impressive cure rates while reducing the long‐term toxicity associated with more intensive therapies. Tissue biomarkers have not yet proven more effective than clinical characteristics for risk stratification of patients at presentation, but functional imaging features such as metabolic tumor volume may be used to predict response, if early observations can be validated. The success of treatment in younger patients has unfortunately not been mirrored in those over 60, where complex decision‐making is often required, with a paucity of data from clinical trials. The use of PD1 blocking antibodies and brentuximab vedotin in this cohort, either alone or in combination with chemotherapy, may provide attractive options. The incorporation of frailty assessment, quality‐of‐life outcomes, and specialist geriatric input is also important to ensure the best outcomes for this diverse group.
year overall survival (OS) rates above 80%. 1 However, the ongoing challenge remains of identifying patients with high-risk disease who will benefit the most from intensified therapy, while de-escalating treatment in those likely to be cured by less toxic regimens, to minimize the long-term morbidity and mortality seen in a minority of survivors, without compromising outcomes. This article will outline the emergence of new biomarkers to aid risk stratification and guide treatment decisions at diagnosis, the use of different response-adapted approaches, and the incorporation of new targeted agents in the treatment of both younger and older patients, in a more personalized approach to therapy.
| APPROACHES IN YOUNGER PATIENTS WITH HODGKIN LYMPHOMA
Treatment with doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD) is guided by risk stratification at diagnosis, comorbidity, and patient preference. Excellent disease control is achieved with more intensive BEACOPP regimens; however, this is at the price of increased acute toxicity and long-term morbidity, including second malignancies, infertility and cardiovascular disease among survivors, when compared to less intensive regimens. The use of six cycles of ABVD compared to four cycles of escBEACOPP plus two cycles of standard BEACOPP in the Italian HD2000 study showed no difference in 10-year OS, despite a significant difference in progression-free survival (PFS) in favor of the BEACOPP group at 5 years. 1 This is perhaps explained by the significantly lower rates of second malignancy in patients treated with ABVD when compared to escBEACOPP (0.7% vs. 6.6%) and the success of autologous stem cell transplant (ASCT) in the salvage of relapsed disease. In the modern era, morbidity is predicted to be lower as the number of BEACOPP cycles has been reduced using a PET-directed approach 2 ; however, identification of patients with higher risk disease at diagnosis seems important for choosing the correct intensity of therapy, to optimize the chance of cure.
Risk stratification of patients' disease by the International Prognostic Score (IPS) has previously been used to guide clinicians with initial treatment decisions; however, compared to the dynamic assessment of response by PET, it is less able to identify those patients with high-risk disease that have a poorer outlook. 3 The incorporation of biologic features such as gene expression profiles in addition to IPS has so far not yielded any prospectively validated biomarkers, but measurement of metabolic tumor volume (MTV) and total lesional glycolysis (TLG) at the baseline PET may provide a more quantifiable assessment of tumor burden, a known predictor of poor outcome. 4 The European collaborative group retrospectively analyzed baseline total MTV (TMTV) in 258 patients with early-stage HL in the standard combined modality arm of the H10 trial and showed that both TMTV and interim PET (iPET) following two cycles of ABVD were independently prognostic of response to treatment, and when combined, allowed identification of a high-risk patient group with a 5-year PFS of only 25% (TMTV >148 cm³ and iPET positive, Deauville Score [DS] 4-5). 5 In this study, the TMTV was calculated by summing all the extranodal and nodal lesions using the 41% maximum standardized uptake value threshold (SUVmax) method. In advanced-stage disease (stage IIB-IV), 848 patients enrolled in the RATHL trial had baseline total/bulk MTV and TLG measured using SUV ≥ 2.5 when compared to the liver (the 41% SUVmax method was found not to be associated with PFS or 3-year HL events in this patient cohort). 6 Patients with a positive iPET following two cycles of ABVD had a significantly higher total/bulk MTV and TLG when compared to iPET-negative patients (p = 0.0002); however, in a multivariate analysis, only total TLG, B symptoms, and age were significantly associated with PFS.
Patients with a negative iPET and high-volume TLG at baseline (defined as >3318 g) had a 5-year treatment failure rate of 31%, compared with 13.1% in low-volume TLG. 6 A study which retrospectively analyzed a total of 392 patients enrolled in both arms of the AHL 2011 LYSA trial identified a small number of patients with a high-baseline PET TMTV (set at a threshold of 350 ml using the 41% SUVmax method) who had a positive iPET (DS 4-5) following two cycles of escBEACOPP, with a 2-year PFS of 61% compared to 88% and 96% in patients with a low TMTV/positive iPET and a low TMTV/negative iPET, respectively. 7 The rate of progression among patients with stage IV disease and a negative iPET in the RATHL trial was 20% compared with less than 10% of patients enrolled in the GHSG H18 trial and LYSA study, suggesting a more reliable negative predictive value of iPET after more intensive regimens such as escalated BEACOPP in patients with high-risk disease. 3 Thus, baseline total MTV and TLG may prove useful in the context of guiding initial intensity of treatment, by identifying those at risk of treatment failure despite a negative iPET.
Measurement of total MTV/TLG will require standardization, similar to the Deauville scoring system that was developed for iPET assessment, to allow reproducibility and consistency when stratifying patients into different risk groups and setting consistent threshold values. 4 Prospective validation of this potential biomarker in a large clinical trial is needed to ascertain its true prognostic value.
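As a simplified sketch of the 41% SUVmax method referred to above: within each segmented lesion, voxels above 41% of that lesion's SUVmax count toward its metabolic volume, and TMTV is the sum over lesions. The lesion values and voxel size below are hypothetical; real workflows use dedicated segmentation software operating on 3D PET volumes.

```python
def lesion_mtv(suv_voxels, voxel_volume_ml, threshold_fraction=0.41):
    """Metabolic tumor volume of one lesion: total volume of voxels whose
    SUV exceeds threshold_fraction * lesion SUVmax (the 41% SUVmax method)."""
    cutoff = threshold_fraction * max(suv_voxels)
    return sum(1 for s in suv_voxels if s > cutoff) * voxel_volume_ml

# Hypothetical lesions given as flat lists of voxel SUVs; 0.1 ml voxels
lesions = [[1.0, 3.0, 8.0, 10.0], [2.0, 5.0, 5.5]]
tmtv = sum(lesion_mtv(lesion, 0.1) for lesion in lesions)  # summed over all lesions
```

Note that the cutoff is relative to each lesion's own SUVmax, which is one reason results differ from fixed-threshold approaches such as the SUV ≥ 2.5 method used in the RATHL analysis.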
Patients with advanced-stage HL and a positive iPET after two cycles of ABVD in the RATHL trial went on to receive escalated treatment with more intensive BEACOPP regimes (four cycles of escBEACOPP or six cycles of BEACOPP-14), with a 5-year PFS of 65.7% and OS of 85.1%. This compares favorably to continuation of ABVD following iPET in previous studies, where the PFS was consistently less than 40%. 3 The Southwest Oncology Group (SWOG) 0816 trial showed at 5-year follow-up that 59 patients with advanced-stage HL (here defined as stage III-IV) and a positive iPET (DS 4-5) who were escalated to escBEACOPP after two cycles of ABVD had a similar PFS of 66%, but the rate of second malignancy was 14% with a short median onset of 4.2 years. In this study, six cycles of escBEACOPP were given compared to four cycles in RATHL, which may partly explain the high rate of secondary malignancy. 8 The GHSG H18 trial showed that patients with iPET-positive disease following two cycles of escBEACOPP who were treated with a total of six cycles of escBEACOPP had a secondary malignancy rate of 9% at 5.5 years of follow-up. 2 In the RATHL study, the treatment failed despite escalation to BEACOPP regimens in 20 out of 37 patients with a DS of 5 on iPET, and this group almost certainly requires a different approach to improve their survival. The use of salvage therapy with high-dose chemotherapy (HDT) followed by ASCT is an option for patients with initial chemorefractory disease, and was investigated by the Italian HD0801 trial. 9 Here a positive iPET was defined as a DS of 3-5, and therefore included a more favorable patient group when compared to outcomes from RATHL and LYSA trials. Following two cycles of ABVD, 81 (19%) patients remained iPET positive and received HDT ASCT, with a 2-year PFS of 75% suggesting that early intensification might improve outcomes for this group.
9 The use of newer agents such as brentuximab vedotin (BV) and anti-PD1 antibodies in the frontline treatment of patients with high-risk iPET-positive disease may provide an alternative to ASCT, given their activity in the relapsed/refractory disease; however, there is as yet little data to support their use in a PET-driven approach for this selected group of patients. The Phase III ECHELON-1 trial incorporated six cycles of brentuximab with AVD chemotherapy (A + AVD) and showed a 3-year modified PFS (including a DS 3-5 at the end of treatment as an event) of 83.1% compared with 76.2% in patients receiving six cycles of ABVD (7.1% difference p = 0.005), with a beneficial trend observed in iPET-positive patients <60 years receiving A + AVD (3-year PFS 69.2% vs. 54.7%, respectively). 10 Therefore, A + AVD may be an attractive option for those patients with high-risk disease who wish to reduce the risk of long-term toxicity associated with BEACOPP regimes or who are unable to tolerate escalation of therapy following a positive iPET.
In early-stage unfavorable disease, the addition of BV to four cycles of AVD within a phase II PET-directed pilot study in the United States allowed the reduction of dose and intensity of radiotherapy without apparently compromising treatment efficacy, with a 2-year PFS of 97% among 29 patients who did not receive any consolidation radiotherapy. 11 The phase III GHSG H17 trial in a similar patient cohort also showed that the omission of radiotherapy in those with a negative PET following two cycles of ABVD plus two cycles of escBEACOPP was noninferior in terms of 5-year PFS (2.2% difference in favor of the radiotherapy group). 12 An initial high-intensity approach in the early-stage disease thus appears to maximize cure rates without the need for consolidation radiotherapy in those patients with a negative PET at the end of the treatment, showing an improvement in the negative predictive value of iPET when compared to the use of less intensive regimens.
The RATHL trial showed that the omission of bleomycin in patients with a complete metabolic response at iPET did not compromise survival outcomes, and resulted in a lower incidence of pulmonary toxicity (5-year PFS and OS 84% and 98% vs. 86% and 97%, respectively). 3 Similarly in the AHL 2011 LYSA trial, 5-year PFS was not significantly different between patients treated with continued escBEACOPP or de-escalated to ABVD (86.2% standard arm vs. 85.7% PET-driven arm) leading to the conclusion that therapy can be reduced in those patients whose disease responds to initial therapy without compromising survival outcomes. 13 The optimal number of escBEACOPP cycles was investigated by the GHSG H18 trial in this context, and showed that in patients with a negative iPET (DS 1-2) following two cycles of escBEACOPP, the duration could be safely reduced to two further cycles, with a small but statistically significant improvement in 5-year survival outcomes when compared to four cycles (PFS 92.2% vs. 90.8%, OS 97.7% vs. 95.4%, respectively). 2 For patients with an IPS score of 1-2 and favorable baseline characteristics, an initial two cycles of ABVD with de-escalation to AVD if iPET negative and escalation to four cycles of escBEACOPP if iPET positive has a high probability of cure while minimizing the number of patients exposed to the acute and long-term toxicity of BEACOPP regimes. The omission of radiotherapy in those patients with a complete metabolic response did not affect survival outcomes in the GHSG H15 study 14 and only 6.5% of patients received consolidation radiotherapy without loss of disease control in the RATHL trial. 3 There may be a role for radiotherapy in single-site iPET-positive disease to reduce the number of patients escalated to more intensive chemotherapy regimens; however, there is currently a lack of prospective data supporting this approach.
| IMMUNE CHECKPOINT INHIBITORS AND EMERGING BIOMARKERS
The use of immune checkpoint inhibitors (ICI) in relapsed Hodgkin lymphoma is well established, and the use of anti-PD1 antibodies combined with multi-agent chemotherapy is being explored in the first-line setting. A study of affected nodes in those treated with anti-PD1 antibodies showed modification of the HL microenvironment in response to anti-PD1 therapy, with rapid depletion of HRS cells and a reduction in PDL1-expressing tumor-associated macrophages and regulatory T cells. 15 There was no clonal expansion and activation of cytotoxic T cells as is seen in solid tumors, suggesting a mechanism of action that is particular to HL, involving interruption of T cell-B cell signaling pathways. Combination of nivolumab with AVD chemotherapy (N + AVD) for advanced-stage HL (stage IIB-IV) was investigated by Ramchandren et al. who first gave nivolumab monotherapy for 4 doses, followed by combination therapy (N + AVD) for 12 doses every 2 weeks, with response assessment at the end of monotherapy, after two combination cycles and at the end of the therapy. 16 Interestingly, at the end of monotherapy, the complete response rate was 21%, with all patients in the highest quartile for expression of PDL1 on HRS cells achieving a CR after combination therapy, maintained at 32 weeks of follow-up. Discontinuation rates were low (10%) with a febrile neutropenia rate of 10%. The most common endocrine immune-mediated adverse event (IMAE) was hypothyroidism, and the main nonendocrine IMAE was rash (grade 1-2).
Generally, the regimen was well tolerated, but there was one treatment-related death in an older patient who was in CR after two cycles of combination therapy but experienced four grade 3-4 adverse events. 16 The use of pembrolizumab monotherapy prior to 4-6 cycles of AVD in 30 patients with early unfavorable and advanced disease showed an impressive 100% CR rate by the end of two cycles of AVD.
Responses were durable, with no progression or death at 22 months of follow-up, and no consolidation radiotherapy was given at the end of the treatment. 17 Atypical response patterns seen with checkpoint blockade raised concern that patients whose disease might have responded at a later time point could be escalated to more intensive regimes and exposed to unnecessary toxicity. This resulted in the addition of indeterminate response (IR) to CR, PR, and PD, and allows the flexibility for patients to continue treatment with further imaging at 12 weeks to confirm either PD or response (Table 1). There may be a role for anti-PD1 therapy, as an alternative to escalation to more intensive regimes, in those patients with a DS of 5 on iPET whose disease is refractory to traditional chemotherapy.
| APPROACHES FOR OLDER PATIENTS WITH HODGKIN LYMPHOMA
The use of BV and ICI in the elderly may be an attractive option as monotherapy, or in combination with less toxic chemotherapy regimens, to improve the poorer survival outcomes when compared to the younger population. The problems of comorbidity, poor performance status (PS), increased adverse events, and low tolerance of chemotherapy regimens at full dose in this heterogeneous population have resulted in the reported 3-year PFS and OS rates of 55% and 78%, respectively. 19 20 This assessment is quick and convenient for the busy oncologist to use in the clinic, to inform the initial treatment decisions, and help tailor initial therapy to each individual patient's circumstances (Table 2).
| CONCLUSIONS
The use of PET-directed therapy in younger patients with advanced HL has allowed safe de-escalation of treatment for those with responsive disease at iPET, sparing the acute and long-term toxicity of more intensive chemotherapy regimens and consolidation radiotherapy. | 2021-06-09T13:17:41.670Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "39a57ce27ff6eeba2ec9e44f545fcf454e9c4352",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/hon.2856",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "39a57ce27ff6eeba2ec9e44f545fcf454e9c4352",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1199002 | pes2o/s2orc | v3-fos-license | Recognition of Double-Stranded RNA and Regulation of Interferon Pathway by Toll-Like Receptor 10
Toll-like receptor (TLR)-10 remains an orphan receptor without well-characterized ligands or functions. Here, we reveal that TLR10 is predominantly localized to endosomes and binds dsRNA in vitro at endosomal pH, suggesting that dsRNA is a ligand of TLR10. Recognition of dsRNA by TLR10 activates recruitment of myeloid differentiation primary response gene 88 for signal transduction and suppression of interferon regulatory factor-7 dependent type I IFN production. We also demonstrate crosstalk between TLR10 and TLR3, as they compete with each other for dsRNA binding. Our results suggest for the first time that dsRNA is a ligand for TLR10 and propose novel dual functions of TLR10 in regulating IFN signaling: first, recognition of dsRNA as a nucleotide-sensing receptor and second, sequestration of dsRNA from TLR3 to inhibit TLR3 signaling in response to dsRNA stimulation.
Introduction
Pattern recognition receptors (PRRs) play an essential role in recognizing pathogen-associated molecular patterns (PAMPs) leading to the initiation and orchestration of innate and adaptive immune responses. Toll-like receptors (TLRs) are a major group of PRRs and their activation is known to play an important role in host defense against pathogen infection (1). Ten TLR members, TLR 1-10, have been identified in humans and are responsible for the sensing of distinct microbial components. In general, TLR3, 7, 8, and 9, which are predominately located in endosomes, are involved in the recognition of nucleic acids derived from or associated with internalized microbes, while TLR1, 2, 4, 5, and 6 are localized on the surface of mammalian cells, where they can detect the outer membrane components of bacteria, fungi, and protozoan micro-organisms. Thus, the cellular localization of TLRs correlates with their functions in sensing invading pathogens. Engagement of TLRs by PAMPs leads to signaling via their toll/interleukin-1 (IL-1) receptor (TIR) domain recruiting signaling adaptors, and activating transcription factors that result in induction of IFNs and cytokines.
Toll-like receptor-10 is the least characterized TLR and still remains an orphan receptor, with only very limited information available regarding its localization, agonist, signaling, and function (2). A major constraint for research on TLR10 has been the lack of a suitable mouse model as TLR10 is a pseudo-gene in mice (3).
One suggestion is that TLR10 cooperates with TLR2 in sensing bacterial lipopeptides and recruits the adaptor myeloid differentiation primary response gene 88 (MyD88) to the activated receptor complex (15). However, native TLR10 co-expressed with TLR2 as a heterodimer in a human colonic epithelial cell line did not respond to lipopeptide stimulation, and a response could only be demonstrated when TLR2 was co-expressed with a chimeric TLR1/TLR10 receptor (the extracellular and transmembrane domains of TLR10, the TIR of TLR1) (15). Moreover, TLR10, alone or in cooperation with TLR2, failed to activate typical TLR-induced signaling, including activation of nuclear factor κB (NF-κB) (15). On the other hand, TLR10 has been shown to mediate activation of NF-κB and trigger innate immune responses to Helicobacter pylori infection (16), to act as a PRR with mainly anti-inflammatory properties inhibiting the production of pro-inflammatory cytokines in response to bacterial lipopeptides (17), and to function as a negative regulator of MyD88-dependent and -independent TLR signaling (18). Conversely, TLR10 may play a role in activating inflammatory responses to Listeria monocytogenes in intestinal epithelial cells and macrophages (19). Knockdown (KD) of TLR10 reduced TLR ligand induced pro-inflammatory cytokine expression (20), and we previously reported that TLR10 plays a role in innate cytokine responses following influenza viral infection (21). These data suggest that the modulatory effects of TLR10 are complex: TLR10 may function distinctly in response to stimulation by different pathogens or ligands, triggering distinct TLR10 signaling pathways, possibly via crosstalk with other PRRs.
In this study, we provide several lines of evidence demonstrating that dsRNA is a ligand for TLR10 sensing and signaling, and suggest a role of TLR10 as a nucleotide-sensing receptor. We also revealed another function of TLR10, which sequesters dsRNA from TLR3 to regulate IFN signaling. Together, these findings provide new insights into the mechanism and role of TLR10 in the regulation of IFN signaling in the innate immune response.
MATERIALS AND METHODS

Cells

THP-1 (ATCC TIB-202) cells were obtained from the ATCC and cultured in RPMI-1640 (Life Technologies) supplemented with 10% fetal bovine serum (FBS, Life Technologies), 100 U/ml penicillin, and 100 µg/ml streptomycin (Life Technologies). The TLR10 KD and TLR10 overexpressed (OE) THP-1 cells were generated and maintained as described previously (21). KD and overexpression of TLR10 were verified by RT-qPCR. THP-1-dual reporter cells were obtained from InvivoGen and maintained in RPMI-1640 culture medium with 10% FBS supplemented with 100 U/ml penicillin, 100 µg/ml streptomycin, 10 µg/ml blasticidin (InvivoGen), and 100 µg/ml Zeocin (InvivoGen). Human peripheral blood monocytes were isolated from blood packs of healthy donors provided by the Hong Kong Red Cross Blood Transfusion Service, purified by adherence, and differentiated into macrophages as described (21). Consent from blood donors was obtained by the Hong Kong Red Cross to use blood components for research experiments. The work involving the use of human blood samples was reviewed and received human ethics approval (ref no. UW 10-201, UW 14-170) issued by the Institutional Review Board of the University of Hong Kong and met the standards of the Declaration of Helsinki.
Immunofluorescence Confocal Microscopy
THP-1 cells were washed twice with PBS and fixed with 2% paraformaldehyde (USB Corporation) in PBS for 10 min at room temperature. For the co-localization study of rhodamine-poly(I:C), TLR10, and endosomal markers, wild-type (WT) THP-1 cells and primary human monocyte-derived macrophages (MDM) were first transfected with rhodamine-poly(I:C) (10 µg/ml) for 10 and 30 min, respectively, and washed twice with PBS before fixation. For intracellular staining, cells were permeabilized using 1% saponin (Sigma) in PBS for 30 min; 0.1% saponin was included in all subsequent steps involving intracellular staining. Blocking was done using 10% normal goat serum (Abcam) in PBS for 30 min. Cells were stained with primary and secondary antibodies in 1% normal goat serum, each for 1 h. Plasma membranes of non-permeabilized cells were stained using Alexa Fluor 488-conjugated WGA for 10 min. Nuclei were counter-stained with NucBlue Fixed Cell ReadyProbes Reagent for 5 min. Cells were embedded in Mowiol 4-88 (Sigma) with PPD (2 mg/ml) (Sigma) and 0.02% NaN3 (Sigma), and mounted on slides with coverslips for imaging. Images were acquired using an LSM 710 (Carl Zeiss) equipped with a Plan-Apochromat 40× objective (Carl Zeiss) and processed using ZEN lite (Carl Zeiss) and ImageJ. For co-localization estimation between TLR10 and different organelles, masks were generated from separate channels of the same micrograph corresponding to TLR10 and the organelle marker. Masks were overlaid and percentage co-localization was calculated by the ImageJ algorithm.
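The mask-overlap estimate described above can be sketched as the fraction of TLR10-mask pixels that also fall within the organelle-marker mask. A minimal NumPy sketch, assuming boolean masks from thresholded channels (the function name is ours, not from the paper, and ImageJ's own algorithm may differ in detail):

```python
import numpy as np

def percent_colocalization(tlr10_mask, organelle_mask):
    """Percentage of TLR10-positive pixels that also lie inside the
    organelle-marker mask (both masks are boolean arrays derived from
    the same micrograph)."""
    tlr10 = np.asarray(tlr10_mask, dtype=bool)
    organelle = np.asarray(organelle_mask, dtype=bool)
    total = tlr10.sum()
    if total == 0:
        return 0.0
    overlap = np.logical_and(tlr10, organelle).sum()
    return 100.0 * overlap / total
```

With this convention, a TLR10 mask of two positive pixels, one of which overlaps the organelle mask, yields 50% co-localization.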
Ligand Stimulation
The WT, TLR10 OE, and KD THP-1 cells were stimulated with poly(I:C) (10 µg/ml or as specified), or with 5′pppdsRNA (10 µg/ml) synthesized in vitro from the vesicular stomatitis virus genome and its variant (22), for 4 h or as indicated. For intracellular stimulation, ligands were complexed with Lipofectamine 2000 reagent (Life Technologies) and delivered intracellularly. For cell surface stimulation with poly(I:C), the ligand was added to the cell culture medium directly. The induction of cytokines was analyzed by RT-qPCR (normalized to β-ACTIN) and compared with the corresponding unstimulated control cells.
In Vitro Binding Assay
Binding assays were performed as described (23).
Fluorescence Resonance Energy Transfer (FRET) Assay
Interactions between poly(I:C) and TLR10 within cells were analyzed by FRET from FITC-conjugated antibodies to rhodamine-labeled poly(I:C). Rhodamine-labeled poly(I:C) was transfected into WT THP-1 cells, washed, and fixed at 30 min post-transfection as described above. Immunostaining was performed as above. Samples were mounted on slides with Mowiol 4-88 (Sigma) with PPD (1 mg/ml) (Sigma) and 0.02% NaN3 (Sigma). FITC and rhodamine were excited with 488 and 561 nm lasers, respectively. Images from the FITC and rhodamine channels were acquired with an LSM 710 (Carl Zeiss) before, between, and after repeated bleaching of rhodamine by a 561 nm laser. FRET efficiency was estimated with ZEN 2012 equipped with the FRET module using the acceptor photobleaching approach.

IRF Luciferase Activity Assay

THP-1-dual reporter cells were stably integrated with two inducible reporter constructs allowing the activation of the NF-κB or IRF pathways to be detected via measurement of secreted alkaline phosphatase or luciferase activity, respectively. At 24 h after poly(I:C) (10 µg/ml) stimulation, secreted luciferase in supernatant from the reporter cells was quantified by the QUANTI-Luc assay (InvivoGen) using a MicroBeta luminescence counter (PerkinElmer Wallac).
Synthesis of hTLR10-Ectodomain (ECD)
Plasmid pcDNA3-TLR10-YFP (Addgene 13643) was a gift from Doug Golenbock and used as a template to construct the extracellular domain of human TLR10 (hTLR10-ECD). The gene fragment of hTLR10-ECD without signal peptide was amplified by PCR using forward primer 5′-GACGACGACAAGATGGATGCTCCAGAGCTGCCAG-3′ and reverse primer 5′-GAGGAGAAGCCCGGttaTGTGTTGCAAGATAATTCGTGG-3′. The amplified gene was cloned into the pET46 EK/LIC vector following the commercially provided protocol (Novagen), and the sequence of the resulting plasmid pET46/hTLR10-ECD was confirmed by sequencing (Invitrogen). Recombinant hTLR10-ECD was synthesized and the protein purification procedure was carried out at 4°C as described below. The pET46/hTLR10-ECD was transformed into Escherichia coli BL21(DE3) cells; a single colony was selected, inoculated into 100 ml of LB containing 100 µg/ml ampicillin, and cultured overnight at 37°C. Afterward, the culture was transferred to auto-induction medium at a ratio of 1:50 (1% tryptone, 0.5% yeast extract, 2 mM MgSO4, 0.5% glycerol, 0.05% glucose, 0.2% lactose, in PBS buffer, containing 100 µg/ml ampicillin). The culture was incubated at 16°C for 60 h under constant shaking at 200 rpm. Cells were harvested by centrifugation at 5,000 × g for 20 min and re-suspended in lysis buffer (25 mM Na2HPO4, 25 mM KH2PO4, 350 mM NaCl, 10 mM imidazole, and 5% glycerol, pH 7.2), followed by disruption with a French press. Cell debris was removed by centrifugation at 17,000 × g for 1 h. The supernatant was then applied to a Ni-NTA column using an FPLC system (GE Healthcare). The target protein eluted at ~100 mM imidazole on a 10-500 mM imidazole gradient. Protein-containing fractions were collected and dialyzed against a buffer containing 25 mM Tris-HCl and 20% glycerol (pH 7.5). The protein was then passed through a DEAE column and target fractions were collected and concentrated.
The protein solution was then applied onto a size exclusion column (Superdex 200 16/60, 120 ml, GE Healthcare). The purified protein was concentrated in a buffer containing 25 mM Tris-HCl and 500 mM NaCl (pH 7.5), and the purity and molecular weight of the protein were checked by SDS-PAGE.
dsRNA Competition by TLR10 and TLR3 ECD
In-house synthesized TLR10-ECD (corresponding to residues 20-576 of the reference sequence, NCBI accession number AAY78486.1) or commercially available TLR3 ECD (ab73825, Abcam) was used at a concentration of 61.9 nM and incubated with 50 ng biotin-poly(I:C) (average size: 1.5-8.0 kb; final concentration: 1 µg/ml) in PBS at pH 5.5 at 37°C in a volume of 50 µl for 1 h, with gentle mixing at 15 min intervals. After incubation, the mixture was further incubated with 10 µl agarose streptavidin at 4°C for 16 h. Beads were washed thrice with ice-cold PBS at pH 5.5 supplemented with 0.05% Tween-20 and eluted by heating with 2× Laemmli buffer at 95°C for 10 min. For the TLR10 and TLR3 ECD competition, binding volume and elution volume were increased to twice those of the set-up in which an individual TLR ECD was used, while keeping the concentrations of both TLR ECDs and biotin-poly(I:C), as well as the eluate gel-loading volume for Western blotting analysis, constant.
RT-qPCR
Cells were lysed with RLT lysis buffer and total RNA was isolated using the RNeasy mini kit according to the manufacturer's instructions (QIAGEN). Following quantification by NanoDrop, 1 µg of RNA from each sample was used for reverse transcription using SuperScript VILO (Life Technologies). Gene expression levels were monitored using the SYBR Fast qPCR master mix kit (KAPA Biosystems) with specific primers, and signals were detected by a LightCycler LC480 Instrument II (Roche). Fold change of target gene expression was determined by the 2^(−ΔΔCT) method, using the β-ACTIN gene expression level as the internal reference.
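The 2^(−ΔΔCT) fold-change calculation reduces to simple arithmetic on Ct values, with β-ACTIN as the reference gene here; a minimal sketch (the function name and example Ct values are illustrative, not from the paper):

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method.

    dCt = Ct(target) - Ct(reference), computed for treated and control
    samples; ddCt = dCt(treated) - dCt(control); fold change = 2^-ddCt."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Example: the target Ct drops from 28 to 25 while the reference stays
# at 18, i.e., three fewer cycles after normalization:
# fold_change_ddct(25.0, 18.0, 28.0, 18.0) -> 8.0 (8-fold induction).
```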
Statistical Analysis
All statistical analyses were performed using an unpaired two-tailed Student's t-test in GraphPad Prism 6.01 software. Data are presented as mean with SEM. p-Values <0.05 were considered statistically significant.
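The summary statistics used throughout (mean with SEM) follow the standard formula SEM = s/√n; a stdlib-only sketch (the function name is ours, and Prism computes the same quantity from replicate values):

```python
from math import sqrt
from statistics import mean, stdev

def mean_sem(values):
    """Return (mean, standard error of the mean) for a list of replicates.

    SEM = sample standard deviation / sqrt(n); requires n >= 2."""
    n = len(values)
    return mean(values), stdev(values) / sqrt(n)
```

For example, three replicates [1.0, 2.0, 3.0] give a mean of 2.0 with SEM 1/√3 ≈ 0.577.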
RESULTS

Sub-Cellular Localization of TLR10
The cellular localization of a TLR has a key influence on its functions in sensing ligands (24, 25). Here, confocal microscopy was used to define the sub-cellular localization of TLR10. As in our previous flow cytometry study of a resting monocytic cell line (THP-1) (21), TLR10 was detected on the cell surface but was more abundant intracellularly (Figure 1A). Markers of intracellular organelles were used to investigate the co-localization of TLR10 in different cellular compartments. TLR10 was predominantly expressed in endosomes, with the highest expression detected in RAB11A+ recycling endosomes and RAB5+ early endosomes (Figure 1B). The expression level of TLR10 was high in the endoplasmic reticulum and RAB7+ late endosomes but relatively lower in the Golgi apparatus. Sub-cellular localization of TLR10 in primary human MDM was also investigated (Figure S1 in Supplementary Material). As in THP-1 cells, TLR10 was detected in different organelles with a high co-localization of TLR10 with endosomal markers in MDM. This suggests that the distribution of TLR10 in the human THP-1 cell line closely resembles that in primary human macrophages. As genetic manipulation of these cells is needed for functional studies, the THP-1 cell line was used in most of the subsequent experiments.
To confirm the specificity of the anti-human TLR10 antibodies used in this study, we tested the antibodies used for Western blotting and immunofluorescence staining in WT and TLR10 genetically modified THP-1 cells. In Western blotting, TLR10 protein expression correlated well across these cell types: TLR10 protein was more and less abundant in TLR10 OE and KD cells, respectively, compared with WT cells (Figure S2A in Supplementary Material). Immunofluorescence staining data were also in agreement with the Western blotting data, and expression in TLR10 OE cells showed stronger intensity compared with that of WT cells. In particular, there was a significant differential intracellular expression in TLR10 OE vs WT cells (Figure S2B in Supplementary Material). These data confirmed the quality and specificity of the antibodies used in this study.
dsRNA Is a Ligand for TLR10 Sensing and Signaling

Toll-like receptor-10 has been demonstrated to play a role in the response to different immunological stimulations (10, 16-21); however, none of these studies provided evidence to demonstrate what the true ligand(s) for TLR10 sensing and signaling could be. Our finding of high TLR10 expression in early endosomes of resting cells suggested that TLR10 might be a nucleic acid sensing receptor (23, 26). While poly(I:C) is the only nucleic acid candidate so far reported to trigger TLR10-dependent signaling (18), no previous data have shown that poly(I:C) could bind TLR10 as its ligand. Thus, we investigated whether poly(I:C) is a ligand for TLR10, as well as the mechanism of its signaling.
To study the TLR10-specific biological function, TLR10 OE and KD cells were employed and compared with the WT THP-1 cells (21). Differential expression of TLR10 mRNA and protein in OE and KD cells was systematically checked using reverse transcription-quantitative PCR (RT-qPCR) (Figure 2A) and Western blotting (Figure S2A in Supplementary Material), respectively, and was found to be consistently maintained throughout this study. Poly(I:C) was used to stimulate these three types of cells both at the cell surface with naked poly(I:C) and transfected intracellularly via cationic lipid mediated delivery. When poly(I:C) was transfected, it potently induced a type I IFN response in WT THP-1 cells (Figure 2B, intracellular). Relative to the induction of IFNβ in WT cells, a significantly higher level of induction occurred in TLR10 KD cells, while overexpression of TLR10 reduced the IFN response (Figure 2B, intracellular). Naked poly(I:C) could also induce IFNβ expression in WT cells but to a much lesser extent compared with transfected poly(I:C), while IFNβ expression in TLR10 OE or KD cells showed no difference relative to WT cells (Figure 2B, surface). As β-ACTIN was used for RT-qPCR normalization, expression of β-ACTIN in response to ligand stimulations was monitored. The expression of β-ACTIN was found to be stable, with no significant difference with or without ligand stimulation (Figure S2C in Supplementary Material), verifying its suitability as a house-keeping gene for RT-qPCR normalization in this study. Differential responses to intra- and extracellular stimulation with poly(I:C) have been reported, with a 1,000- to 10,000-fold increase for intracellular transfection (27, 28). These data suggest that sensing of poly(I:C) by TLR10 likely occurs in intracellular compartments, to a certain extent in accordance with the abundant expression of TLR10 found intracellularly compared with that on the cell surface.
Poly(I:C) stimulated the TLR10-mediated IFN response in a time- and dose-dependent manner. Changes in the expression of type I IFN among WT, OE, and KD THP-1 cells were consistent and showed a significant difference as early as 1 h and at 3 and 6 h post-stimulation (Figure 2C), and at concentrations from 10 to 40 µg/ml (Figure 2D).
As specific features of nucleic acids have been reported to be crucial for the activation of certain PRRs (29, 30), a synthetic 5′pppdsRNA (dsRNA WT) and its variant with structural modifications and improved antiviral properties (dsRNA M5) (31) were tested for the effect of triphosphorylation at the 5′ end of dsRNA on TLR10 sensing and signaling. Given that dsRNA and 5′pppdsRNA are known to be sensed by TLR3 and RIG-I, respectively, we first checked whether the basal expression of these receptors was affected in the TLR10 OE and KD cells. Basal expression of TLR3 and RIG-I was not affected in TLR10 genetically modified cells (Figure 2E). As with the poly(I:C) challenge, IFNβ expression was significantly induced by these 5′pppdsRNAs in WT THP-1 cells, while overexpression and KD of TLR10 suppressed and upregulated IFNβ expression, respectively (Figure 2F). Although expression of IFNβ was higher with 5′pppdsRNA, this differential expression of IFNβ in OE or KD cells relative to WT cells was comparable with that observed with poly(I:C), suggesting that TLR10 signaling does not preferentially sense 5′pppdsRNA over dsRNA. TLR10-regulated type I IFN expression specifically responds to dsRNA, as stimulation with 2′3′-cGAMP, a ligand of the endoplasmic reticulum adaptor STING, showed a much smaller level of type I IFN induction (a 5.5-fold increase) in WT THP-1 cells and was not significantly different from that in TLR10 OE and KD THP-1 cells (Figure 2F).
Taken together, the down-regulation of the IFN response in TLR10 OE cells implies that TLR10 negatively modulates IFN responses after dsRNA stimulation.
Binding of dsRNA to TLR10 Requires Acidic pH
In vitro binding assays (23, 32) of TLR10 and dsRNA were performed at pH 5.5, as high-affinity ligand binding and signaling of nucleic acid sensing TLRs depend on a pH environment (33, 34) similar to that within endosomes (pH 4.5-6.5) (35-37). TLR10 was readily pulled-down in vitro using biotinylated poly(I:C) as bait at pH 5.5 (Figure 3A, lane 3). Addition of unlabeled poly(I:C) markedly decreased the amount of TLR10 pulled-down (Figure 3A, lane 4), suggesting that TLR10 specifically bound poly(I:C). At pH 7.4, the physiological pH (38-40) of the cell culture medium and cell surface in the experiments above, TLR10 pull-down was not detectable (Figure 3B). This is consistent with the hypothesis that binding of dsRNA to TLR10 occurs in acidic compartments such as endosomes and not at the cell surface.
Intracellular Interactions Between TLR10 and dsRNA Confirmed by FRET Assay
A co-localization study of the spatial association of TLR10 with poly(I:C) showed that, after ligand transfection, fluorophore-labeled poly(I:C) co-localized with TLR10 in RAB5+ early or RAB7+ late endosomes, but this was barely detectable in RAB11A+ recycling endosomes (Figure 4A). A similar result was observed in primary human MDM cells, showing co-localization of TLR10 and poly(I:C) in endosomes (Figure S3 in Supplementary Material).
Next, we performed a FRET assay to investigate whether TLR10 and poly(I:C) are sufficiently close to interact with each other. Fluorescently labeled poly(I:C), serving as the FRET acceptor, was transfected into THP-1 cells. Endogenous TLR10 detected by a fluorophore-conjugated antibody served as the FRET donor in the assay. To avoid artifacts that may be introduced by a single acceptor photo-bleaching step, gradual acceptor photo-bleaching was chosen (41). Signal from both the TLR10-corresponding channel (donor) and the poly(I:C)-corresponding channel (acceptor) within the bleached region of interest (Figure 4B, white circled) was monitored in real-time throughout photo-bleaching, which started after the fifth frame. After photo-bleaching of the acceptor, the fluorescence intensity of the donor corresponding to TLR10 increased markedly in the bleached region, with a calculated mean FRET efficiency of 50.4 ± 12.7%. This result suggests that TLR10 and poly(I:C) are in very close proximity and further implies that TLR10 and dsRNA interact directly with each other.
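The acceptor-photobleaching FRET efficiency reported above follows the standard relation E = 1 − F_donor(pre)/F_donor(post); a minimal sketch, assuming background-corrected donor intensities averaged over the bleached region (the function name is ours; ZEN's FRET module computes this per pixel):

```python
def fret_efficiency(donor_pre: float, donor_post: float) -> float:
    """Acceptor-photobleaching FRET efficiency E = 1 - F_pre / F_post.

    If energy transfer was occurring, destroying the acceptor de-quenches
    the donor, so donor fluorescence rises after the bleach
    (F_post > F_pre) and E > 0."""
    if donor_post <= 0:
        raise ValueError("post-bleach donor intensity must be positive")
    return max(0.0, 1.0 - donor_pre / donor_post)
```

For example, a donor intensity that doubles after the bleach corresponds to an efficiency of 0.5, i.e., roughly the ~50% mean efficiency reported here.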
MyD88 Is Recruited to TLR10 Upon dsRNA Stimulation
The highly conserved BB-loop of the TIR domain of TLRs was shown to interact with the TIR domains of signal-activating adaptor proteins (24, 42-44), and an alanine/proline residue in the BB-loop confers adaptor binding specificity (42, 45). Except for TLR3, all known human TLRs, including TLR10, have a proline residue in the BB-loop and are thought to bind MyD88 (Figure 5A). TLR3 has alanine in this position and binds TIR-domain-containing adaptor-inducing IFNβ (TRIF), yet mutation of this alanine residue to proline is sufficient to switch the TLR3 signaling adaptor from TRIF to MyD88 (45). Based on sequence similarity, MyD88 would be expected to be an adaptor protein for TLR10 signaling. Co-localization of MyD88 with TLR10 upon poly(I:C) stimulation was investigated. Cells not transfected with poly(I:C) displayed little or no co-localization of MyD88 and TLR10 (Figure 5B). However, on transfection with poly(I:C), co-localization of MyD88 with TLR10 was observed as early as 5 min post-ligand stimulation, with increased levels at 10 min and a subsequent decline. Recruitment of MyD88 by TLR10 upon stimulation was further confirmed by immunoprecipitation (Figure 5C). In a mock transfection [without poly(I:C)], MyD88 was barely detectable in samples immunoprecipitated with an anti-TLR10 antibody, while a stronger interaction between MyD88 and TLR10 was detected upon poly(I:C) stimulation (Figure 5C). As in the co-localization study, the strongest interaction was detected at 10 min post-stimulation and decreased gradually afterward. Immunoprecipitation of TLR10 did not pull down TRIF (Figure 5C). Thus, MyD88, but not TRIF, is actively recruited to TLR10 upon poly(I:C) stimulation.
TLR10 Regulates dsRNA-Mediated IFN Responses via IRF7
Interferon regulatory factors (IRFs) play an essential role in regulating IFN expression following TLR signaling (46). IRF3 and IRF7 are thought to regulate expression of type I IFN in response to dsRNA stimulation (46). Activation of IRF7 is characterized by phosphorylation of its C-terminus by the IKK-related kinases TBK-1 and IKKε, followed by IRF dimerization and nuclear translocation (47). TLR10 signaling might reduce the phosphorylation of IRFs, subsequently leading to reduced type I IFN production. Therefore, IRF phosphorylation was examined following transfection of poly(I:C) into WT and TLR10 OE THP-1 cells. Phosphorylation of IRF7 stably increased in response to poly(I:C) challenge in WT cells, while it was markedly reduced in TLR10 OE cells (Figure 6A). Phosphorylation of IRF3 showed no differences with respect to TLR10 overexpression and was similar in both cell types at all times following poly(I:C) challenge (Figure 6B).
Interferon regulatory factor-3 is constitutively expressed in most cells, while IRF7 can be induced in response to the activation of PRRs or type I IFN-mediated signaling (48, 49). The mRNA levels of both IRF7 and IRF3 were compared; IRF3 expression was unchanged throughout the course of the experiment (Figure S4B in Supplementary Material). Taken together, the data demonstrated that TLR10 regulates the type I IFN response through IRF7 but not IRF3. TLR10 signaling modulates not only IRF7 activity but also its expression.
Expression of soluble luciferase in a reporter THP-1 cell line under the control of an IRF-inducible promoter (five IFN-stimulated response elements and an ISG54 minimal promoter) allows quantification of the induction of type I IFN signaling responses through the level of luciferase activity. siRNA against TLR10 (si-TLR10) was introduced into reporter THP-1 cells to compare their luciferase activities with cells treated with non-targeting control siRNA, in response to poly(I:C) stimulation. A significant increase in luciferase activity was seen in reporter cells transfected with anti-TLR10 siRNA relative to that in cells transfected with the control siRNA, further proving that the IRF-mediated type I IFN response is augmented in a TLR10-deficient environment (Figure 6C).
TLR10 Competes With TLR3 for dsRNA Binding
In this study, we demonstrated direct binding of poly(I:C) to TLR10, while poly(I:C), either added to the culture medium or transfected directly into the cells (34, 50, 51), is also a known ligand that activates TLR3 signaling (52).
Since TLR3 is predominantly expressed intracellularly, ligands need to be transported into receptor-containing organelles to activate its signaling. A study demonstrated that CD14 enhances dsRNA-mediated TLR3 activation by directly binding poly(I:C) and mediating cellular uptake of extracellular poly(I:C) (53). Ligands can be transfected into cells with cationic liposomes, and dsRNA-liposome complexes are thought to be delivered to the endosome where they activate TLR3 (34, 50, 54, 55). Here, we found that transfected poly(I:C) co-localized with both TLR10 and TLR3 in endosomes (Figure 7A), suggesting that transfected poly(I:C) could interact with both TLRs in endosomes. A recent paper suggested that the inhibitory effect of TLR10 on IFNβ production may involve TLR3 (18), but the mechanism remains undefined. Here, we investigated the mechanism involving crosstalk between TLR10 and TLR3. We generated a recombinant TLR10 ECD to investigate ligand sequestration from TLR3. Biotinylated poly(I:C) was incubated with TLR10-ECD or TLR3-ECD alone, or both together, under acidic pH conditions. The ECD-dsRNA complexes formed were pulled-down and analyzed by immunoblotting (Figures 7B,C). TLR10-ECD or TLR3-ECD alone could be efficiently pulled-down when individually incubated with biotinylated poly(I:C) (Figures 7B,C, lane 3), suggesting that TLR10-ECD itself, like TLR3-ECD, is sufficient for dsRNA binding without the need for another protein as a co-receptor in vitro. Notably, when TLR10-ECD and TLR3-ECD were incubated with biotinylated poly(I:C) together, the pulled-down amounts of both TLR10-ECD and TLR3-ECD were markedly reduced (Figures 7B,C, lane 4). This suggests that TLR10 competes with TLR3 and sequesters dsRNA from TLR3 binding. To further examine this, we determined the effect of TLR3/10 double KD on IFN expression. WT, TLR3 KD, TLR10 KD, and TLR3/10 double-KD THP-1 cells were challenged with poly(I:C) and expression of IFNβ was assayed (Figure 7D).
As expected, KD of TLR10 and TLR3 led to significant up- and down-regulation of IFNβ, respectively, while expression of IFNβ in TLR3/10 double-KD cells was restored to a level similar to that of WT cells. These data suggest that TLR3 and TLR10 have opposite effects on IFN expression and that there is crosstalk between TLR3 and TLR10, with sequestration of dsRNA from TLR3 by TLR10 a contributory factor in regulating the dsRNA-mediated IFN response. Furthermore, when TLR10 was overexpressed, the expression of TLR3 (Figure 7E) was significantly suppressed in response to poly(I:C) stimulation, suggesting that TLR10 not only sequesters ligand from TLR3 but also regulates TLR3 expression to inhibit its signaling.
We also found that sterile alpha and TIR motif-containing protein 1 (SARM1) was reduced upon poly(I:C) challenge, while its expression was rescued by TLR10 overexpression (Figure 7F). SARM1 is a negative regulator of TLR signaling: activation of the TRIF-dependent pathway, e.g., TLR3 signaling, is suppressed by SARM1, which associates with TRIF and inhibits the downstream signaling (56). Here, we found that expression of SARM1 was enhanced by TLR10 overexpression, suggesting another possible regulatory mechanism by which TLR10 suppresses TRIF-dependent TLR3 signaling to regulate IFN expression.
DISCUSSION
Although TLRs have been implicated in many diseases (4-7, 10, 12, 13), TLR10 is unique among the human TLRs in the limited knowledge that exists about its ligand(s), signaling, and function. Several ligands, as well as bacterial and viral infections (16, 17, 19, 21), have been described to elicit a TLR10-dependent response; however, no solid evidence has been provided so far to establish what the true ligands of TLR10 for its sensing and signaling could be. In this work, we provided several lines of evidence to demonstrate that dsRNA is in fact a true ligand for TLR10 sensing and signaling, thereby identifying a previously unrecognized role of TLR10 as a novel nucleotide-sensing receptor. We also revealed that TLR10 competes with TLR3 for ligand binding and proposed a model to illustrate the mechanisms for the dual functions of TLR10 in the regulation of dsRNA-mediated IFN signaling (Figure 8).
To date, only nucleic acid sensing TLRs have been reported to be located in early or late endosomes in resting cells (23, 26). The expression and localization of TLR10 in RAB5+ early and RAB7+ late endosomes of resting cells is in concordance with the nucleic acid sensing nature of TLR10. However, co-localization of poly(I:C) and TLR10 in RAB11A+ recycling endosomes was barely observed. RAB11A+ recycling endosomes are known to transport cargo back to the plasma membrane after endocytosis (57). It is possible that TLR10 localized in RAB11A+ endosomes is trafficked to the plasma membrane to exert its function (16, 17) and trigger the same or different signaling as it does intracellularly, an aspect that deserves further investigation.
Compartmentalization of nucleic acid sensing TLRs is useful to prevent aberrant activation by self-nucleic acids released from dead cells. Mis-localized TLRs have been demonstrated to be activated by self-nucleic acids and have been suggested as a cause of autoimmune diseases (58). For example, intracellular localization of TLR9 prevents recognition of self-nucleic acids but facilitates access to viral nucleic acids entering via the endocytic pathway. Endosomes provide the acidic pH needed for increased affinity of nucleic acid sensing receptors for their respective ligands (36, 59). Since TLR10 binds dsRNA at acidic endosomal pH, it is possible that this is utilized by TLR10 as a control mechanism to minimize aberrant receptor activation.
Previous functional studies on TLR10 mainly focused on the regulation of pro-inflammatory cytokines such as IL-6 and IL-8 (10, 17, 19). A recent paper suggested the involvement of TLR10 in the poly(I:C)-induced IFN response, possibly via TLR3, suggesting crosstalk of TLR10 with other TLR signaling pathways (18), but the mechanism remained unexplored. In our present study, we investigated the mechanistic aspects of TLR10 biological function by demonstrating its direct ligand competition with TLR3 through sequestering poly(I:C). Furthermore, we also propose several mechanisms in the regulation of TLR3 signaling by TLR10: suppressing TLR3 expression and enhancing the negative regulator SARM1 to inhibit TRIF-mediated signaling.
Our data further demonstrated a novel signaling pathway of TLR10. In the canonical TLR signaling pathway, dsRNA sensed by TLR3 leads to a TRIF-IRF mediated IFN response. Our data suggested that activation of TLR10 by poly(I:C) triggered the recruitment of MyD88, but not TRIF, and reduced phosphorylation of IRF7, while IRF3 was not affected. Previously, it has been shown that upon the activation of TLR9, the death domain of MyD88 can directly interact with an inhibitory domain of IRF7 but not IRF3 (60). Thus, the recruitment of MyD88 upon TLR10 stimulation demonstrated in this study suggests a TLR10-mediated MyD88-IRF7 axis to regulate IFN expression. The mechanism(s) of IRF7 inhibition by TLR10, and whether additional adaptors are involved in this interaction, deserve further investigation.
Previous studies have shown contradictory roles of TLR10, either enhancing or suppressing the cytokine response (16)(17)(18)(19)(20)(21). Our present data demonstrate the mechanism of crosstalk between TLR10 and TLR3 and put forward a new way to explain the divergent roles of TLR10. The involvement of TLR10 in the cytokine response upon different stimulations could largely depend on whether TLR10 signals alone or in combination with one or more PRRs. Especially in the scenario of pathogen infections, microbial components can act as PAMPs that trigger a diverse array of PRRs, so the complicated crosstalk between TLR10 and other PRRs in different disease pathogeneses will be an important area for future study. Since murine TLR10 is a pseudogene, investigation using a classical knock-out approach is not possible in a mouse model. The generation of human TLR10 knock-in mice has successfully demonstrated that TLR10 has a role in controlling the immune response in vivo (18). Our next important goal is to study the functional relevance of TLR10, as well as its crosstalk with other PRRs and their signaling, employing transgenic human TLR10 knock-in mice in combination with PRR antagonist(s) or signaling inhibitor(s) in response to microbial infections. The discovery of a ligand for TLR10 is a major step in understanding the biological function of this hitherto orphan receptor. In this study, we provide different lines of evidence suggesting that dsRNA is a ligand for TLR10 and that TLR10 is a novel nucleotide-sensing receptor. Our work provides mechanistic insight explaining two major roles of TLR10 in regulating the IFN response upon dsRNA stimulation: first, recognition of dsRNA as a nucleotide-sensing receptor for TLR10 signaling, which involves MyD88 and IRF7 to modulate the IFN response; and second, sequestration of dsRNA from TLR3, thereby inhibiting TLR3 signaling and thus IFN expression.
Results here demonstrate the mechanism underlying the crosstalk between TLR10 and TLR3, which opens up a new concept in the regulation of the IFN response by TLR10. Besides ligand sequestration, sequestration of signaling molecules between TLR10 and TLR3 or other PRR signaling pathways will be the next step in understanding the mechanistic details of this regulation.
As there is increasing evidence for the involvement of TLR10 in different disease pathogeneses, we believe that these new findings not only provide important fundamental insights into the immunobiology of TLR10, but also underscore the importance of further investigating the role and functional relevance of TLR10 in disease. Modulation of TLR10 signaling may thus provide a unique option to fine-tune fundamental physiological pathways involved in disease pathological conditions.
Handling noise in image deblurring via joint learning
Currently, many blind deblurring methods assume blurred images are noise-free and perform unsatisfactorily on blurry images with noise. Unfortunately, noise is quite common in real scenes. A straightforward solution is to denoise images before deblurring them. However, even state-of-the-art denoisers cannot guarantee to remove noise entirely, and slight residual noise in the denoised images can cause significant artifacts in the deblurring stage. To tackle this problem, we propose a cascaded framework consisting of a denoiser subnetwork and a deblurring subnetwork. In contrast to previous methods, we train the two subnetworks jointly. Joint learning reduces the effect on deblurring of the residual noise after denoising, and hence improves the robustness of deblurring to heavy noise. Moreover, our method is also helpful for blur kernel estimation. Experiments on the CelebA dataset and the GOPRO dataset show that our method performs favorably against several state-of-the-art methods.
INTRODUCTION
This work is on blind deblurring of a single blurry image with noise. The fundamental blur model is N = I * P + n, where N is the blurred image, I is the sharp image, * is the convolution operator, P is the blur kernel, and n is the noise term. The blur kernel P is also known as the point spread function (PSF). Priors-based approaches and deep learning based approaches are the two major kinds of approaches to blind deblurring.
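The blur model above is easy to simulate; the following numpy sketch (the box PSF, image size, and noise level are arbitrary illustrative choices, not the paper's settings) synthesizes a blurred noisy observation:

```python
import numpy as np

def blur_and_noise(I, P, sigma, rng):
    """Synthesize N = I * P + n: circular convolution via FFT plus AWGN."""
    # fft2(P, s=I.shape) zero-pads the kernel to the image size.
    B = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(P, s=I.shape)))
    n = rng.normal(0.0, sigma, size=I.shape)
    return B + n

rng = np.random.default_rng(0)
I = rng.random((32, 32))
P = np.ones((5, 5)) / 25.0        # simple normalized box PSF for illustration
N = blur_and_noise(I, P, sigma=0.1, rng=rng)
```

Real motion PSFs are of course not box filters; this only illustrates the degradation model itself.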
Priors-based approaches, e.g., [1,2], are usually based on the uniform blur model (Eq. 1), which assumes the blur kernel P is spatially invariant. However, most motion blurs in real scenes are non-uniform because different objects have diverse moving trajectories. Deep models like DeblurGAN [3], SRN [4], GFN [5], and Inception GAN [6] are excellent at deblurring noise-free images with complex non-uniform blurs. Nevertheless, they are trained on noise-free images and can hardly deblur noisy images (see Fig. 1(b)). A straightforward idea is to denoise noisy images before deblurring them. This idea has two major problems. First, denoised images commonly still contain slight noise (Fig. 1(e)), which is propagated into the deblurring networks and jeopardizes the deblurring stage (Fig. 1(f)). Second, denoisers (e.g., BM3D [7] and DnCNN [8]) usually rely on noise level estimation, which leads to significant artifacts if the estimation is inaccurate. If we underestimate the noise level, the denoised image remains noisy (Fig. 1(c, d)). If we overestimate it, the denoised image is over-smoothed and blurrier.
To our knowledge, there are few prior works handling noise in image deblurring. One important attempt [9] combined directional filtering with a noise-aware kernel estimation algorithm; however, that work was limited to uniform blurs and slight noise. Anger et al. [10] proposed refining the ℓ0 prior [11]; despite strong robustness and short running time, their work was also limited to uniform blurs. In this work, we propose a Noisy Images Deblurring Framework (NIDF) composed of a denoiser subnetwork and a deblurring subnetwork cascaded in series. Specifically, we propose a loss function L_joint (Eq. 5) to train the two subnetworks jointly. Unlike most deblurring methods, which are not noise-robust, NIDF can generate sharp images from blurry images in the presence of noise. Different from DnCNN [8], which trains a corresponding model for each noise level, we train NIDF under mixed noise levels. As a result, NIDF adapts to various noise intensities and does not require noise level estimation during training or inference. Extensive experiments on the CelebA [12] dataset and the GOPRO [13] dataset show that joint learning significantly improves performance.
Network Architecture
We present the NIDF architecture in Fig. 2. We discover that deblurring networks do not work well under noise, while denoisers are usually immune to blurs. Therefore, we place the denoiser subnetwork (Net1) before the deblurring subnetwork (Net2). The proposed cascaded structure has two major advantages. First, its relatively light structure speeds up inference. Some approaches boost performance by exploiting sophisticated architectures, e.g., SRN [4] employs a multi-scale structure, and [6] uses multiple dense IRD blocks. These architectures benefit deblurring but do not help to handle noise, and they also slow down inference. Second, compared with merging the two subnetworks into an all-in-one architecture, the cascaded structure allows Net1 and Net2 to work independently. Inspired by [3,5,14], we use a U-shaped encoder-decoder [15], which has been shown to be effective in image restoration, for both subnetworks. Different from DeblurGAN [3], we do not employ adversarial training, which is unstable. Compared to [5,14], which use parallel branches for joint deblurring and super-resolution, we use the cascaded structure because the deblurring subnetwork performs better on denoised images than on the original noisy images.
Loss Functions
The proposed NIDF takes a noisy image N as input and produces a denoised image B1 and a sharp image I1 simultaneously (Eq. 2): B1 = Net1(N), I1 = Net2(B1). We denote by B and I the ground truth images corresponding to B1 and I1.
For pretraining, we define the loss function L_denoiser (Eq. 3), which involves only the denoiser (Net1) in Fig. 2, and similarly the loss function L_deblurring (Eq. 4), which involves only Net2. For joint learning, we define the joint loss function L_joint (Eq. 5). By minimizing L_joint, I1 gets closer to I whether or not B1 contains residual noise. Without L_joint, Net2 is independent of Net1 and unable to output a sharp I1 from a noisy B1.
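The explicit formulas of Eqs. 3-5 are not reproduced in this extraction, so the following sketch assumes simple L1 reconstruction terms purely for illustration; the paper's actual loss terms may differ:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return np.abs(a - b).mean()

# Hypothetical forms of Eqs. 3-5 (our assumption, not the paper's formulas).
def loss_denoiser(B1, B):          # Eq. 3: depends only on Net1's output
    return l1(B1, B)

def loss_deblurring(I1, I):        # Eq. 4: depends only on Net2's output
    return l1(I1, I)

def loss_joint(B1, B, I1, I):      # Eq. 5: ties both subnetworks together
    return l1(B1, B) + l1(I1, I)
```

The key design point survives regardless of the exact terms: L_joint backpropagates the deblurring error through Net2 into Net1, so Net1 learns to produce denoised outputs that Net2 can actually deblur.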
Datasets Setup
We choose 113,831 training samples and 100 test samples from CelebA [12] to build a synthesized dataset. For each sharp face I, we generate a square PSF P of side length 2l + 1 (3 ≤ l ≤ 24) using the random walk algorithm [3]. We first resize I to 256 × 256, then convolve I with P to obtain the blurry face B = I * P (Eq. 6). The GOPRO [13] dataset includes 2103 training images and 1111 test images (1280 × 720). For each sharp image I, the blurry image B is generated by averaging the 100 frames nearest to I. Most blurs in the GOPRO dataset are non-uniform.
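A minimal sketch of random-walk PSF generation; this is a simplification of the trajectory-based kernel generation of [3], with step count and seed chosen arbitrarily:

```python
import numpy as np

def random_walk_psf(l, steps=200, seed=0):
    """Generate a (2l+1) x (2l+1) PSF by accumulating a random walk."""
    rng = np.random.default_rng(seed)
    size = 2 * l + 1
    P = np.zeros((size, size))
    x = y = l                           # start at the center of the kernel
    for _ in range(steps):
        P[y, x] += 1.0
        # Take a unit step in each axis, clipped to the kernel boundary.
        x = int(min(max(x + rng.integers(-1, 2), 0), size - 1))
        y = int(min(max(y + rng.integers(-1, 2), 0), size - 1))
    return P / P.sum()                  # normalize so the PSF sums to 1

P = random_walk_psf(l=5)
```

Normalizing the kernel to unit sum keeps the blurred image at the same mean brightness as the sharp one.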
For both datasets, given the blurry image B, we generate the noisy image N by adding AWGN: N = B + n, n ~ N(0, σ²), where σ is chosen from {10, 20, 30, 40} with equal probability. N is the input of NIDF, and B, I are the corresponding ground truth images of B1, I1 in Eqs. 2-5.
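The mixed-noise synthesis described above can be sketched directly (pixel values here are assumed to be on a 0-255 scale, matching the σ values):

```python
import numpy as np

def add_awgn(B, rng):
    """N = B + n with n ~ N(0, sigma^2), sigma drawn uniformly from {10,20,30,40}."""
    sigma = rng.choice([10, 20, 30, 40])
    n = rng.normal(0.0, sigma, size=B.shape)
    return B + n, sigma

rng = np.random.default_rng(1)
B = np.zeros((16, 16))
N, sigma = add_awgn(B, rng)
```

Drawing σ per sample is what lets a single model adapt to various noise intensities without a noise-level estimate at inference time.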
Training Details
During the training stage, we first pretrain Net1 and Net2 separately. After they converge, we use the joint loss function L_joint to train Net1 and Net2 simultaneously.
We use PyTorch to implement NIDF. All experiments are performed on an NVIDIA Tesla M40 GPU and a Xeon E5-2680 v4 @ 2.40 GHz CPU with 256 GB memory. We use original face images from CelebA and 256 × 256 patches randomly cropped from GOPRO for training. The batch size is set to 16 and the optimizer is Adam [16].
In the pretraining stage, we set the learning rate to 10⁻⁴. We train both networks for 150 epochs on GOPRO and 3 epochs on CelebA because the training set of CelebA is much larger than that of GOPRO. We input N and B into Net1 and Net2 respectively, then minimize L_denoiser (Eq. 3) and L_deblurring (Eq. 4) to train the two subnetworks separately.
In the joint learning stage, we only input N into NIDF. We train another 150 epochs on GOPRO and 5 epochs on CelebA by minimizing L joint (Eq. 5) with learning rate 10 −5 .
PSF Estimation
Given only a blurry and noisy image N, can we estimate the PSF P in Eq. 6? The estimated blurry face B1 and sharp face I1 can be produced from N by NIDF. According to Eq. 6 and the convolution theorem, the estimated P (denoted as P̂) can be calculated as P̂ = F⁻¹(F(B1)/F(I1)) (Eq. 7), where F denotes the Fourier transform. However, it is difficult to generate very accurate I1 and B1 when noise is severe, so Eq. 7 may not be precise enough. Pan et al. [17] proposed guiding face deblurring with exemplars: for a blurry face B1, they search databases for an exemplar S1 whose edges are close to those of B1. Nevertheless, their method requires extensive searching, and ideal exemplars may not exist in the databases. In our work, we directly use I1 as an exemplar of B1 because I1 usually preserves fine face structure. The optimization objective, rewritten from [17] (Eq. 8), involves the latent sharp image I_l and the gradient operator ∇. We use the half-quadratic splitting technique of [17] to estimate P̂. Different from [17], we do not need to generate facial contour masks. We show an example of PSF estimation in Fig. 3. For other concatenations of denoisers and deblurring methods, we use the estimated sharp image as the exemplar of the denoised image to estimate the PSF via Eq. 8.
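Eq. 7 amounts to a spectral division. Because plain division by F(I1) is ill-conditioned wherever the spectrum is small, the sketch below adds a small Wiener-style regularizer eps (our addition for numerical stability, not from the paper):

```python
import numpy as np

def estimate_psf(B1, I1, eps=1e-3):
    """P_hat = F^-1( F(B1) * conj(F(I1)) / (|F(I1)|^2 + eps) ),
    a regularized version of the spectral division in Eq. 7."""
    FB, FI = np.fft.fft2(B1), np.fft.fft2(I1)
    return np.real(np.fft.ifft2(FB * np.conj(FI) / (np.abs(FI) ** 2 + eps)))

# Sanity check with a known kernel: the identity PSF (delta at the origin).
rng = np.random.default_rng(2)
I1 = rng.random((32, 32))
P = np.zeros((32, 32)); P[0, 0] = 1.0
B1 = np.real(np.fft.ifft2(np.fft.fft2(I1) * np.fft.fft2(P)))
P_hat = estimate_psf(B1, I1)          # recovers a peak at (0, 0)
```

When eps → 0 this reduces to Eq. 7 exactly; in practice eps trades bias for robustness to the residual errors in B1 and I1.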
EXPERIMENTAL RESULTS
We use the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM [18]) to evaluate NIDF. For the face deblurring task, we also use the kernel similarity [19] between the estimated PSF and the ground-truth PSF as another evaluation index. We evaluate NIDF on the synthesized CelebA dataset and the GOPRO dataset with additional noise. We compare our method with three kinds of baselines: (1) deblurring networks only; (2) combinations of denoisers and deblurring networks; (3) NIDF without joint learning, i.e., the two subnetworks in NIDF trained separately.
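PSNR, the first metric above, is straightforward to compute from the mean squared error (here for 8-bit images with peak value 255):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)          # constant error of 16 -> MSE = 256
print(psnr(a, b))                  # 10*log10(255^2 / 256) ≈ 24.05 dB
```

Higher is better; identical images give infinite PSNR, which is why the zero-MSE case is handled explicitly.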
We report the quantitative and qualitative results on the CelebA and GOPRO datasets in Tables 1-4 and Figs. 4 and 5. The concatenation methods, e.g., BM3D [7]+SRN [4] and BM3D [7]+FaceDeblur [20], often introduce ringing artifacts because of the residual noise. Besides, we assume that noise levels are known in our experiments for fair comparison. However, in real applications, degraded images are often contaminated with noise of unknown levels. When using BM3D or DnCNN, we have to estimate the noise levels very accurately to avoid artifacts, which is challenging and inconvenient. Conversely, NIDF can be used directly without image preprocessing. Although BM3D [7]+SRN [4] has higher PSNR than NIDF when noise is mild (σ ≤ 20), NIDF is more effective under severe noise (σ ≥ 30). Besides, BM3D+SRN is much slower than NIDF. We notice that NIDF without joint learning is inferior to NIDF with joint learning on both datasets; therefore, the performance improvement of NIDF mainly comes from joint learning.
We have also tried to compare NIDF with CBDNet [21], which is excellent at handling spatially variant noise. However, CBDNet produces evident artifacts when noise and blurs are severe (Fig. 4(c)). Table 3 shows that our proposed method is also beneficial to PSF estimation in the face deblurring task.
CONCLUSION
We propose a framework named NIDF to handle noise in image deblurring. Compared to previous deblurring methods, NIDF tackles a more realistic deblurring problem where the blurred images contain noise. Our work makes three major contributions. First, joint learning of the two subnetworks in NIDF significantly improves performance without increasing the model complexity. Second, NIDF does not require noise level estimation; additionally, for the face deblurring task, NIDF can estimate the PSFs satisfactorily without searching for exemplars. Third, extensive experiments show that our method is effective in terms of PSNR, SSIM, and kernel similarity [19]. We find that Pan et al. [22] performs excellently on PSF estimation under mild noise, but is inaccurate when the image is severely distorted. Our future work includes improvements on [22].
Lie Algebras and Generalized Thermal Coherent States
In this paper, we develop an algebraic formulation of generalized thermal coherent states for multi-modes within a Thermofield Dynamics approach, based on coset spaces of Lie groups. In particular, we apply our construction to the $SU(2)$ and $SU(1,1)$ symmetries and obtain their thermal coherent states and density operators. We also calculate their thermal quantum Fidelity and thermal Wigner function.
Introduction
The concept of coherent states was introduced by Schrödinger [1] in 1926, associated with classical states of the quantum harmonic oscillator. In 1963, Glauber [2,3,4] coined the term coherent state and showed that it adequately describes a coherent laser beam in Quantum Optics. At the same time, Klauder [5] presented a generalization through an over-completeness property. A group-theoretical formulation of the generalized coherent states was carried out independently by Perelomov [6,7] and Gilmore [8]. According to this construction, if G is a Lie group and H is the isotropy subgroup for the state |ψ0⟩ ∈ H (Hilbert space), the coherent states are defined by a generalized displacement operator acting on |ψ0⟩, with a one-to-one correspondence to the coset representation of G/H. A beautiful review can be found in reference [9].
In the context of Thermofield Dynamics (TFD), a real-time quantum field theory at finite temperature, a thermalized version of field coherent states was introduced by Khanna et al. [10]. A myriad of applications of TFD has been developed in Quantum Optics [11,12], Cosmology and String Theory [13,14], Gauge Theory [15], the Casimir effect [16], Quantum Dissipation [17], Quantum Entanglement [18] and Quantum Information [19].
The TFD formulation, like other formalisms based on the duplication of the degrees of freedom, is a natural candidate to be described by Hopf algebras [20,21]. TFD can be represented by a q-deformed Hopf algebra, allowing a classification of the unitarily inequivalent representations in Quantum Field Theory. Our interest lies in another algebraic approach based on Lie algebra representations, emphasizing observables and symmetry generators [22,23]. This approach is interesting for generalized coherent states since it provides a general prescription to define thermalized states based on representations of Lie algebras and the pure states.
The main purpose of this paper is to present a general formulation of generalized thermal coherent states for multi-modes, based on coset spaces of Lie groups, allowing one to explore the symmetries of any coherent state of a Lie algebra in a thermal scenario. To illustrate our formalism, we consider the symmetries of the su(2) and su(1, 1) Lie algebras in this scenario and calculate the thermal density operators. This calculation allows us to obtain quantities of interest for quantum information, such as the quantum Fidelity and the Wigner function. Here these quantities depend explicitly on the temperature, and we call them the thermal quantum Fidelity and the thermal Wigner function, respectively.
The structure of the paper is as follows. In Section 2 we review the construction of Perelomov and Gilmore, presenting as examples the harmonic oscillator, the compact su(2) and the non-compact su(1, 1) Lie algebras. In Section 3 we present the TFD formalism used to analyze thermal effects, based on the duplication of the Hilbert space. Section 4 is devoted to the main purpose of this paper: the construction of the thermal coherent state for an arbitrary Lie algebra with multi-modes. In Section 5 we apply our construction to the thermal coherent states of su(2) and su(1, 1), obtaining the thermal density operator, the thermal Fidelity and the thermal Wigner function.
Coherent State for an Arbitrary Lie Algebra
In 1972, Perelomov [6] and Gilmore [8] independently showed that the most consistent way to construct coherent states for an arbitrary Lie algebra is to generalize the concept of the displacement operator within a group-theoretical approach. Let H be the Hamiltonian of the system, with symmetry group G (for us, a Lie group) with Lie algebra g, and let H be the Hilbert space. Then: i) If we define the state |ψ0⟩ ∈ H as a reference state, the maximum stability subgroup, denoted by H, is the subgroup of G consisting of all group elements that leave the reference state invariant. ii) In the coset space G/H, every element g ∈ G has a unique decomposition into a product of two group elements, so for every reference state |ψ0⟩ we obtain a unique coset space; an arbitrary group element g ∈ G acts on |ψ0⟩ accordingly. iii) The coherent state for an arbitrary Lie algebra is defined by combining these ingredients, where Ω is the generalization of the displacement operator, which can be rewritten in terms of elements of the Lie algebra g.
Coherent State for the Harmonic Oscillator
The usual Hamiltonian of the harmonic oscillator is H = ω(a†a + 1/2), where ω is the frequency, ℏ = 1, and a† and a are the creation and annihilation operators, satisfying [a, a†] = I, [a, I] = 0, [a†, I] = 0, where I is the unit operator and [ , ] is the usual commutator. Consider the Hamiltonian eigenstates {|n⟩}, n ∈ N, which form a basis of a Hilbert space H. The set of operators {a†a, a†, a, I} spans a Lie algebra, denoted by w1. The associated Lie group is W1, the Heisenberg-Weyl group, whose Hilbert space is spanned by the eigenstates |n⟩. The maximum stability subgroup is U(1) ⊗ U(1), so that the coherent state of the harmonic oscillator is the one proposed by Glauber [2,3,4].
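For completeness, the displacement operator and the Glauber coherent state elided from this extraction are the textbook expressions:

```latex
|\alpha\rangle = D(\alpha)\,|0\rangle
             = e^{\alpha a^{\dagger} - \alpha^{*} a}\,|0\rangle
             = e^{-|\alpha|^{2}/2} \sum_{n=0}^{\infty}
               \frac{\alpha^{n}}{\sqrt{n!}}\,|n\rangle ,
\qquad a\,|\alpha\rangle = \alpha\,|\alpha\rangle .
```

The last relation, that |α⟩ is an eigenstate of the annihilation operator, is equivalent to the displacement-operator definition for the Heisenberg-Weyl group.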
Coherent State for the su(2) Lie Algebra
For the Lie group SU(2) with Lie algebra su(2), we have the operators {Jx, Jy, Jz} satisfying the commutation relations [Ji, Jj] = iε_ijk Jk, where ε_ijk is the Levi-Civita symbol. Setting J± = Jx ± iJy we have [Jz, J±] = ±J± and [J+, J−] = 2Jz. The SU(2) Lie group is compact, so every irreducible representation is finite dimensional and can be indexed by the symbol j. We can then define the Dicke states |j, m⟩, where J² = Jx² + Jy² + Jz² is the Casimir operator. For the reference state |j, −j⟩ the maximum stability subgroup is U(1), so that for the coset space SU(2)/U(1) the coherent state of su(2) is obtained with z/√(1 + |z|²) = ζ sin|ζ|/|ζ|, as proposed by Atkins [24] and Arecchi [25].
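The explicit su(2) coherent state built on the SU(2)/U(1) coset, elided from this extraction, has the standard Perelomov form (reproduced here as textbook material):

```latex
|z\rangle = e^{\zeta J_{+} - \zeta^{*} J_{-}}\,|j,-j\rangle
          = \left(1+|z|^{2}\right)^{-j}
            \sum_{m=-j}^{j} \binom{2j}{j+m}^{1/2} z^{\,j+m}\,|j,m\rangle ,
```

where z = (ζ/|ζ|) tan|ζ|, which is consistent with the relation z/√(1+|z|²) = ζ sin|ζ|/|ζ| quoted in the text.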
Coherent State for the su(1, 1) Lie Algebra
Another case that we analyze is the non-compact Lie group SU(1, 1), with Lie algebra su(1, 1) and generators {K1, K2, K0} satisfying the relations [K1, K2] = −iK0, [K2, K0] = iK1, [K0, K1] = iK2; with K± = ±i(K1 ± iK2) we also have [K0, K±] = ±K± and [K+, K−] = −2K0. Any irreducible representation is infinite dimensional, indexed by k and m. Taking the reference state |k, 0⟩, the maximum stability subgroup is again U(1). For the coset space SU(1, 1)/U(1) the coherent state is obtained with ζ = (α/|α|) tanh|α|, as proposed by Barut and Girardello [26], with Γ being the Gamma function.
Thermofield Dynamics
Thermal effects in quantum theory were introduced in a consistent way by: i) Matsubara [27] in 1955, with the imaginary-time formalism using the Wick rotation; ii) Schwinger [28] and Keldysh [29] in the sixties, with a real-time formalism using the closed-time-path formulation; and iii) Takahashi and Umezawa [30] in 1975, with the Thermofield Dynamics (TFD) formalism, which requires a doubling of the Hilbert space.
In this paper we explore the TFD formalism [10], whose main property is the duplication of the original Hilbert space, preserving the structure of the operator algebra and the commutation relations. The basic idea of this formalism is to look for a state |0(β)⟩, called the thermal vacuum, such that the ensemble average of an operator equals its mean value in this state. If we assume that |0(β)⟩ ∈ H, we can expand it in terms of a Hamiltonian basis |n⟩, resulting in ⟨n|0(β)⟩ = g_n(β). For the ensemble average to equal the mean value, a condition is imposed on the coefficients g_m(β) and g*_n(β), where δ_{n,m} is the Kronecker delta. Equation (21), resembling an orthogonality condition, cannot be satisfied by c-numbers, so |0(β)⟩ cannot be an element of the original Hilbert space. One possibility, explored by Takahashi and Umezawa [30], is to introduce a doubling of the Hilbert space H, denoted by H̃, such that a basis vector is given by |n, m̃⟩ ∈ H ⊗ H̃. The idea of doubling the Hilbert space to introduce thermal effects had already been proposed by Araki and Woods [31] in their work on Quantum Field Theory, so the doubled Hilbert space is not a feature unique to TFD.
In that case the resulting thermal vacuum is |0(β)⟩ ∈ H ⊗ H̃, and we can introduce a unitary transformation that maps the doubled vacuum |0, 0̃⟩ into the thermal vacuum, namely the Bogoliubov transformation U(β). In this way we can introduce the notion of a thermal operator, where β = 1/(k_B T), k_B is the Boltzmann constant and T is the temperature.
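The defining relations elided here are the standard TFD expressions for a system with discrete spectrum E_n and partition function Z(β):

```latex
|0(\beta)\rangle = U(\beta)\,|0,\tilde{0}\rangle
                 = Z(\beta)^{-1/2} \sum_{n} e^{-\beta E_{n}/2}\,|n,\tilde{n}\rangle ,
\qquad
\langle 0(\beta)|\,A\,|0(\beta)\rangle
   = Z(\beta)^{-1} \sum_{n} e^{-\beta E_{n}}\,\langle n|A|n\rangle
   = \langle A \rangle_{\beta} .
```

The second equality shows how the thermal vacuum reproduces the canonical ensemble average of any operator A acting on the original (untilded) space.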
Applications
In this section we apply our formulation to obtain the generalized thermal coherent states for the su(2) and su(1, 1) Lie algebras. We obtain their thermal density operators, which are used to calculate the thermal quantum Fidelity and the thermal Wigner function.
Thermal Coherent State of su(2)
Consider now atomic coherent states, also known as spin coherent states [9,25]. These states can be realized by Bose-Einstein condensates and applied in the analysis of entanglement in Quantum Information [33,34]. In this case, the representative coset is built from the doubled group SU(2) × SU(2) together with the Bogoliubov transformation U(β), with the corresponding commutation relations. A thermal coherent state of the harmonic oscillator can be built by considering the analogous coset for the Weyl algebra W1 and U†(β), with the resulting state |α(β)⟩ [10]; this procedure is consistent because the coherent state |α(β)⟩ reduces to the pure state |α⟩ in the limit T → 0 (β → +∞), where T is the temperature. The zero-temperature limit can become quite problematic in situations such as phase transitions, so our interest is in cases where the system remains in a single phase. According to this scheme we propose the state |z(β)⟩, where z/√(1 + |z|²) = η sin|η|/|η| and the Baker-Campbell-Hausdorff formula was used [9].
Moreover, by using the two-boson Schwinger representation and the Bogoliubov transformation U(β) = exp[−iG(β)], the state given by (45) has the following properties: i) non-orthogonality; ii) over-completeness, with the measure dµ(z(β), z*(β)) = [(2j + 1)/π] dz(β) dz*(β). Using the thermal average, we determine the density operator for the thermal coherent state of su(2), with coefficients C_{m,m′}; Eq. (51) is the density matrix associated to the state |z(β)⟩. In the limit T → 0 (β → +∞) we recover the state |z⟩ [9].
su(2) Thermal Fidelity
The Fidelity F is a measure of distance in the Hilbert space that plays an important role in Quantum Information [35]; F ∈ [0, 1] provides the distance between the su(2) thermal coherent state and the usual non-thermal su(2) coherent state. To calculate the Fidelity we use Eq. (51) in the expression (53) for the Fidelity. For T → 0 we have F → 1, so our su(2) thermal coherent state coincides with the usual non-thermal coherent state. An increase of temperature in the thermal state results in a growth of the distance to the non-thermal state.
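The explicit expression (53) is not reproduced in this extraction; as an illustration, the general Uhlmann fidelity between two density operators, which reduces to |⟨ψ|φ⟩|² for pure states, can be computed numerically as follows (a sketch, not the paper's formula):

```python
import numpy as np

def sqrtm_h(M):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm_h(rho)
    return np.real(np.trace(sqrtm_h(s @ sigma @ s))) ** 2

# Pure states |0><0| and |+><+|: F = |<0|+>|^2 = 1/2.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = 0.5 * np.ones((2, 2))
print(fidelity(rho, sigma))        # ≈ 0.5
```

Evaluating this with the thermal density matrix of Eq. (51) against the zero-temperature projector gives the temperature dependence described in the text: F = 1 at T = 0 and F < 1 as T grows.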
Thermal Wigner Function
The Wigner function is a quasi-probability distribution whose negative values are associated with the degree of non-classicality of the system [10]. Using Eq. (51), which carries all the information about the su(2) thermal coherent state, we can find the expression of the thermal Wigner function as a series in the associated Laguerre polynomials L_n^α. As an example, we plot the thermal Wigner function of the coherent state of su(2) in Fig. 1 for a temperature of 0.005 K.
5.2.1. su(1, 1) Thermal Fidelity
Similarly to the previous section, we study the Fidelity of these states, with the intention of comparing our su(1, 1) thermal coherent states with the non-thermal coherent states. In this case, using Eq. (66) we obtain the quantum Fidelity as a series over the number states. For T → 0 the Fidelity goes to F → 1, showing that at zero temperature we recover the usual state. For T > 0 the Fidelity is lower than 1, evidencing that the su(1, 1) thermal coherent state is a new state, differing from the usual case.
Conclusions
In this paper, we developed and presented the generalized thermal coherent state from the perspective of coset spaces of Lie groups, using the Thermofield Dynamics approach. This construction allows us to investigate the effects of temperature on the coherent state of an arbitrary Lie algebra for multi-modes. As applications, we calculated the thermal coherent states associated with the su(2) and su(1, 1) Lie algebras and obtained their thermal density operators. Furthermore, the thermal Fidelity and the thermal Wigner function were obtained. The thermal coherent states we obtained reduce to the original pure state in the limit T → 0 (β → +∞) for systems in the same phase. Notice that in the framework of quantum field theory, with the continuum-limit relation Σ_k → [V/(2π)³] ∫ d³k, we have ⟨ψ(β)|ψ(β′)⟩ → 0 for β ≠ β′ as V → ∞, as thoroughly discussed in the analogous context in reference [21]. In the infinite-volume limit, there is no unitary operator U(β) which maps the Hilbert space onto itself, i.e., the representations are unitarily inequivalent. As a perspective, an investigation of phase transitions is in progress.
On supersymmetry at high temperature
While it is possible to find examples of field theories with a spontaneously broken symmetry at high temperature, in renormalizable supersymmetric models any internal symmetry always gets restored. Recently, a counterexample was suggested in the context of non-renormalizable supersymmetric theories. We show that non-negligible higher-loop effects actually restore the symmetry, without compromising the validity of perturbation theory. We give some arguments as to why the proposed mechanism should not work in general.
I. INTRODUCTION
In general, whether or not symmetries in field theory are broken at high temperature is a dynamical question that depends on the parameter space of the theory in question. In spite of one's intuitive expectation of symmetry restoration [1], based on daily experience and proven correct in the simplest field-theory systems [1][2][3], one can easily find examples with symmetries broken at high temperature [2,4]. This is an important issue, due to its possible role in the production of topological defects in the early universe. Symmetry non-restoration at high temperature may provide a way out of both the domain wall [5,6] and the monopole problems [7][8][9].
A simple example of broken symmetry at high temperature is provided by supersymmetry itself. Due to the different boundary conditions for bosons and fermions in thermal field theory, supersymmetry is automatically broken at any non-zero temperature. However, for the issue of topological defects, one would like to know what happens to internal symmetries in supersymmetric theories. This question is nontrivial due to the highly constrained structure of SUSY models. It was addressed carefully more than ten years ago [10]: in contrast with the non-supersymmetric case, it was shown that supersymmetry necessarily implies restoration of internal symmetries at high temperature. At least, this is what happens in renormalizable theories.
Recently, this conclusion was questioned by Dvali and Tamvakis [11], precisely by resorting to non-renormalizable potentials. They present an explicit example in which the inclusion of a quartic term in the superpotential apparently allows for non-vanishing vevs at high temperature. Stimulated by their interesting suggestion, we have analyzed their example carefully, arriving however at the opposite conclusion. What happens, and what will be explained in detail below, is that the one-loop approximation used by them becomes invalid precisely because of the non-renormalizable nature of the superpotential. We find two-loop effects actually dominating the one-loop ones and leading to symmetry restoration. We must stress that this is not due to a breakdown of perturbation theory, but rather a general feature of theories with more than one coupling; in this sense it is completely analogous to the well-known Coleman-Weinberg idea [12]. There, the one-loop contribution to the Higgs self-coupling due to gauge interactions may be as large as (if not larger than) the tree-level one. We believe this point will become clearer after the detailed discussion of our results.
The bulk of this paper is devoted precisely to this, in our opinion, important question. In the next section we first give a brief review of Dvali and Tamvakis' work, and then present our findings. The idea raised in [11] can in fact be shown to provide a possible new and general mechanism, completely independent of SUSY, for symmetry nonrestoration at high temperature. We show in Section III, on equally general grounds, why it cannot work, due to the necessarily dominating higher-loop effects.
II. THE EXAMPLE: NON-RENORMALIZABLE WESS-ZUMINO MODEL
We take here the prototype model for symmetry nonrestoration of Ref. [11], which is basically a Wess-Zumino model with a discrete symmetry D: Φ → −Φ and the addition of a non-renormalizable interaction term (1), with M ≫ µ. This leads to the scalar potential (2), where φ = (φ₁ + iφ₂)/√2 is the scalar component of the chiral Wess-Zumino superfield Φ. Notice that φ₁ has a negative quartic self-coupling. At T = 0, as usual, one finds a set of two degenerate minima: φ = 0 and φ² = µM. To see what happens at high T, in Ref. [11] the usual one-loop induced correction (4) to the effective potential is computed. Dvali and Tamvakis conclude that for M² ≫ T² ≫ µM, one gets φ² ≠ 0, as is immediately clear from (3). Before we move on to question this statement, let us see what really is going on up to this point. As is transparent from (4), one can attribute (as usual) the symmetry breaking to the negative T² mass term for φ₁. However, the quartic self-coupling of φ₁ in (2) being negative, one cannot ensure a nonvanishing vev. It is necessary to assume that either the φ₁⁶ term in (2) or the φ₁⁴ term in (4) (both positive, but suppressed by M²) dominates over the (µ/M)φ₁⁴ term, in order for φ₁ to have a vev. Now, as we have required T² ≫ µM, this is perfectly acceptable. But this is precisely where the problem lies: one has assumed that the non-renormalizable terms are not small in comparison with the renormalizable ones. Notice that this does not put in question the validity of perturbation theory, since the φ⁴ terms are suppressed by the small parameter µ/M. This is the analogy with the Coleman-Weinberg case that we drew before. Perturbation theory is perfectly safe, since the next term in the series would be of order φ⁸/M⁴, or T²φ⁶/M⁴, which are strongly suppressed by T/M ≪ 1 or φ/M ≪ 1.
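A superpotential consistent with the features quoted above (minima at φ = 0 and φ² = µM, a negative quartic self-coupling for φ₁, and a positive sextic term suppressed by M²) — with a unit coefficient on the non-renormalizable term taken here as an assumption — is:

```latex
W(\Phi) \;=\; \frac{\mu}{2}\,\Phi^{2} \;-\; \frac{1}{4M}\,\Phi^{4},
\qquad
V \;=\; \left|\frac{\partial W}{\partial \phi}\right|^{2}
  \;=\; \left|\,\mu\,\phi - \frac{\phi^{3}}{M}\,\right|^{2}.
```

Along the real direction (φ₂ = 0) this gives V = (µ²/2)φ₁² − (µ/2M)φ₁⁴ + φ₁⁶/(8M²), which indeed exhibits the negative (µ/M)φ₁⁴ and positive φ₁⁶/M² terms discussed in the text, with degenerate minima at φ₁ = 0 and φ₁² = 2µM, i.e., φ² = µM.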
The question is: what about a term such as T⁴φ²/M², which is obviously much bigger than (µ/M)T²φ²? Notice that this is the only relevant term that one could have missed, and whose sign would decide the pattern of symmetry breaking, if present. Once again, the idea is to write the expansion in 1/M, but due to the fact that one has two different couplings to start with (namely, (µ/M)φ⁴ and φ⁶/M²), one cannot resort to the usual loop expansion, since T⁴φ²/M² does not appear at one loop. Again, notice the complete parallel with the Coleman-Weinberg analysis. There, assuming a small self-coupling λ for the Higgs field, one finds an important φ⁴ term proportional to g⁴ (g being the gauge coupling constant) only at one loop, without implying the failure of perturbation theory.
We have found that such a term, T⁴φ²/M², does indeed appear at the two-loop level, through the diagrams depicted in Fig. 1.
Using the superpotential (1) and the usual rules for the evaluation of Feynman diagrams in thermal field theory [2,3], it is straightforward to calculate this contribution, given in (5). We wish to stress again that this term, in the range of parameters considered (M² ≫ T² ≫ µM), dominates over the mass term in (4), and therefore must be taken into account. Since it is positive, the conclusion is contrary to the one in Ref. [11]: the discrete symmetry is restored at high temperature.
As we said before, this result is valid up to leading order in an expansion in φ/M and T/M. As long as we stay far away from M, perturbation theory guarantees symmetry restoration. The reader may still feel uneasy about the consistency of calculations performed in a non-renormalizable theory. For this reason we have also performed the calculations in the renormalizable version of the theory. That is, as suggested in Ref. [11], one can consider a renormalizable superpotential which, upon integrating out the heavy field X, gives (1) after identifying M = M_X/2λ². Not surprisingly, considering all the graphs and in the limit M ≫ T, the same correction (5) for the effective potential is obtained.
III. GENERAL DISCUSSION
We have seen that, unfortunately, the suggestion put forward by the authors of Ref. [11] does not work. Here we should point out that the question raised by them has a much more general significance. Namely, if their idea were to work, it would pave the way to a new mechanism for avoiding symmetry restoration at high temperature, completely independently of supersymmetry.
To explain what is going on, let us recall the basics of symmetry nonrestoration at high temperature in renormalizable theories. Consider a theory with two scalar fields φ₁, φ₂ and a potential (7) symmetric under D. The self-couplings λᵢ must be positive for the potential to be bounded from below, but one can always choose α > 0 and β₁, β₂ > 0 in (7), and require λ₁λ₂ > α². The high-temperature corrections are given in (8). By asking, e.g., α > 3λ₁, one can keep one of the mass terms negative at any temperature, thus keeping the symmetry broken at any T. Notice that the signs of the scalar interactions and the corresponding T-dependent terms are equal. We have seen in Section II that this feature persists in non-renormalizable theories.
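A schematic form of such a potential and its leading thermal masses, under an assumed normalization (and omitting the β₁, β₂ terms of (7) for brevity), would read:

```latex
V \;\supset\; \frac{\lambda_{1}}{4}\,\phi_{1}^{4}
  + \frac{\lambda_{2}}{4}\,\phi_{2}^{4}
  - \frac{\alpha}{2}\,\phi_{1}^{2}\phi_{2}^{2},
\qquad
\Delta m_{i}^{2}(T) \;=\; \frac{T^{2}}{12}\,\bigl(3\lambda_{i} - \alpha\bigr).
```

In this normalization the condition α > 3λ₁ makes ∆m₁²(T) negative at all temperatures, while λ₁λ₂ > α² keeps the potential bounded from below, matching the discussion above.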
In theories with a single field, this mechanism of nonrestoration would apparently be impossible, since the self-coupling must be positive in order to guarantee the boundedness of the potential. Here precisely enters the point of Ref. [11]: they have a negative quartic interaction for the φ₁ field, but the theory is rendered stable through the positive non-renormalizable φ₁⁶ term.
One could extend this mechanism to an arbitrary non-supersymmetric theory. To see why this cannot work in general, let us take the example of a real scalar field with a negative and small quartic self-coupling and a discrete symmetry D: φ → −φ, where we include the first important non-renormalizable term. The power n varies from model to model (n = 1 in the case discussed above). The idea of Ref. [11] is based on two important points: ε > 0 and ε ≪ 1.
Notice that the non-renormalizable term makes the theory stable independently of the sign of ε. At the one-loop level, one gets for T ≪ M the corrections (10). The idea is then that the temperature-induced non-renormalizable term is to combine with the one coming from the negative self-coupling to induce a vev when ∆V(T) starts to dominate, i.e., for T² ≫ µ². But of course, for this to happen one has to assume that the non-renormalizable term is not negligible, i.e., ε very small. Here comes the point: if the non-renormalizable term is not negligible, one has to take into account its contributions to the thermal mass. This means that the expansion cannot end at one loop, but has to be pursued up to n + 1 loops. At that level, the "butterfly" diagrams with n + 1 loops and two external legs, of which Fig. 1(a) is the n = 1 example, will induce the high-temperature mass contribution ∆V_{n+1-loops}|_{mass term}(T), given in (11). Any other term in the expansion in the couplings 1/M^{2n} and ε will be suppressed. Each loop in the diagram will provide a positive contribution T²/12, so the sign of (11) is the sign of the coupling. A positive mass term already indicates that the symmetry will be restored; however, one should look at all the temperature-dependent interactions that follow from the non-renormalizable terms. The diagrams that give the dominant 1/M^{2n} contribution to the φ^{2m} interaction terms are again the "butterflies" with 2m external legs, and they are readily calculated, as given in (12). All the terms of the series have a positive sign; not surprisingly, as we mentioned before, the high-T contributions carry the sign of the coupling constant. Symmetry restoration then follows.
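The restoration argument can be illustrated numerically for the n = 1 case: with the one-loop truncation alone, the thermally induced negative quartic produces a nonzero minimum, while adding a positive two-loop mass term of order T⁴φ²/M² pushes the minimum back to φ = 0. The sketch below uses illustrative coefficients, not the precise ones of Eqs. (10) and (11):

```python
import numpy as np

# Illustrative parameters obeying the hierarchy M^2 >> T^2 >> mu^2.
mu, M, T = 1.0, 1.0e3, 1.0e2
eps = mu / M                     # small coupling of the negative quartic, eps << 1

phi = np.linspace(0.0, 50.0, 100001)

# One-loop truncation: thermal mass term from the negative quartic
# (~ -eps*T^2*phi^2), the negative quartic itself, and the stabilizing
# phi^6/M^2 term.
V1 = 0.5 * (mu**2 - 0.25 * eps * T**2) * phi**2 \
     - 0.25 * eps * phi**4 + phi**6 / M**2

# Two-loop "butterfly" contribution: a positive mass term ~ T^4 phi^2 / M^2.
V2 = V1 + 0.5 * (T**4 / M**2) * phi**2

phi_min_1loop = phi[np.argmin(V1)]   # nonzero: apparent symmetry breaking
phi_min_2loop = phi[np.argmin(V2)]   # zero: symmetry restored
print(phi_min_1loop, phi_min_2loop)
```

With these numbers the one-loop minimum sits near φ ≈ 24 (still far below M, so the 1/M expansion is sensible), while the two-loop mass term restores the minimum to φ = 0.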
We can easily generalize (12) to obtain the "butterfly" contribution to the high-temperature effective potential of an arbitrary V(φ), given in (13).
IV. CONCLUSIONS
According to our results, the idea of Ref. [11] of using higher dimensional effective interactions to provide nonrestoration of internal symmetries in a supersymmetric context seems not to work. This, in turn, would confirm the general result [10] which was proved only for renormalizable supersymmetric theories.
We have also offered arguments of why we believe this to hold in general. However, admittedly, we do not have a rigorous proof of symmetry restoration in the multifield nonrenormalizable supersymmetric case. The purpose of our paper is in a sense twofold: first, since the issue is so important, it was crucial to know whether the discussed mechanism for nonrestoration [11] is valid or not; and second, we hope that it may inspire the reader to provide either a rigorous proof of the no-go theorem [10], or a way out.
Fabrication of a Monolithic Implantable Neural Interface from Cubic Silicon Carbide
One of the main issues with micron-sized intracortical neural interfaces (INIs) is their long-term reliability, with one major factor stemming from material failure caused by the heterogeneous integration of the multiple materials used to realize the implant. Single-crystal cubic silicon carbide (3C-SiC) is a semiconductor material that has long been recognized for its mechanical robustness and chemical inertness. It has the benefit of demonstrated biocompatibility, which makes it a promising candidate for chronically stable, implantable INIs. Here, we report on the fabrication and initial electrochemical characterization of a nearly monolithic, Michigan-style 3C-SiC microelectrode array (MEA) probe. The probe consists of a single 5 mm-long shank with 16 electrode sites. An ~8 µm-thick p-type 3C-SiC epilayer was grown on a silicon-on-insulator (SOI) wafer, followed by a ~2 µm-thick epilayer of heavily doped n-type (n⁺) 3C-SiC in order to form the conductive traces and electrode sites. Diodes formed between the p and n⁺ layers provided substrate isolation between the channels. A thin layer of amorphous silicon carbide (a-SiC) was deposited via plasma-enhanced chemical vapor deposition (PECVD) to insulate the surface of the probe from the external environment. Forming the probes on a SOI wafer allowed the probes to be easily removed from the handle wafer by simple immersion in HF, thus aiding in the manufacturability of the probes. Free-standing probes and planar single-ended test microelectrodes were fabricated from the same 3C-SiC epiwafers. Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) were performed on test microelectrodes with an area of 491 µm² in phosphate buffered saline (PBS) solution. The measurements showed an impedance magnitude of 165 kΩ ± 14.7 kΩ (mean ± standard deviation) at 1 kHz, an anodic charge storage capacity (CSC) of 15.4 ± 1.46 mC/cm², and a cathodic CSC of 15.2 ± 1.03 mC/cm².
Current-voltage tests were conducted to characterize the p-n diode, n-p-n junction isolation, and leakage currents. The turn-on voltage was determined to be on the order of ~1.4 V and the leakage current was less than 8 µA rms. This all-SiC neural probe realizes nearly monolithic integration of the device components to provide a likely neurocompatible INI that should mitigate the long-term reliability issues associated with chronic implantation.
Introduction
Implantable neural interfaces offer a method for external electronic devices to be connected to the central nervous system (CNS) in order to stimulate or record neurological signals, such as action potentials or multi-unit extracellular potentials, with the additional benefit of high spatial and temporal resolution. This interface forms a link for direct communication with the CNS through which the complex activities of neurons can be decoded to control active prosthetic devices or to stimulate one or more neural circuits to restore or enhance physiological functions [1].
Attempts to understand the electrophysiology of the nervous system started in the 17th century with stimulation of frog sciatic nerve [2]. Later in the 19th century, stainless steel wire electrode arrays were first implanted in the amygdala nuclei of monkeys and cats to investigate brain activity [3]. This was followed by the implantation of tungsten microelectrodes in the visual cortex of cats to investigate the behavior of individual cortical cells [4]. Study of the visual cortex, which requires a denser array of electrodes, drove a transition from metal wire electrode arrays to silicon-based three-dimensional microelectrode arrays (MEAs), such as the Utah array, which was introduced in the late 1980s [5,6]. This design minimized the electrode area and, as a result, allowed for higher spatial resolution during recording and stimulation of small populations of neurons, as well as utilized a reliable and repeatable manufacturing process. The high density of electrode sites, ability to individually address each electrode site, high-throughput fabrication, and compatibility with integrated circuit fabrication processes has made silicon an attractive material for high density, electrical neural interface applications.
A milestone in the development of silicon-based implantable intracortical neural interfaces (INIs) was the Michigan probe, introduced in 1970 [7], which employed multiple electrode sites on a single shank for chronic intracortical stimulation of, or recording from, single neurons [8]. Nevertheless, the occurrence of mechanical, material, and biological failures, both acute and chronic [9], has been a major factor in the questionability of silicon- and metallic-based micro-INIs for human utilization. Mechanical failures, in the form of lead or connector breakage, material degradation, or insulation delamination, and biological failures, such as bleeding, cell death, meningitis, gliosis, and fibrotic encapsulation and extrusion, have been reported elsewhere [10]. In one report, collected from an evaluation of 78 silicon-based intracortical MEAs chronically implanted in rhesus macaques, nearly half of the chronic failures happened within the first year [11]. The majority of those chronic failures (53%) were reported as biological failure caused by meningeal encapsulation and extrusion from the tissue. These results indicate the importance of a mechanically and chemically robust INI that offers better compatibility with the CNS to provide long-term recording and stimulation capabilities.
In recent years, researchers developing neural implants have turned their focus to flexible materials and designs to develop tissue-like INIs that address both mechanical and form factor compatibility. One implementation is an ultra-flexible, polymer-based probe in which a metal layer is sandwiched between two layers of SU-8 polymer [12]. Although this polymer-metal probe, and other similar designs [13,14], have shown a reduced immune response and were able to record action potentials and stimulate neurons, difficulty in the fabrication of these polymer-based devices, insertion of flexible polymer probes into the brain, and oxidation still remain fundamental issues [15]. Another method used to enhance the biocompatibility of neural probes is the application of coatings that alter surface chemistry to provide hemostatic or immunomodulatory support [16]. In one example [17], an L1 protein coating was used to reduce microglial surface coverage. However, surface coatings lose effectiveness over time, leading to increased impedance and reduction in the recorded signals, and, in some cases, the mechanisms through which modulation of neurodegeneration and the corrosive behavior of encapsulating cells occurred were unclear [18].
For an INI to stimulate and record neural signals reliably over many years, both the choice of materials and their homogeneity must be carefully taken into consideration. Crystalline silicon carbide (SiC) is a semiconductor with a short bond length that gives it high physical resilience and chemical inertness. One of the important properties of SiC is that it displays polymorphism, which results in numerous single-crystal forms, with the principal forms being hexagonal (i.e., 4H- and 6H-SiC) and cubic (i.e., 3C-SiC). SiC has been used in both the high-power electronics and MEMS industries [19,20]. It has also demonstrated a high degree of biological tolerance in vitro [21][22][23][24]. In addition, amorphous SiC (a-SiC), which provides excellent electrical insulation, has also shown good compatibility with neural cells [24][25][26] and has previously been used in the fabrication of several types of MEAs [27][28][29][30][31][32]. The properties of crystalline and amorphous SiC, and the results of previous studies, indicate that SiC can address the interrelated issues of INI biocompatibility and long-term reliability.
In our previous work, we reported the fabrication and characterization of nearly monolithic MEAs made from 4H-SiC, a hexagonal polytype of crystalline SiC, with a-SiC insulation [33]. However, the manufacture of these devices, as well as their release from the bulk SiC wafer, made them difficult to fabricate and costly. Here we report on the design and fabrication of a Michigan-style SiC neural probe on a silicon-on-insulator (SOI) wafer for ease of manufacture. The probe is composed of 3C-SiC, which was epitaxially grown on a SOI wafer. A heavily doped n-type (n⁺) 3C-SiC film was grown on a moderately doped p-type SiC layer, forming a p-n junction. The n⁺ layer was used to form the traces and electrode sites, eliminating the need for metallic conductive traces and metallic electrode sites that are in direct contact with the CNS tissue. The p-n junction structure provides substrate isolation between the conductive traces. A thin film of a-SiC was deposited via plasma-enhanced chemical vapor deposition (PECVD) on the probe to provide insulation from the external environment. The oxide buried in the SOI wafer served as a sacrificial layer, allowing the SiC probe to be released from the wafer with a selective wet etch process. This new fabrication approach, based on an all-SiC probe design, eliminates the residual stresses typically found in similar devices consisting of stacks of heterogeneous films. It is expected that this approach will enhance the long-term material stability of implantable neural probes in the CNS, therefore increasing device reliability over many years. However, now that the manufacture of the probes has been demonstrated, follow-on studies in laboratory animals are required to support this hypothesis and are in the planning stages.
Materials and Methods
The all-SiC neural probe was developed using variations of standard silicon semiconductor micromachining processes. This started with epitaxial growth of a 3C-SiC film on a SOI wafer [20], followed by patterning of the 3C-SiC epitaxial films via thin film contact photolithography techniques. This was followed by the subsequent etching of features using a deep-reactive ion etcher (DRIE), deposition of a conformal a-SiC film via PECVD, and a final probe definition etch through the buried oxide layer via a DRIE process. The final step was the release of the finished device from the substrate SOI wafer by wet etching the buried oxide layer. The thickness of the doped epitaxial films was measured using cross-section scanning electron microscopy (SEM) and the composition was verified through energy-dispersive X-ray spectroscopy (EDS). No S-peak was observed in the EDS spectrum, indicating that the device surface was free of chemical residue from the etch processes. A commercial connector (Nano Strip, Omnetics Connector Corporation, Minneapolis, MN, USA) was used to make the electrical connections to the probe. Planar single-ended test microelectrodes were fabricated from the same epiwafer material as the implants for ease of electrical testing. Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) in a phosphate buffered saline (PBS) solution, as well as p-n junction isolation and leakage current tests, were conducted on the test microelectrodes to electrically characterize the fabricated probes.
Epitaxial Growth of 3C-SiC on SOI
A 100 mm diameter SOI ((100) Si-oriented) wafer, with an ~26 µm silicon film on top of a ~2 µm buried thermal oxide layer, was used for fabrication of the all-SiC neural probes reported here. The growth process started with epitaxial growth of an ~8 µm p-type 3C-SiC film on the SOI wafer, followed by an ~2 µm heavily doped n-type (n⁺) film, using a hot-wall reactor (LPE Epitaxial Technology, Baranzate, Italy) [34]. Heavy doping of semiconductors results in semi-metallic performance, which is the case for 3C-SiC. For this to be achieved, an n⁺ doping density of ~10¹⁹ dopants/cm³ is required. Hydrogen (H₂) was used as the carrier gas [19], ethylene (C₂H₄) as the carbon precursor, and trichlorosilane (SiHCl₃) as the silicon precursor gas. The epitaxial growth temperature was set to ~1370 °C with a process pressure of ~75 Torr. The C:Si ratio was kept between 0.8 and 1.2 throughout the epitaxial growth process. Aluminum and nitrogen were the p and n dopants, respectively, and were introduced during the epitaxial growth process [35,36]. The doping level of the top n⁺ 3C-SiC film was measured with a LEI 2017b Mercury (Hg) Probe (Lehighton Electronics, Inc., Lehighton, PA, USA) [33].
Fabrication of All-SiC Neural Probe
The fabrication process sequence is shown in Figure 1. First, the epiwafer with the SiC films was cleaned in a solvent and then an RCA bath. It was then dipped in hydrofluoric acid (HF, 49%, J. T. Baker, Inc., Phillipsburg, NJ, USA) diluted in water (50:1) to remove any oxide that may have formed on top of the epitaxial 3C-SiC layer, followed by a DI water rinse and N₂ dry. Next, the wafer surface was functionalized with HMDS (hexamethyldisilazane; Microchemicals GmbH, Ulm, Germany) and a 15-18 µm layer of AZ 12XT-20PL positive photoresist (Microchemicals GmbH) was spun on top at 1500 rpm. After a soft-bake at 110 °C for 3 min, the photoresist was patterned by UV exposure (110 mJ/cm²) with a Quintel mask aligner and then baked at 90 °C for 1 min. The wafers were re-hydrated at ambient conditions for 2 h and then developed with AZ300 developer (Microchemicals GmbH). The patterned photoresist was thick enough to allow for a ~3 µm deep etch of the epitaxial film using an Adixen AMS 100 DRIE. This process used oxygen (O₂) at 10 sccm and sulfur hexafluoride (SF₆) at 90 sccm. The pressure inside the chamber was set to 5.7 mTorr and the sample holder temperature was set to −20 °C. The sample holder power was kept at 550 W, while the source RF power was 1800 W. This process formed the traces and electrode sites on the probes.
A ~250 nm layer of a-SiC was deposited on the sample using PECVD (Unaxis 790, PlasmaTherm, Saint Petersburg, FL, USA). Methane (CH₄) and silane (SiH₄, 5% in He) were used as the reactive gases to produce the a-SiC, with flow rates of 200 sccm and 300 sccm, respectively. Helium (He), with a flow rate of 700 sccm, was used as the carrier gas. The RF power was set to 200 W, the substrate temperature to 300 °C, and the pressure to 1100 mTorr [37,38]. Following photoresist patterning using AZ 15nXT-450 CPS negative photoresist (Microchemicals GmbH), a reactive ion etch (RIE; PlasmaTherm) was run for 210 s to open windows in the a-SiC film for the electrode sites. Tetrafluoromethane (CF₄) and O₂, at 37 sccm and 13 sccm respectively, were used as the process gases. The power was set to 200 W and the chamber pressure to 50 mTorr. In order to package the probes for electrical testing, metal bonding pads were formed on one end of the traces (for the implants, these are located on a tab that would reside outside the skull of the animal during in vivo testing). A 20 nm titanium (Ti) film, followed by a 200 nm gold (Au) film, was deposited without breaking vacuum in an electron beam evaporator and patterned using a lift-off process. Thermal annealing was performed in a rapid thermal processor at 650 °C for 10 min to create an ohmic contact at the interface between the semiconductor and the metal [39]. This process sequence formed the contact pads for the commercial connector, which was used to connect the electrodes to external electronics.
The last step of the fabrication process was probe release. The same DRIE etch recipe that was used for formation of the traces was employed in an etch-through process to define the probes, except that the duration was increased to 15 min in order to ensure a complete etch through the 3C-SiC epitaxial films and the top silicon layer. A scrap piece of the epiwafer was cleaved and cross-section SEM was used to determine the 3C-SiC epilayer and Si device layer thicknesses so that the proper etch depth and mask thickness were selected. After removing the photoresist, the etch depth was measured using a contact profilometer (Dektak 150, Veeco, Plainview, NY, USA). The probes were released via wet etch of the sacrificial oxide layer with HF (49%). They were then carefully removed from the HF solution, rinsed with DI water, and dried with N₂. To remove the backside silicon, the probes were adhered upside-down to a Si handle wafer with ~1 µm of thermally grown oxide on top, using a thin photoresist layer, and placed in the DRIE. The residual Si was removed using the same DRIE recipe used for the definition of the electrodes and traces.
P-N Junction Isolation and Leakage Evaluation
Since p-n junctions are formed between the n⁺ and p epitaxial films, back-to-back diodes are present between adjacent traces, which provides isolation. This isolation was evaluated by measuring the forward and reverse blocking voltages of test structures consisting of p-n diodes and n-p-n junctions formed between adjacent traces that were built on the 3C-SiC wafer. A Keithley 2400 SourceMeter (Tektronix, Inc., Beaverton, OR, USA) was used to generate current-voltage (I-V) plots for adjacent traces to observe these voltages. The voltage was increased from −10 V to +10 V at a rate of 0.1 V/s for the diodes and n-p-n junctions, and the observed currents recorded. The forward voltage was estimated using a semi-logarithmic current scale I-V plot [40]. The breakdown voltage occurs when the current rapidly increases during application of negative voltage. The root mean square (rms) of the current amplitude between the breakdown and forward potentials for the diodes was defined as the reverse leakage current [33]. The threshold current for defining the breakdown voltages was 10 µA.
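The procedure above (threshold-based forward and breakdown voltages, and an rms leakage current over the window between them) can be sketched as follows; the I-V curve here is a synthetic diode-like trace with an ~1.4 V turn-on, not measured data:

```python
import numpy as np

# Synthetic I-V sweep: -10 V to +10 V at 0.1 V steps, mimicking the measurement.
v = np.arange(-10.0, 10.1, 0.1)
# Toy diode-like current (A): exponential turn-on near ~1.4 V plus soft leakage.
# The exponent is clipped only to avoid overflow far above turn-on.
i = 1e-15 * (np.exp(np.clip(v, None, 3.0) / 0.06) - 1.0) + 1e-6 * np.tanh(v / 5.0)

threshold = 10e-6  # 10 uA threshold defining the forward and breakdown voltages

# Forward voltage: first sweep point where the current exceeds the threshold.
v_forward = v[np.argmax(i > threshold)]

# Breakdown voltage: most negative point where |i| exceeds the threshold;
# this toy curve never breaks down, so fall back to the start of the sweep.
rev = (v < 0) & (np.abs(i) > threshold)
v_breakdown = v[rev][0] if rev.any() else v[0]

# Reverse leakage: rms of the current between breakdown and forward potentials.
window = (v >= v_breakdown) & (v <= v_forward)
i_leak_rms = np.sqrt(np.mean(i[window] ** 2))
print(v_forward, i_leak_rms)
```

In the actual measurement the breakdown point is detected from the rapid current rise under reverse bias; here the fallback branch simply takes the start of the sweep.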
Figure 1. (a) The process flow inside the red rectangle shows the cross-section at the electrode sites, while the blue rectangle provides the cross-section at the contact pads on the tab. (b) Starting SOI wafer; (c) ~8 µm of p-type 3C-SiC was grown on top, followed by ~2 µm of heavily doped n-type (n⁺) 3C-SiC. (d) The wafer was coated with photoresist and (e) patterned via photolithography. (f) A DRIE process was used to form the conductive n⁺ mesas and (g) a thin a-SiC insulating layer was deposited on top via PECVD.
(h) Photoresist was then patterned with photolithography and (i) the a-SiC was etched to form windows for the electrode sites using a RIE process. (j) After the a-SiC windows were opened, a layer of titanium, followed by gold, was deposited on the contact pads and thermally annealed. A deep DRIE etch through both epi layers and the oxide was performed to (k1) define the probes and (k2) form through-holes in the contact pads. (l1, l2) The oxide layer was etched in HF (49%) to release the probes. (m1, m2) Back-thinning via DRIE was performed to remove the residual silicon from the SOI device layer.
Electrochemical Characterization of All-SiC Probes
Electrochemical characterization of the 3C-SiC electrodes was performed via CV and EIS evaluation. A three-electrode setup was used with a potentiostat (VersaSTAT 4, AMETEK, Inc., Berwyn, PA, USA) to adjust the voltage between the working and counter electrodes in the presence of a reference electrode. CV provided information on the charge transfer properties of the electrode-electrolyte interface and on the presence of electrochemical reactions and their reversibility. Potential limits of −600 mV and +800 mV, which is the electrochemical window for platinum (Pt), were chosen for CV because this allowed for direct comparison of our measurements with previously published results [27,32,41,42]. EIS provided complex impedance measurements (both magnitude and phase) at frequencies of interest, which were used to evaluate the performance of the n + 3C-SiC conductor traces and electrodes.
Planar test microelectrodes fabricated alongside the neural probes on the same wafer were used for CV and EIS measurements [33,37]. The measurements were performed at room temperature in PBS with a pH of 7.40 ± 0.01, which was adjusted with hydrochloric acid (HCl). The PBS was composed of 137 mMol NaCl, 2.7 mMol KCl, and 10 mMol Na 2 HPO 4 . The gas levels in the PBS were ambient and no bubbling was done. The counter electrode was Pt and the reference electrode was Ag|AgCl. EIS measurements were performed from 0.1 Hz to 1 MHz with an rms voltage of 10 mV. The current was recorded 12 times per decade and three repetitions were averaged. CV measurement was initiated from open circuit potential, swept to −600 mV, and increased to +800 mV at a rate of 50 mV/s. This cycle was repeated three times and results were averaged. Charge values were calculated from the CV I-V curve via numerical integration with the trapezoidal method, trapz, in MATLAB (MathWorks, Natick, MA, USA).
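The trapezoidal charge integration described above (MATLAB's trapz) can be sketched equivalently in Python. The current trace and time points below are illustrative, not measured CV data.

```python
# Sketch of the charge calculation described above: integrate the CV current
# over time (I dt) with the trapezoidal rule, splitting anodic (I > 0) and
# cathodic (I < 0) contributions. Values below are illustrative only.

def trapz(y, x):
    """Trapezoidal integration, equivalent to MATLAB's trapz(x, y)."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(y) - 1))

def cv_charges(t, i):
    """Split a CV current trace i (A) vs time t (s) into (anodic, cathodic) charge in C."""
    i_anodic = [max(c, 0.0) for c in i]
    i_cathodic = [min(c, 0.0) for c in i]
    return trapz(i_anodic, t), abs(trapz(i_cathodic, t))

# Illustrative trace: ~1 µA anodic half-cycle followed by ~1 µA cathodic half-cycle.
t = [0.0, 0.5, 1.0, 1.5, 2.0]
i = [1e-6, 1e-6, 0.0, -1e-6, -1e-6]
qa, qc = cv_charges(t, i)   # both 0.75 µC for this symmetric trace
```

A charge-balanced electrode, as reported in the Results, corresponds to qa ≈ qc.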
Epitaxial 3C-SiC Films
A cross-sectional SEM view of the wafer, which allows for accurate estimation of film thickness (n + -, p-SiC, Si device film, and buried oxide), is shown in Figure 2a. This figure highlights various layers and the approximate thickness of each layer on the wafer used for the fabrication. The two epitaxial 3C-SiC films were measured, and their combined thickness determined to be ~10 µm. The SOI Si device layer (~26 µm), as well as the thin (~2 µm) buried oxide layer, are also visible in this figure. The epitaxial n + film in the center of the wafer was quite rough, with a mean surface roughness of ~244 nm, and smoother near the wafer edge, with a mean surface roughness of ~21 nm. Figure 2b shows the surface morphology of the smooth n + layer, which was taken using a DI AFM (Dimension 3100). Although rough in the wafer center (Figure 2c), the surface roughness was low enough for thick layers of photoresist to properly cover the entire surface for the subsequent fabrication steps. However, this roughness would be expected to impact device electrical performance, particularly p-n diode leakage current.
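The "mean surface roughness" figures quoted are AFM Ra values, i.e., the mean absolute deviation of the surface height samples from their mean plane. A minimal sketch of that computation follows; the height data are made up, not AFM measurements.

```python
# Minimal sketch of AFM mean roughness Ra: the mean absolute deviation of the
# surface height samples from their mean. Height samples below are made up.

def mean_roughness(heights_nm):
    """Ra, in the same units as the input height samples."""
    mean_h = sum(heights_nm) / len(heights_nm)
    return sum(abs(h - mean_h) for h in heights_nm) / len(heights_nm)

# Illustrative height samples (nm) for a rough region; mean is 0, Ra is 24 nm.
heights = [0.0, 20.0, -20.0, 40.0, -40.0]
ra = mean_roughness(heights)
```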
Micromachines 2019, 10, x 6 of 14
Fabricated All-SiC Neural Probe
Epitaxial growth of single crystalline 3C-SiC with different types of doping enables realization of a nearly monolithic probe from homogeneous SiC material. The all-SiC probe is a Michigan-style, planar neural probe with 16 electrodes for recording and stimulating neurons. The connector tab has 18 metallic pads (approximately 0.8 mm by 0.4 mm) with through holes to which a commercial Omnetics connector is bonded. Two extra pads provide connections for the return and reference electrode wires. The diameter of the electrode sites is ~15 µm and the width of the traces is ~10 µm. Figure 3 shows the optical and SEM micrographs of a free-standing probe.
The probe's shank, which contains the traces and electrode sites, is shown in Figure 3b. This figure shows a scanning electron micrograph of the electrode sites, which have a-SiC windows on top to allow contact with the extracellular environment. The traces and electrode sites are mesas formed from the n + 3C-SiC film. There are no metallic components on the shank, which is a homogeneous structure consisting entirely of SiC. The pads, which are shown in Figure 3c, contain titanium and gold layers in order to provide ohmic connections to external electronic devices via the Omnetics connector. However, since the metallic pads are not in direct contact with brain tissue, the issues regarding delamination of metallic parts and compatibility with CNS tissue are not a concern.
Electrical and Electrochemical Characterization
The doping density of the top n + 3C-SiC film was determined by measuring the capacitance-voltage profile of the Schottky contact at 1 MHz, and ND − NA was estimated to be ~10 19 cm −3 . A similar measurement was also performed on the p-type epitaxial film exposed after DRIE processing, and NA − ND was estimated to be ~10 16 cm −3 . These measurements indicate the feasibility of p-n junction formation between the two epi films and the high electrical conductivity of the top semi-metallic n + film that formed the traces and electrode sites. EIS was done to confirm this expectation.
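The text does not detail the extraction procedure; conventionally, the doping density is obtained from the slope of a Mott-Schottky plot (1/C² versus V) of the Schottky contact, N = 2 / (q·ε·A²·|d(1/C²)/dV|). The sketch below shows that generic calculation; the relative permittivity of 3C-SiC and the contact area are assumed values, not taken from the paper.

```python
# Generic Mott-Schottky doping extraction:
#   N = 2 / (q * eps_r * eps0 * A^2 * |d(1/C^2)/dV|)
# Constants below are assumptions for illustration, not the paper's values.

q = 1.602e-19            # elementary charge, C
eps0 = 8.854e-14         # vacuum permittivity, F/cm
eps_r = 9.72             # relative permittivity of 3C-SiC (literature value)
area = 1e-4              # Schottky contact area, cm^2 (illustrative)

def doping_from_slope(slope_inv_c2_per_v):
    """Doping density N (cm^-3) from the slope of 1/C^2 [F^-2] vs V [V]."""
    return 2.0 / (q * eps_r * eps0 * area**2 * abs(slope_inv_c2_per_v))

# A slope of ~1.45e20 F^-2/V with these constants corresponds to N ~ 1e19 cm^-3,
# the order of magnitude reported for the n+ film.
n_estimate = doping_from_slope(1.4506e20)
```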
As shown in Figure 4a, current-voltage (I-V) measurements on individual diode structures had a rectifying effect due to the diode formed between the n + - and p-type epitaxial films. In order to measure turn-on and breakdown voltages and the reverse leakage current, the I-V plot for four diodes on the same wafer was measured. The averaged turn-on voltage for these four diodes was determined to be ~1.4 V, with an average leakage current of less than 8 µA rms. In addition, Figure 4a also contains a current-voltage curve, obtained from measurements on one of the IDEs, showing isolation between adjacent traces.
CV curves for four test microelectrodes of the same surface area (491 µm 2 ) in 7.4 pH PBS are shown in Figure 4b. The upper (+800 mV) and lower (−600 mV) boundaries for the potential were based on the electrochemical window for Pt in water. The shape of the hysteresis cycle showed that the anodic and cathodic currents were charge balanced, with no indication of faradaic current resulting from oxidation or reduction reactions between +800 mV and −600 mV. However, the phase behavior of the electrode-electrolyte interface (Figure 4d) only supports a capacitive-dominant mechanism at higher frequencies (e.g., −61.2 ± 3.7° at 1 kHz), while at lower frequencies the phase indicates a faradaic current (e.g., −30.3 ± 4.9° at 100 Hz), which contrasts with earlier results from 4H-SiC microelectrodes [33]. The average anodic charge storage capacity (CSC) was 15.4 ± 1.46 mC/cm 2 (mean ± standard deviation) and the cathodic CSC was 15.2 ± 1.03 mC/cm 2 . The average anodic charge per phase was 75.4 ± 5.06 nC and the average cathodic charge per phase was 74.8 ± 5.06 nC.
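As a consistency check, the charge per phase should equal the charge storage capacity multiplied by the electrode area; with the 491 µm² area and the CSC values above, the unit conversion works out as follows.

```python
# Consistency check: charge per phase = CSC x electrode area.
# Numbers are taken from the text; only the unit conversion is added here.

area_um2 = 491.0                  # electrode surface area, µm^2
area_cm2 = area_um2 * 1e-8        # 1 µm^2 = 1e-8 cm^2

csc_anodic = 15.4e-3              # anodic CSC, C/cm^2 (15.4 mC/cm^2)
charge_anodic_nC = csc_anodic * area_cm2 * 1e9   # C -> nC

# ~75.6 nC, consistent (within rounding) with the reported 75.4 nC per phase.
```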
Figure 4c,d show the EIS results for the same four test microelectrodes. As expected, the impedance magnitude was found to increase with decreasing frequency. At a frequency of 1 kHz, the impedance was 165 ± 14.7 kΩ (mean ± standard deviation). The electrode-electrolyte interface was determined to be predominately capacitive, as indicated by the negative phase angles for higher frequencies (i.e., >1 kHz).
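The frequency dependence of the phase (strongly capacitive near 1 kHz, less so at lower frequency) is qualitatively what a simple Randles-type interface model predicts: a charge-transfer resistance in parallel with the double-layer capacitance, in series with the solution resistance. The element values in this sketch are illustrative, not fits to the measured electrodes.

```python
import cmath
import math

# Randles-type electrode-electrolyte model: solution resistance Rs in series
# with (charge-transfer resistance Rct in parallel with double-layer
# capacitance C). Element values are illustrative, not fitted to the data.

Rs, Rct, C = 50e3, 2e6, 1e-9   # ohm, ohm, farad

def phase_deg(freq_hz):
    """Impedance phase (degrees) of Rs + Rct || C at the given frequency."""
    w = 2.0 * math.pi * freq_hz
    z = Rs + 1.0 / (1.0 / Rct + 1j * w * C)
    return math.degrees(cmath.phase(z))

# With these values the phase is ~-68 deg at 1 kHz but only ~-50 deg at 100 Hz:
# the parallel faradaic (Rct) path makes the interface look less capacitive at
# low frequency, the same trend as the measured -61 deg vs -30 deg.
```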
Discussion
A nearly monolithic SiC neural probe has been fabricated from epitaxial 3C-SiC films grown on SOI wafers. A combination of ethylene (C 2 H 4 ) and trichlorosilane (SiHCl 3 ) was used as precursor gasses in the epitaxial process. This produced a varying surface morphology, with mean surface roughness ranging from ~21 nm (specular, edge region) to ~244 nm (rough, center region) [43]. It is possible this surface roughness contributed to complications in the fabrication process, such as with photolithographic patterning, and may have had an effect on the mechanical properties of the grown films to the detriment of probe function [44]. It is suspected that this contributed to the higher than desired leakage current (less than 8 µA rms ). By optimizing various parameters in the epitaxial process, such as gas composition and flow rates, temperature, and pressure [35,43], the process can be improved to reduce this surface roughness. Additionally, post-processing steps, such as mechanical or chemo-mechanical polishing, can be added to further improve the surface morphology, particularly to reduce surface roughness [19].
A major issue with the previous 4H-SiC probes was the difficulty in releasing the probe [33]. Essentially, much of the 4H-SiC substrate would have to be removed, and there was no effective etch stop to prevent over-etching. In order to effectively solve this issue, we used SOI wafers to provide an effective release layer by the simple process of wet etching the oxide. However, the SOI wafer used here possessed a relatively thick layer of silicon that remained on the backside of the probes, which was removed later via back-thinning using DRIE. This thick silicon layer can cause residual stress, due to mismatches in the coefficients of thermal expansion and lattice parameters [45] at the interface between the SiC films and silicon, resulting in bowing or bending of the probes. The SOI wafer used in this work had a ~20 µm thick top silicon layer and this may have been the cause of the bowing of the shank and some warping in the connector tab. The shank should be straight for a proper insertion trajectory into the neural tissue. Also, in order to maximize contact at the connector interface, the tab containing the contact pads should be as planar as possible. Using an SOI wafer with a thin silicon device layer may resolve this deformation problem and will be used in future all-SiC devices.
Epitaxial 3C-SiC thin films are ceramic-like materials with, relative to neural tissue, a high elastic modulus, measured to be 424 ± 44 GPa using microsample tensile testing [46] and 433 ± 50 GPa using nanoindentation [47]. Defects can reduce the Young's modulus of 3C-SiC [48], and doping may affect this value as well [49]. There is a trend towards utilizing softer materials, such as polymers, for implantable neural interfaces due to their potential to improve the interaction with neural tissue [50][51][52]. By decreasing the Young's modulus of the neural probe closer to the values of neural tissues, the harmful shear and normal stress applied from the shank to the tissue should decrease. However, it is really device stiffness, which includes cross-sectional area, rather than just device modulus, that seems to matter the most [53]. Additionally, use of these softer materials introduces challenges with fabrication processes and scaling to higher channel-count systems, particularly with respect to interconnects, and can lead to insertion difficulties. Once implanted, these materials face challenges with material stability and device reliability [54]. The hard, chemically inert nature, and ease of micromachining with traditional silicon processes means SiC neural probes may suffer less from these limitations. Clearly, long-term in vivo studies in an animal model are needed to assess the performance of the all-SiC INI and are planned.
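The modulus-versus-stiffness distinction above can be made concrete with the standard bending-stiffness formula for a rectangular cantilever, k = 3EI/L³ with I = wt³/12. The shank dimensions below are illustrative, not the actual probe geometry; only the SiC modulus (~430 GPa) comes from the text.

```python
# Bending stiffness of a rectangular cantilever: k = 3*E*I/L^3, I = w*t^3/12.
# Shank dimensions are illustrative; only the SiC modulus is from the text.

def cantilever_stiffness(E_pa, width_m, thick_m, length_m):
    """Tip stiffness (N/m) of a rectangular cantilever of modulus E_pa (Pa)."""
    I = width_m * thick_m**3 / 12.0        # second moment of area, m^4
    return 3.0 * E_pa * I / length_m**3

# Same modulus (~430 GPa SiC), two thicknesses: stiffness scales linearly with
# E but with the CUBE of thickness, so geometry dominates device stiffness.
k_10um = cantilever_stiffness(430e9, 100e-6, 10e-6, 3e-3)   # ~0.40 N/m
k_5um = cantilever_stiffness(430e9, 100e-6, 5e-6, 3e-3)     # 8x softer
```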
It has been demonstrated that once the implanted structure size is reduced to subcellular scale, i.e., less than ~10 µm, the foreign body response and associated neuron death is greatly reduced in a rat model [55,56]. With traditional silicon probes, reducing size increases the occurrence of probe fracture at high stress regions [57]. SiC is a much more robust material, with a reduced tendency to fracture at these desired smaller sizes, while maintaining the mechanical strength needed for proper penetration of neural tissue [24,27]. Therefore, SiC is an excellent material for developing a high electrode density neural interface, allowing for further reduction in size while greatly minimizing risk of fracture.
The heterogeneous composition of implanted neural interfaces that utilize metallic materials as electrode sites or conductive traces may increase the risk of delamination in chronic implantation, specifically at regions under higher stress [57]. Delamination usually occurs at the interface between metal and semiconductor materials due to residual stress in the thin films. A homogeneous material composition eliminates this residual stress and, by removing the interfaces between dissimilar materials in the probe, reduces the risk of delamination.
3C-SiC is a wide-bandgap semiconductor with a bandgap energy of ~2.2 eV. This results in a higher turn-on voltage at the junction between n + - and p-type SiC. This higher turn-on voltage provides a wider voltage range to stimulate neurons while isolating individual channels via n-p-n junctions, supporting simultaneous multichannel microstimulation and recording, as might be necessary for implementing a closed-loop system. The turn-on voltage for p-n junctions built from Si is ~0.7 V, which is low compared to ~1.4 V for SiC, and limits proper isolation via an n-p-n junction configuration. However, the higher leakage current in our all-SiC films may negatively affect the final device's functionality. Surface roughness is known to be associated with the density of crystal defects, thus a higher defect density may cause higher leakage current [43]. It is believed that the high surface roughness in this work, an indication of poor crystallinity, in conjunction with a high number of defects, may be the cause of the observed high leakage current. For reference, in our 4H-SiC devices with specular surface morphology, the leakage current was in the nA range versus the µA range reported here [33]. A lower surface roughness via an optimized epitaxial growth process would be expected to improve both the mechanical properties and the leakage current [58].
The EIS results revealed that the doped, semi-metallic 3C-SiC conductors have impedance values approaching those of metals commonly used in implantable microelectrodes, such as gold, platinum, or tungsten, as well as highly doped polysilicon [59,60]. The average impedance for a surface area of 491 µm 2 was approximately 75% lower (165 kΩ vs. 675 kΩ at 1 kHz) than previously reported for our 4H-SiC electrodes [33].
Both the charge balanced CV cycles and the negative phase angles from EIS measurements support a dominant capacitive charge transfer mechanism for 1 kHz and higher frequencies at the electrode-electrolyte interface, but faradaic currents may be present at lower frequencies. This differs from capacitive electrode materials like titanium nitride (TiN) [61], which has a phase closer to −90° at lower frequencies [62]. Compared to values previously reported for 4H-SiC, the charge values calculated from CV measurements reported here were approximately two orders of magnitude higher, with average charge storage capacities of (anodic: 15 mC/cm 2 vs. 0.41 mC/cm 2 ; cathodic: 15 mC/cm 2 vs. 0.19 mC/cm 2 ) and average charges per phase of (anodic: 75 nC vs. 2.0 nC; cathodic: 75 nC vs. 1.0 nC) using a Pt electrochemical window (−600 mV to +800 mV). It is possible that the greater surface roughness accounts for this large difference in electrochemical properties. It is also possible that there were more faradaic reactions at lower frequencies leading to more oxidation and reduction at the surface, which may be linked to defect sites in the SiC.
Current neural probe technology built from materials like silicon suffers from long-term reliability issues that reduce its lifetime considerably, resulting in loss of recording and microstimulation function when chronically implanted. This limits its use in medical applications for humans. Device-based modalities could become a more common alternative to pharmaceuticals for treatment of neurological trauma or disease if the issue of long-term reliability in implantable neural interfaces is properly addressed. After further refinement of the design and optimization of the material processing, the performance of the all-SiC neural probe will be evaluated with chronic in vivo experiments in rodent models to investigate its long-term safety and effectiveness in neural tissue. There is accumulating evidence [25][26][27]29,30,32,63] that SiC could be an appropriate material for the greatly needed implantable neural interface that functions for the lifetime of the recipient.
Conclusions
The fabrication and initial electrical characterization of an all-SiC neural probe is presented. The SiC neural probe was fabricated from p- and n + -type 3C-SiC epilayers grown on SOI wafers. First, a moderately p-type 3C-SiC film was grown on a SOI wafer, followed by a layer of n + -type 3C-SiC. The surface morphology of the top n + epilayer was measured. Neural probes with sixteen traces, electrode sites, and other test structures were patterned on the 3C-SiC epilayers via MEMS microfabrication processes. Metallic traces were absent from the shank of the probe, and instead a semi-metallic n + layer was formed into traces and electrode sites. A thin layer of a-SiC film was deposited on top of the epilayers to serve as an insulator. The probes were harvested using dissolution of the buried oxide layer in the SOI handle wafer to provide ease of manufacture. The backside silicon layer remaining after release of the probes was removed via back-thinning in a DRIE. Adjacent traces were electrically isolated through an n-p-n junction. After completion of device fabrication, the performance of the n-p-n junctions was evaluated through current-voltage measurements and the turn-on voltage was determined to be ~1.4 V. Electrical measurements showed satisfactory p-n junction performance, but leakage current needs to be improved via higher quality 3C-SiC epitaxial films. In addition, initial electrochemical characterization work with 491 µm 2 surface area test microelectrodes demonstrated good impedance, charge storage capacity, and charge per phase values. These results support the feasibility of neural stimulation and recording with the fabricated all-SiC neural probe. However, further studies are necessary to demonstrate the acute recording and stimulation capability and chronic stability of the fabricated SiC neural probes, and, consequently, in vitro accelerated aging and in vivo studies in a rodent model are planned and will be reported in the future.
Author Contributions: M.B. conducted all device fabrication steps except for a-SiC deposition. He also performed dry electrical testing of the p-n and n-p-n structures and co-wrote the manuscript. J.T.B. was responsible for electrochemical characterization, performed all EIS and CV measurements, and co-wrote the manuscript. C.A.K. carried out the a-SiC deposition and provided XPS analysis of the films on blank companion samples. C.L.F. and S.E.S. developed the all-SiC INI concept, device design, and provided technical consulting to the fabrication team. E.J.B. initiated the 3C-SiC INI device fabrication at USF prior to graduating with his doctorate in 2018 and provided technical consultation to the fabrication team on this work. F.L.V. is an expert on 3C-SiC epitaxial growth and provided the epitaxial wafers used in this work along with technical consultation on the device characterization performed. A.T. is an expert on electrochemistry and provided technical consultation on the EIS and CV measurements and analysis. All authors reviewed and edited the final draft of the manuscript.
Funding: Funding at the University of South Florida was provided via teaching assistantships (M. Beygi and J. Bentley) and from Dr. Saddow's overhead funds to purchase consumable items.
TGF-β1 promotes cerebral cortex radial glia-astrocyte differentiation in vivo
The major neural stem cell population in the developing cerebral cortex is composed of the radial glial cells, which generate glial cells and neurons. The mechanisms that modulate the maintenance of the radial glia (RG) stem cell phenotype, or its differentiation, are not yet completely understood. We previously demonstrated that the transforming growth factor-β1 (TGF-β1) promotes RG differentiation into astrocytes in vitro (Glia 2007; 55:1023-33) through activation of multiple canonical and non-canonical signaling pathways (Dev Neurosci 2012; 34:68-81). However, it remains unknown if TGF-β1 acts in RG-astrocyte differentiation in vivo. Here, we addressed the astrogliogenesis induced by TGF-β1 using an in utero intraventricular injection approach. We show that injection of TGF-β1 into the lateral ventricles of E14.5 mouse embryos resulted in RG fiber disorganization and premature gliogenesis, evidenced by the appearance of GFAP-positive cells in the cortical wall. These events were followed by decreased numbers of neurons in the cortical plate (CP). We also describe that TGF-β1 actions are region-dependent, since RG cells from the dorsal region of the cerebral cortex were more responsive to this cytokine than RG from the lateral cortex, both in vitro and in vivo. Our work demonstrates that TGF-β1 is a critical cytokine that regulates RG fate decision and differentiation into astrocytes in vitro and in vivo. We also suggest that RG cells are a heterogeneous population whose subsets act as distinct targets of TGF-β1 during cerebral cortex development.
INTRODUCTION
Radial glia (RG) cells are considered the major progenitor cell population present in the developing cerebral cortex (Kriegstein and Alvarez-Buylla, 2009). These cells have a long radial fiber that elongates from the cell body, in the ventricular zone (VZ), through the entire developing cortical wall. During the initial steps of brain development, RG cells, which are derived from the neuroepithelium, are actively proliferative cells. RG-astrocyte differentiation is a well-recognized event; however, the mechanisms and molecules that control generation of different pools of astrocytes and neurons are still elusive. Several lines of evidence suggest that increasing neuronal pools play an essential role in the control of RG maintenance and/or differentiation (Hunter and Hatten, 1995;Anton et al., 1997;Nakashima et al., 1999;Mi et al., 2001;Takizawa et al., 2001;Uemura et al., 2002;Patten et al., 2003;Schmid et al., 2003;Nishino et al., 2004;Barnabé-Heider et al., 2005;He et al., 2005;Stipursky and Gomes, 2007;Stipursky et al., 2012a).
Although several soluble factors have been demonstrated to control astrocytogenesis during CNS development, such as members of the interleukin-6 (IL-6) family, including CNTF, leukemia inhibitory factor (LIF), and cardiotrophin-1 (CT-1) (for review, see Stipursky et al., 2009), the role of neuronal-derived soluble factors in RG-astrocyte transformation is still poorly known.
Although the presence of different isoforms of TGF-β molecules has already been described in the proliferative zones of the embryonic cerebral cortex (Mecha et al., 2008), there are few data regarding the expression, modulation, and distribution of TGF-β receptors in RG cells in vivo. Further, the mechanisms that modulate the neurogenesis-to-gliogenesis switch of RG induced by TGF-β1 are still unknown.
Here, we investigated the role of TGF-β1 on RG-astrocyte switch in the developing cerebral cortex and the implications of RG heterogeneity to this event. We showed that TGF-β1 induces premature gliogenesis and disrupts RG polarity mainly in the dorsomedial area of the cerebral cortex. For the first time, we provide evidence that specific RG subpopulations distinctly respond to TGF-β1 in vivo.
ETHICAL APPROVAL
All animal protocols were approved by the Animal Research Committee of the Federal University of Rio de Janeiro (DAHEICB024).
IN UTERO INTRAVENTRICULAR INJECTION
In utero intraventricular injections of E14 mouse embryos were performed as described by Walantus et al. (2007). Pregnant Swiss mice on gestational day 14 were anesthetized with an intraperitoneal injection of 2,2,2-tribromoethanol (Sigma Aldrich), 1 mg/g of body weight. After anesthesia, females were subjected to a surgical procedure in which the uterus was exposed. After visualization of the embryos, they were manually positioned to allow observation of the brain hemispheres. Each embryo was subjected to intraventricular injection into the lateral ventricles of 2 µl of control solution (PBS, 0.05% BSA, 0.025% Fast Green [Sigma Aldrich]), or solution containing 100 ng of TGF-β1 (R&D Systems) or 10 µM of SB431542 (Sigma Aldrich), using glass micropipettes. After the injections, the uterus was repositioned inside the abdominal cavity and the abdominal muscle and skin layers were sutured. Bromodeoxyuridine (BrdU, Sigma Aldrich) was injected intraperitoneally into the pregnant mouse 2 and 24 h after surgery, to follow cells generated from RG just after TGF-β1 stimulation and to analyze long-lasting effects in the RG population. Forty-eight hours after surgery, the female was sacrificed and the embryos were perfused with ice-cold 4% paraformaldehyde (PFA). Brains were collected and processed for immunohistochemistry and real-time RT-PCR.
REAL TIME RT-PCR
Total RNA was isolated from embryonic mouse cerebral cortex using Direct-zol™ RNA MiniPrep (Zymo Research, USA) according to the protocol provided by the manufacturer, and quantified using a NanoDrop ND-1000 Spectrophotometer (Thermo Fisher Scientific, USA). Two micrograms of total RNA were reverse transcribed with the RevertAid First Strand cDNA Synthesis Kit according to the manufacturer's instructions (Thermo Fisher Scientific, USA). Sense and antisense primers specific for the FoxG1 and β-actin genes were used. β-actin sense: TGG ATC GGT TCC ATC CTG G, antisense: GCA GCT CAG TAA CAG TCC GCC TAG A; FoxG1 sense: CGA CAA GAA GAA CGG CAA GTA CGA, antisense: AGC ACT TGT TGA GGG ACA GGT TGT. Sequences were verified to be specific using GenBank's BLAST (Altschul et al., 1997). Quantitative real-time RT-PCR was performed using Maxima SYBR Green qPCR Master Mix (Thermo Scientific, USA). Reactions were performed on an ABI PRISM 7500 Real-Time PCR System (Applied Biosystems). The relative expression levels of genes were calculated using the 2^−ΔΔCT method (Livak and Schmittgen, 2001). The amount of target gene expressed in a sample was normalized to the average of the endogenous control.
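The 2^−ΔΔCT relative-quantification calculation cited above can be sketched in a few lines. All Ct values below are hypothetical illustrations, not data from this study:

```python
# Minimal sketch of the 2^-ddCt relative expression method (Livak & Schmittgen, 2001).
# All Ct values below are hypothetical, not taken from this study.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Return fold change of a target gene vs. a control condition,
    normalized to an endogenous reference gene (e.g. beta-actin)."""
    d_ct_sample = ct_target - ct_reference              # normalize the sample
    d_ct_control = ct_target_ctrl - ct_reference_ctrl  # normalize the control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: the target Ct rises by 2 cycles relative to beta-actin after treatment,
# i.e. ddCt = +2, giving a fold change of 2^-2 = 0.25 (a fourfold reduction).
fold = relative_expression(ct_target=26.0, ct_reference=18.0,
                           ct_target_ctrl=24.0, ct_reference_ctrl=18.0)
print(fold)  # 0.25
```

For scale, the ~80% FoxG1 reduction reported later in the paper corresponds to a fold change of 0.2, i.e. ΔΔCT ≈ 2.3 cycles.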
STATISTICAL ANALYSIS
Statistical analyses were performed using one-way non-parametric ANOVA coupled with the Tukey post-test in GraphPad Prism 4.0 software, and P < 0.05 was considered statistically significant. Experiments were performed in triplicate, and each result represents the mean of at least 4-6 animals analyzed.
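The non-parametric one-way comparison described above can be sketched in pure Python via the Kruskal-Wallis H statistic, the usual non-parametric analog of one-way ANOVA; the authors used GraphPad Prism 4.0, and all cell-count values below are invented for illustration:

```python
# Pure-Python sketch of the Kruskal-Wallis H test (non-parametric one-way ANOVA),
# as a stand-in for the analysis described above (the authors used GraphPad Prism).
# All data values are hypothetical, not from this study.

def average_ranks(values):
    """Rank all values (1-based), assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (without tie correction)."""
    pooled = [v for g in groups for v in g]
    r = average_ranks(pooled)
    n = len(pooled)
    h, start = 0.0, 0
    for g in groups:
        rank_sum = sum(r[start:start + len(g)])
        h += rank_sum ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

control = [10, 12, 11, 13, 12, 11]   # hypothetical GFAP+ cell counts per field
tgfb1   = [20, 22, 19, 24, 21, 23]
sb      = [11, 10, 12, 13, 11, 12]
print(round(kruskal_h(control, tgfb1, sb), 2))
```

The resulting H is compared against a chi-squared distribution with (number of groups − 1) degrees of freedom; identical groups give H = 0.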
RG CELLS ARE POTENTIAL TARGETS OF TGF-β1 IN VIVO
In order to investigate RG cell responsiveness to TGF-β1, we first identified the TGF-β receptor type II (TGFRII) in RG cells in vitro and in vivo. To do so, we isolated RG from neurospheres derived from the cerebral cortex of E14 mouse embryos. Under this culture condition, these cells present a typical radial morphology and label for Nestin, BLBP, Notch1 and ErbB2 (Figures 1A-C), attesting to their RG phenotype. We also detected strong staining for TGFRII in their membranes (Figures 1D-F). Treatment of RG cultures with TGF-β1 induced phosphorylation and nuclear translocation of Smad2/3, a hallmark of TGF-β1 signaling pathway activation (Figures 1G-J).
Immunohistochemical assays of the mouse brain revealed that TGFRII is more robustly expressed in the VZ (ventricular zone) and CP (cortical plate) at E14, and in the same layers as well as in the SVZ (subventricular zone) of E18 and P0 mouse cerebral cortex (Figures 1K-M, k-m). We identified punctate TGFRII staining in RG cell bodies and processes in the E14 telencephalon (Figures 1N,n). Western blotting analysis revealed that TGFRII is negatively modulated during development, since this protein is present at high levels in the E14 telencephalon, is barely detectable at E18 and tends to disappear at P0 (Figure 1O). The downregulation of TGFRII parallels the amount of phospho-Smad2 at P0 (Figure 1P). Together, these data suggest that RG cells might be targets of TGF-β1 actions in vitro, as well as in vivo.
INTRAVENTRICULAR INJECTION OF TGF-β1 DISRUPTS POLARITY OF RG CELLS
The elongated morphology of RG cells is a critical characteristic that allows neuronal migration and correct positioning in the CP within the different layers of the cerebral cortex (Rakic, 1971, 1995; Hatten, 1999; Yokota et al., 2007; Radakovits et al., 2009). Loss of this typical morphology is a hallmark of RG-astrocyte differentiation. Intraventricular injection of TGF-β1 resulted in profound morphological alterations, especially in the telencephalon, resulting in dilated lateral ventricles and an evident reduction of cortical wall thickness in the dorsomedial (DMc) and lateral (Lc) areas of the cortex (Figures 2A-E). We also observed reduced VZ thickness in TGF-β1-injected brains compared with vehicle-injected brains (Figure 2F). This thickness reduction was observed along the rostral-caudal extent of the cerebral cortex (data not shown). Interestingly, these morphological alterations were accompanied by severe disorganization of nestin-labeled RG fibers in TGF-β1-injected brains (Figures 2G,J). This disorganization was characterized by loss of polarity of the radial processes and was more prominent in the DMc than in the Lc area of the cortex (Figures 2K-N).
In addition to RG fiber displacement, TGF-β1 also promoted an increase of approximately 98% in BLBP-labeled cells with a morphology similar to glial progenitors, midway along their differentiation path, which we called RG intermediate differentiation morphology (IDM; Figures 2O-Q). Injection of SB431542, a pharmacological inhibitor of the TGF-β1 signaling pathway, did not affect the generation of BLBP+ IDM cells (Figure 2Q). We also observed that TGF-β1 caused ectopic laminin distribution in the pial region of the cortical wall (Figures 2R,S). These phenotypes were also associated with increased numbers of pH3+ cells in the SVZ, but not in the VZ (Figures 2T-Z). In addition, RG fiber disorganization was accompanied by displacement of pH3+ cells at the VZ, leading to ectopic positioning of the nuclei of these proliferative cells (Figures 2T,W).
These data show that TGF-β1 regulates cerebral cortex thickness, RG morphology and polarity, and progenitor positioning, and suggest that these events might be associated with regulation of basal lamina structure, an issue clearly related to RG cell polarity.
TGF-β1 PROMOTES PREMATURE GLIOGENESIS IN DORSOMEDIAL (DMc) AREA OF THE CEREBRAL CORTEX
We previously demonstrated that TGF-β1 controls RG differentiation into astrocytes and neurons through distinct signaling pathways in vitro (Stipursky et al., 2012a). In order to assess the fate of RG under the influence of TGF-β1 in vivo, we took advantage of the in utero intraventricular injection technique. Injection of TGF-β1 into the lateral ventricles of mouse embryos also caused robust premature astrocyte generation (Figure 3). In the telencephalon, TGF-β1 injection caused the appearance of GFAP+ cells in distinct regions compared with vehicle-injected brains (Figures 3A,B), such as the cingulate cortex (2*), the neuroepithelium related to the third ventricle associated with the ventral diencephalic sulcus (3*), and the pial region of the preoptic area (4*). In the hippocampal neuroepithelium, there was no difference in GFAP labeling pattern between control and TGF-β1-injected brains (1*).
Among these regions, astrocytogenesis was most evident in the DMc area of the cerebral cortex (cingulate cortex). The appearance of GFAP+ cells bearing a still radial-like morphology in this area (Figures 3C-F) suggests that TGF-β1 induced RG cells to adopt an astrocyte phenotype.
Astrocyte differentiation was significantly increased by TGF-β1 in the DMc area compared with the lateral area of the cerebral cortex (15-fold; Figure 3G). Injection of SB431542, a pharmacological inhibitor of the TGF-β receptor, did not affect gliogenesis in this area (Figure 3G).
In order to confirm the specificity of TGF-β1 actions in different cortical areas, we generated cultures of RG cells isolated from the DMc and Lc areas and from total cortex (Tc). We observed that DMc cells were more responsive to TGF-β1 astrocytogenic induction than Lc cells. The number of GFAP+ cells increased fivefold in DMc cells treated with TGF-β1, but only threefold in Lc cells. For Tc cells, the increase in GFAP+ cell numbers was comparable to that found in the DMc-treated condition (Figure 3H).
Thus, RG from different cerebral cortex areas respond to TGF-β1 by acquiring the astrocytic phenotype.
TGF-β1 AFFECTS NEUROGENESIS AND NEURONAL POSITIONING IN CORTICAL PLATE
Neurogenesis and neuronal migration occur during specific time windows in the developing cerebral cortex; both events depend directly on the stem cell and scaffold properties of RG cells, respectively (Rakic, 1971; Costa et al., 2010; Vogel et al., 2010; Sild and Ruthazer, 2011; Stipursky et al., 2012b). We previously described that, like astrocytogenesis, neurogenesis can be controlled by TGF-β1 through activation of canonical and non-canonical signaling pathways, respectively (Stipursky et al., 2012a). Although neurogenesis was reported to involve TGF-β1 action in vitro (Vogel et al., 2010), it is not known whether this factor controls RG neurogenic potential in vivo. To address this question, we performed intraventricular injection of TGF-β1.
TGF-β1 also affected neuronal generation and placement in the CP of the Lc. Interestingly, numerous βTubulinIII+ cells were present in the VZ of TGF-β1-injected brains, accounting for a 66% increase (Figures 4A-C), thus suggesting enhanced neurogenesis in this layer enriched in RG cell bodies. Pharmacological inhibition of the TGF-β1 signaling pathway by SB431542 injection yielded an even greater enhancement of βTubulinIII+ cell numbers in the VZ, compared to the control condition. To assess whether this increment was due to the generation of new neurons, we labeled the cells for BrdU and Doublecortin, which mark recently generated neurons derived from RG cells that migrated through the cortical wall and reached their final destination in the CP (Pramparo et al., 2010). We observed a 55% decrease in the number of BrdU+ cells in the Lc CP of TGF-β1-injected brains (Figures 4D-H), thus demonstrating that both neuronal migration and positioning are modulated by TGF-β1 in vivo.
TGF-β1 CONTROLS THE EXPRESSION OF FoxG1 IN DIFFERENT CORTICAL AREAS
Differences between the distinct regions of the brain are mainly generated during development by the controlled, axis patterning-related distribution of morphogens. Cerebral cortex arealization, or patterning, is controlled by the expression of a large repertoire of transcription factors that define neural stem cell and progenitor generation, self-renewal and phenotypes. Those factors, such as FoxG1, are modulated by diverse morphogenetic proteins distinctly distributed in different patterning centers (Takahashi and Liu, 2006; O'Leary and Sahara, 2008).
Quantitative analyses by real-time RT-PCR of DMc and Lc tissues revealed that TGF-β1 distinctly modulated the levels of FoxG1 mRNA in these regions. Whereas TGF-β1 reduced the expression level of FoxG1 in the DMc by 80%, it had no effect in the Lc (Figure 5). These results suggest that TGF-β1 controls the expression of a transcription factor related to cortical arealization in vivo.

FIGURE 3 | TGF-β1 promotes premature gliogenesis in the cerebral cortex. Intraventricular injection of TGF-β1 in mouse embryos (injection at E14 and analysis at E16) caused premature appearance of GFAP+ cells (green) in different telencephalon regions: dorsomedial cortex/cingulate cortex (2*), neuroepithelium related to the third ventricle (3*) and pial surface of the preoptic area (4*). At the hippocampal formation (1*), GFAP labeling was not affected. TGF-β1-induced gliogenesis was more evident at the dorsomedial area of the cerebral cortex (DMc) than in the lateral cortex (Lc) (C-G). Note the GFAP+ (green) radial fibers of differentiating cells (arrows, F). In isolated radial glia (RG) cultures, TGF-β1 also promoted the appearance of GFAP+ cells to a greater extent in DMc than in Lc and total cortex (Tc) (H). ***P < 0.0005, *P < 0.005. Scale bars: 500 µm (A,B), 50 µm (C,H). Cp: cortical plate, Vz: ventricular zone.
DISCUSSION
In this study, we provide evidence for the role of TGF-β1 as a modulator of RG-astrocyte differentiation in vivo. Our data are pioneering in two respects: (1) in demonstrating TGF-β1 action on radial glia-astrocyte differentiation in vivo; and (2) in showing distinct effects of TGF-β1 on different subpopulations of RG cells.
First, we demonstrated that RG cells express the TGF-β receptor and activate the Smad pathway in response to TGF-β1. Then, we demonstrated that TGF-β1 disrupts the polarized morphology of RG cells and promotes premature astrocytogenesis and neuronal displacement in specific areas of the cerebral cortex. Our findings show that RG cells are potential targets of the TGF-β signaling pathway and suggest that these effects are region-dependent. Our data not only contribute to the understanding of the mechanisms underlying fate decision and specific phenotype acquisition in the cerebral cortex, but also support the hypothesis of the existence of distinct RG subpopulations with different potentials in the cerebral cortex.
RG CELLS AS POTENTIAL TARGETS OF TGF-β1 IN VIVO : IMPACT ON RG POLARITY AND ASTROCYTIC DIFFERENTIATION
Evidence suggests that VZ cells are direct targets of different TGF-β family members (Miller, 2003; Mecha et al., 2008); however, the cellular pattern of expression of TGF-β1 signaling pathway members in the developing CNS has not been well characterized. Here, we have shown TGFRII expression in the developing telencephalon, specifically in the VZ/SVZ of the cerebral cortex. Additionally, we precisely identified its distribution in RG soma and fibers, an issue only previously suggested by other authors (Miller, 2003). Moreover, the levels of TGFRII and of one of its downstream effectors, phosphorylated Smad2, seem to be negatively modulated through development. These results are corroborated by previous data showing TGF-β1 and Smad2/3 protein expression in different CNS regions, including cerebral cortex VZ, neurons and progenitor layers in vivo (Miller, 2003; Sousa Vde et al., 2004; Mecha et al., 2008; Powrozek and Miller, 2009). In addition, our data are in accordance with previous reports demonstrating that TGF-β signaling members are expressed at higher levels in early stages of telencephalon development, and that they are determinant for the generation of different cell types of the CNS and other regions (Luukko et al., 2001).

RG cell polarity and radial process extension are essential characteristics directly related to the maintenance of RG progenitor potential and of the scaffold property for neuronal migration (Rakic, 1971). RG differentiation into astrocytes involves disruption of this polarity and gradual acquisition of an immature astrocyte morphology (Voigt, 1989; Hartfuss et al., 2001). Here, we have shown that TGF-β1 induces specific disorganization of nestin-positive RG fibers and displacement of their cell nuclei labeled for pH3. Moreover, we observed the appearance of BLBP-positive cells bearing a morphology intermediate between RG and astrocytes throughout the cortical wall.
Several mechanisms have been proposed to control RG cell polarity and the correct positioning of migrating neurons, such as modulation of cytoskeleton molecules (Yokota et al., 2007, 2010; Weimer et al., 2009) and ECM signal transduction (Haubst et al., 2006; Voss et al., 2008). Here we observed that disruption of RG polarity induced by TGF-β1 is followed by impaired organization of the basal membrane that covers the pial surface of the telencephalon, where RG cells attach their pial process endfeet (Götz and Huttner, 2005). Laminin labeling revealed an ectopic distribution pattern of this protein in the pial region of the cerebral cortex, associated with deficiencies in CP formation and displaced cell bodies. Our data are supported by previous results showing that TGF-β1 is a potent regulator of the synthesis of laminin, fibronectin, the adhesion protein NCAM, and integrins (Brionne et al., 2003; Siegenthaler and Miller, 2004; Gomes et al., 2005). Further, similar phenotypes were found in mice mutant for the C3G protein, a guanine nucleotide exchange factor for small GTPases of the Ras family, and also in laminin γ1III4 mutants (Haubst et al., 2006; Voss et al., 2008). In these mice, a robust loss of radial cell polarity, disruption of the basal membrane, and deficits in neuronal migration and CP formation are observed. Thus, although we cannot fully rule out additional mechanisms, our data strongly suggest an association between TGF-β1 control of laminin organization and maintenance of RG polarity.
TGF-β1 PROMOTES PREMATURE GLIOGENESIS IN DORSOMEDIAL AREA OF THE CEREBRAL CORTEX: IMPLICATIONS FOR RG HETEROGENEITY
In rodents, by the end of gestation, RG-astrocyte differentiation is characterized, among several molecular mechanisms, by the replacement of RG markers, such as BLBP and nestin, by astrocytic markers such as GFAP, the glutamate transporter GLAST and the calcium-binding protein S100β (Dahl, 1981; Pixley and de Vellis, 1984). The correct timing of RG-astrocyte transformation is a crucial step to ensure the correct number of neurons and cerebral cortex lamination. Here, we report that activation of the TGF-β1 pathway led to a premature appearance of GFAP+ cells in different regions of the embryonic telencephalon, mainly in the cingulate cortex, the neuroepithelium related to the third ventricle, and the pial region of the preoptic area. Although the expression of TGF-β isoforms and their different roles in these regions have been reported (Bouret et al., 2004; Dobolyi and Palkovits, 2008; Srivastava et al., 2014), the role of TGF-β1 in the dorsomedial area of the cerebral cortex, the cingulate cortex, specifically in astrocyte differentiation, is poorly known.
The event reported here was region-dependent, since in the DMc area the appearance of GFAP+ cells and the disruption of RG processes were more robust than in the Lc area. This observation might be related to two alternatives: (1) distinct responsiveness of different brain regions to TGF-β1; (2) heterogeneity of radial glial cells. The first possibility is supported by our previous report that the GFAP gene promoter from different brain regions responds distinctly to TGF-β1 (Sousa Vde et al., 2004). It is also possible that TGF-β1 exerts its actions controlling the size of a brain area (Falk et al., 2008) by acting on the different subpopulations of RG cells and other progenitors previously described to contribute to cell diversity in the CNS (Pinto and Götz, 2007; Stancik et al., 2010), and that this event accounts for diversity in responsiveness to TGF-β1. Whether this is due to different levels of TGF-β receptor or intracellular signaling molecules, or even to cell-autonomously defined potentials, remains to be determined.
Several molecules have been described to guarantee the maintenance of RG self-renewal, BLBP expression and morphological characteristics, such as proteins of the Neuregulin family and their ErbB receptors, and Notch1 (Gaiano and Fishell, 2002; Patten et al., 2003; Schmid et al., 2003; Yoon et al., 2004; Anthony et al., 2005; Ghashghaei et al., 2006, 2007). Thus, alterations of ErbB2 and Notch1 expression in RG cells could lead to premature astrocyte differentiation under TGF-β1 influence. This hypothesis is supported by reports that interaction between TGF-β1 signaling pathway proteins and radializing factors such as the Notch intracellular cleaved domain (NICD) and ErbB4 is necessary to regulate the expression of target genes in neural precursors (Blokzijl et al., 2003) and the correct timing of gliogenesis (Sardi et al., 2006). The exact mechanisms by which the TGF-β1 pathway controls RG-astrocyte differentiation in the dorsomedial area of the cerebral cortex will require further investigation.
We reported here that activation of the TGF-β1 signaling pathway in the cerebral cortex downregulates the expression of FoxG1 in the DMc area. FoxG1 is a member of the forkhead family of transcription factors, expressed by cells with high proliferation rates; it controls neurogenesis by maintaining the undifferentiated state of neural progenitors (Dou et al., 2000; Siegenthaler and Miller, 2008). In addition, FoxG1 is mainly expressed in lateral areas of the mouse cerebral cortex (Miller, 2003). Mutant mouse models for FoxG1 functions share several similarities with many of the phenotypes described here, including reduction of cortical thickness and of the layers of the dorsal area. For example, FoxG1 mutant mice present a reduced dorsal area and a pronounced increase in the telencephalic expression of BMPs, members of the TGF-β family (Takahashi and Liu, 2006). Further, FoxG1 was described as a potent inhibitor of TGF-β signaling due to its association with Smad proteins (Dou et al., 2000; Siegenthaler and Miller, 2008). Although TGF-β1 affects the DMc area more robustly, we also observed effects of this factor in the Lc, such as mild alterations of RG fiber morphology and induction of neurogenesis; it is possible that other transcription factors responsible for arealization of the cortex mediate TGF-β1 actions in the Lc (O'Leary and Sahara, 2008). Thus, it is possible that TGF-β1 controls the balance between gliogenesis and neurogenesis by modulating the expression and activation of different transcription factors in vivo. Since FoxG1 is a lateral transcription factor, a gliogenic inhibitor, and negatively regulates Smad signaling, it is possible that FoxG1 is a mediator of TGF-β1 signaling in the DMc.
Besides the role of TGF-β1 in the modulation of transcription factors at the transcriptional level, it is possible that lateral morphogen gradients exert an inhibitory action on medial ones. This correlates with our observation that the endogenous TGF-β signaling pathway might not be active or engaged in promoting astrocytogenesis at this developmental stage, since pharmacological inhibition of endogenous TGF-β signaling by SB431542 affected neither the RG morphological phenotype nor GFAP+ cell numbers. Although we have shown that TGF-β1 is a potent inducer of astrocyte differentiation (Stipursky and Gomes, 2007; Stipursky et al., 2012a), these data confirm that RG cells are mainly committed to promoting neurogenesis at this stage (Noctor et al., 2001).
TGF-β1 AFFECTS NEUROGENESIS AND NEURONAL POSITIONING IN THE CORTICAL PLATE
Injection of TGF-β1 decreased the number of BrdU+ cells in the developing CP of the lateral area of the cortex. This effect might be the consequence of neurogenesis and/or migration deficits. The latter hypothesis is more likely, since an increased number of βTubulinIII+ cells was observed in the VZ; and although in the present work we cannot completely guarantee the identity of the pH3+ cells in the SVZ, it is possible that these cells also contribute to the neurogenic effect promoted by TGF-β1.
The role of TGF-β1 in neurogenesis is controversial: whereas it has been shown to induce neurogenesis in the cerebral cortex during the embryonic stage and in the adult hippocampus (Vogel et al., 2010; Stipursky et al., 2012a; He et al., 2014), others have reported its action as a negative modulator of neurogenesis in the adult SVZ (Roussa et al., 2004; Wachs et al., 2006; Siegenthaler and Miller, 2008). Although TGF-β1 has been shown to induce radial neuronal migration in the cerebral cortex, its effect on RG cells had not been previously addressed (Siegenthaler and Miller, 2004). Here we suggest that although TGF-β1 promotes neuronal generation from RG cells, as we previously demonstrated in vitro (Stipursky et al., 2012a), the morphological alterations triggered in radial processes in the lateral area of the cortex, even to a lesser extent than in the DMc area, counteract this effect and prevent neuronal migration and the accurate establishment of these newly generated neurons in the CP.
It is interesting that pharmacological inhibition of the TGF-β1 signaling pathway by injection of SB431542 yielded an even greater increase of βTubulinIII+ cells in the VZ, compared with TGF-β1-injected brains. Although apparently contradictory, this result might indicate that the endogenous TGF-β signaling pathway is committed to controlling neuron generation in the cerebral cortex during the neurogenic stage of CNS development (Vogel et al., 2010). Further, it is possible that different levels of TGF-β signaling activation are critical to elicit positive or negative responses to this factor. Accordingly, it has been demonstrated that the opposite actions of TGF-β1 on neuronal migration are concentration-dependent (Siegenthaler and Miller, 2004).
Together, our results point to a new feature of TGF-β1 action in patterning the developing telencephalon. By acting on different RG populations, TGF-β1 promotes the generation of astrocytes and/or neurons in a region-dependent manner. Deficits in pathways that operate in RG physiology might generate dysfunctional cells, disorders of neuronal migration and premature astrocytogenesis, leading to diverse types of lamination defects in the developing cortex, such as those observed in lissencephaly and in the congenital abnormality cortical dysplasia. Identification and characterization of the mechanisms underlying RG maintenance and differentiation might contribute to the generation of therapeutic approaches for cell restocking in the CNS parenchyma.
Embryo production by intracytoplasmic injection of sperm retrieved from Meishan neonatal testicular tissue cryopreserved and grafted into nude mice
Abstract Testicular xenografting, combined with cryopreservation can assist conservation of the genetic diversity of indigenous pigs by salvaging germ cells from their neonatal testes. Using Meishan male piglets as an example, we examined whether testicular tissue would acquire the ability to produce sperm after cryopreservation and grafting into nude mice (MS group). For comparison, testicular tissue from neonatal Western crossbreed male piglets was used (WC group). Sixty days after xenografting (day 0 = grafting), MS grafts had already developed seminiferous tubules containing sperm, whereas in the WC grafts, sperm first appeared on day 120. The proportion of tubules containing spermatids and sperm was higher in the MS group than in the WC group between days 90 and 120. Moreover, in vitro‐matured porcine oocytes injected with a single sperm obtained from the MS group on day 180 developed to the blastocyst stage. The blastocyst formation rate after injection of the xenogeneic sperm was 14.6%, whereas the ratio in the absence of such injection (attributable to parthenogenesis) was 6.7%. Thus, cryopreserved Meishan testicular tissue acquired spermatogenic activity in host mice 60 days earlier than Western crossbreed tissue. Such xenogeneic sperm are likely capable of generating blastocysts in vitro.
Furthermore, the reproductive ability of offspring produced using such xenogeneic sperm has also been confirmed in pigs (Kaneko et al., 2014). Thus, testicular xenografting, combined with cryopreservation and ICSI, is an effective method of storing and utilizing germ cells from immature animals, unlike conventional reproductive methods using fully differentiated germ cells obtained from sexually mature animals. These techniques are therefore probably applicable to rescuing genetic information from valuable immature pigs that die quite young. Alternatively, systematic collection of neonatal testes might aid the conservation of indigenous pigs, whose populations are often small but which possess unique phenotypes, thus having the potential to maintain the genetic diversity of pig species. As an example of the former application, we have generated progeny using fetal testis obtained from cloned pigs harboring a disruption of the X chromosome-linked coagulation factor VIII (F8) gene (hemophilia-A pig): such female cloned pigs (F8+/−), despite having a recessive X-linked condition, died of severe bleeding at an early age, as was the case for male cloned pigs (F8−/Y) (Kashiwakura et al., 2012). However, the above studies used testicular tissue from breeds of Western origin (Abrishami et al., 2010; Caires et al., 2008; Honaramooz et al., 2002; Kaneko et al., 2013; Kaneko, Kikuchi, Men, et al., 2017). The Meishan pig is a Chinese indigenous breed noted for early sexual maturity (Lunstra, Ford, Klindt, & Wise, 1997) and high prolificacy (Haley & Lee, 1993). In Meishan boars, sperm appear in the lumina of seminiferous tubules by 60 days of age (Harayama, Nanjo, Kanda, & Kato, 1991; Lunstra et al., 1997), 60-90 days earlier than in Western breeds (4-5 months of age; FlorCruz & Lapwood, 1978; van Straaten & Wensing, 1977).
On the other hand, testis weight at onset of sperm production is lower in Meishan boars (40-60 g/ paired testis weight, Harayama et al., 1991;Lunstra et al., 1997) than in Western breeds (100-250 g/paired testis weight, FlorCruz & Lapwood, 1978;van Straaten & Wensing, 1977). Thus, testis growth and differentiation in indigenous pigs are not always similar to those in Western breeds, and few studies of testis transplantation using indigenous boars have been reported.
In the present study, as an example of an indigenous pig breed, we examined whether testicular tissue from Meishan piglets would exhibit complete spermatogenic activity after cryopreservation and grafting into nude mice. We then injected sperm recovered from the host mice into in vitro-matured porcine oocytes and assessed the competence of these oocytes to develop to the blastocyst stage.
| Experimental animals
All experiments were performed in accordance with protocols approved by the Animal Care Committee (# H18-008-02) of the Institute of Agrobiological Sciences, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan. Meishan (MS) and Western crossbreed (WC, Landrace × Large White × Duroc) pigs used in this study were produced and reared according to the Japanese Feeding Standard for Swine at the Institute of Livestock and Grassland Science of NARO. Male nude mice (Crlj:CD1-Foxn1 nu), 5-6 weeks old, purchased from Charles River Japan (Yokohama, Japan), were kept in an environmentally controlled room maintained at a temperature of 24°C and 70% humidity, and illuminated daily from 05:00 to 19:00.
| Chemicals
All chemicals were purchased from the Sigma-Aldrich Corporation (St. Louis, MO, USA), unless otherwise indicated.
| Xenografting of testicular tissue
MS-vitrified droplets after storage for between 2 and 20 months and WC-vitrified droplets stored for 5.4 years were transferred to a warming solution (BS supplemented with 0.4 mol/L trehalose) at 37°C for 2 min (Kaneko et al., 2013;Kaneko, Kikuchi, Men, et al., 2017;. The testicular fragments were consecutively transferred for 2-min periods to BS supplemented with 0.2, 0.1 and 0.05 mol/L trehalose to remove the cryoprotectants, and finally incubated in saline supplemented with antibiotics for at least 2 min at room temperature. Five fragments were randomly selected from those obtained from 5 individual MS donors and 5-8 fragments from 4 individual WC donors, and then fixed in Bouin's solution: these were used to obtain histological data for day 0 (day 0 = grafting).
Fifty-four male nude mice aged 5-6 weeks were assigned to receive testicular tissues obtained from MS (MS group, n = 28) and WC piglets (WC group, n = 26). All mice were anesthetized with isoflurane (Intervet, Tokyo, Japan), and then castrated. Immediately after castration, 23-25 testicular fragments were inserted under the skin of each mouse within 30 min after the end of warming (Kaneko et al., 2013;Kaneko, Kikuchi, Men, et al., 2017;.
| Recovery of grafts and sperm
Host mice in both groups were euthanatized by cervical dislocation under anesthesia with isoflurane on days 60, 90, 120 and 180.
All visible testicular grafts were immediately recovered from the host mice and placed in collection medium (Dulbecco's PBS; Nissui, Tokyo, Japan, supplemented with 5 mg/mL BSA) at 37°C and then weighed. Three pieces were excised from the different larger grafts in each mouse and fixed in Bouin's solution for histological examination. The remaining portions were cut into small pieces in the collection medium, and the presence of sperm released into the medium was recorded.
For ICSI, sperm were collected from two hosts in the MS group on day 180. The tissue suspension was centrifuged for 10 min at 600 × g, and the supernatant was discarded. After washing with the collection medium three times (Kaneko et al., 2013;Kaneko, Kikuchi, Men, et al., 2017;, the pellet was resuspended in a small volume of collection medium and maintained at room temperature until used for ICSI. Sperm obtained from each host mouse were used separately for ICSI.
| Oocyte maturation
Porcine cumulus-oocyte complexes (COCs) collected from ovaries at a local abattoir were matured in vitro in North Carolina State University (NCSU)-37 solution (Petters & Wells, 1993) with modifications (Kikuchi et al., 2002). After in vitro maturation culture for 44-46 hr, expanded COCs were denuded of their cumulus cells mechanically by gentle pipetting after brief treatment with 150 IU/ml hyaluronidase: oocytes showing extrusion of the first polar body were harvested as mature oocytes.
| ICSI and oocyte stimulation
ICSI was performed in accordance with our previous studies (Kaneko et al., 2013; Kaneko, Kikuchi, Men, et al., 2017; Men et al., 2016; Nakai et al., 2003, 2006, 2007, 2009, 2010). Approximately 20 in vitro-matured oocytes were transferred to a 20-μl drop of IVC-PyrLac supplemented with 20 mmol/L HEPES (IVC-PyrLac-HEPES) (Kaneko et al., 2013; Kaneko, Kikuchi, Men, et al., 2017; Men et al., 2016; Nakai et al., 2003, 2006, 2007, 2009, 2010) that had been placed on the cover of a plastic dish (Falcon 351005, Thermo Fisher Scientific). A small volume (0.5 μl) of the sperm suspension was transferred to a 2-μl drop of IVC-PyrLac-HEPES supplemented with 4% (w/v) PVP (Mr 360,000) that had been placed close to the drops containing the oocytes. These drops were covered with paraffin oil (Paraffin Liquid; Nakarai Tesque, Kyoto, Japan). A morphologically normal single sperm was aspirated tail first from the suspension and injected into the ooplasm of the mature oocyte using a piezo-actuated micromanipulator (Prime Tech, Tsuchiura, Japan) (Kaneko et al., 2013; Kaneko, Kikuchi, Men, et al., 2017; Men et al., 2016; Nakai et al., 2009, 2010). One hour after the injection, the sperm-injected oocytes (20 in all) were transferred to an activation solution consisting of 0.28 mol/L D-mannitol, 0.05 mmol/L CaCl 2 , 0.1 mmol/L MgSO 4 , and 0.1 mg/ml BSA. They were then stimulated with a direct current pulse of 1.5 kV/cm for 60 μs using an electric fusion generator (ECFG21, Nepagene, Ichikawa, Japan). As a negative control, parthenogenetic oocytes were used: 15 oocytes were stimulated with an electrical pulse under the same conditions (1.5 kV/cm for 60 μs) as those for the ICSI groups without pretreatment with an injection pipette (parthenogenetic group). The above procedures were carried out separately on sperm obtained from two host mice (2 replications).
| Assessment of the developmental ability of sperm-injected oocytes
In this study, the developmental competence of oocytes that had been injected with xenogeneic sperm was assessed by in vitro culture to the blastocyst stage. Sperm-injected oocytes or parthenogenetic oocytes were cultured for 6 days, as described previously (Kikuchi et al., 2002).
They were then fixed with acetic alcohol (acetic acid-to-ethanol = 1:3, v/v) and stained with 1% aceto-orcein. Any embryo with an apparent blastocele consisting of more than 10 cells was defined as a blastocyst.
The rate of blastocyst formation and the total number of cells in the blastocysts were assessed.
| Histological analyses
Testicular fragments before transplantation and those excised from the grafts in mice were embedded in paraffin, and then cut into sections 6 μm thick and stained with hematoxylin and eosin. The seminiferous cord/tubule cross-sections were then sorted into the following categories, as described previously (Kaneko et al., 2008, 2012, 2013, 2014; Kaneko, Kikuchi, Men, et al., 2017).
| Statistical analyses
The mean weight of four sets of 25 fragments before grafting was 23.6 ± 2.0 (±SEM) mg for MS piglets and 25.8 ± 0.5 mg for WC piglets.
When the total weight of grafted tissues recovered from a single mouse exceeded the 95% confidence limit (19.6-27.6 mg for MS grafts and 24.8-26.8 mg for WC grafts), porcine testicular tissues were judged to have grown in the mouse, and data obtained from such mice were subjected to statistical analyses. The effects of donor strains and time after grafting on testis development were compared between the groups using two-way ANOVA. Testis development in relation to time after grafting in each group was examined using one-way ANOVA. When a significant effect was detected by ANOVA, the difference between two means was determined by Student's t test and differences among more than two means were determined by Tukey's test. The General Linear Models of Statistical Analysis Systems, ver 9.4 (SAS Inc., Cary, NC, USA), was used for these analyses. Data are expressed as mean ± SEM unless otherwise indicated. Differences at p < 0.05 were considered significant.
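The growth criterion above (total recovered weight exceeding the 95% confidence band of the pre-graft fragment weight) can be sketched in a few lines. The 1.96·SEM normal approximation and the example recovered weight below are assumptions for illustration only, not the exact procedure used by the authors; the resulting band for the MS fragments is close to the reported 19.6-27.6 mg.

```python
def ci95(mean, sem):
    # Normal-approximation 95% confidence interval: mean +/- 1.96 * SEM.
    half = 1.96 * sem
    return (mean - half, mean + half)

def judged_grown(recovered_mg, pregraft_mean_mg, pregraft_sem_mg):
    # Grafts in a host mouse are judged to have grown when the total
    # recovered weight exceeds the upper 95% confidence limit of the
    # pre-graft fragment weight.
    _, upper = ci95(pregraft_mean_mg, pregraft_sem_mg)
    return recovered_mg > upper

# MS pre-graft fragment sets: 23.6 +/- 2.0 (SEM) mg.
print(ci95(23.6, 2.0))
print(judged_grown(35.0, 23.6, 2.0))  # hypothetical recovered weight
```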
| Growth of testicular tissue
Growth of porcine testicular tissue, judged using the criteria explained in the data analysis section, was recorded in all of the host mice (n = 28) in the MS group and in 24 out of 26 mice in the WC group (Table 1). The weights of all visible grafted tissues per mouse increased (p < 0.05) with time after grafting in each group. However, those in the MS group were higher (p < 0.05) than in the WC group between days 60 and 90 (Figure 1). After day 120, there were no significant differences in graft weights between the two groups (p > 0.1).
| Differentiation of seminiferous tubules
In the testicular tissue before grafting (Day 0), more than 85% of seminiferous cords contained only gonocytes/spermatogonia and Sertoli cells in the MS and WC piglets (Figures 2, 3a and 3b).
Sixty days after grafting, testicular grafts in the MS group had already developed elongated spermatids or sperm (Figure 3c): the proportions of seminiferous tubule cross-sections containing these cell types were 7.0 ± 3.4% and 2.1 ± 1.6%, respectively (Figure 2a). At the same sampling point, grafts in the WC group showed a very low proportion (0.6 ± 0.5%) of tubules containing round spermatids as the most advanced cell type (Figures 2b and 3d). From days 90 to 120, the proportion of tubule cross-sections containing round spermatids, elongated spermatids or sperm was higher (p < 0.05) in the MS group than in the WC group, whose sperm were first detected on day 120 (Figure 2b). In the MS grafts on day 120, tubule cross-sections containing elongated spermatids or sperm occupied 43% of the total tubules (Figures 2a and 3e), while in the WC grafts, the ratio was only 2%, although tubules containing spermatocytes were predominant (61.1 ± 13.8%).

TABLE 1 Number of host mice in which porcine testicular grafts gained weight and released sperm into the collection medium after being minced into small pieces
| Recovery of sperm
Sperm were recovered from grafts in 2 out of 7 mice in the MS group on day 60 (Table 1 and Figure 4b). On the other hand, in the WC group, sperm were recovered from 1 of 6 mice on day 120 and the ratio increased to 100% on day 180 (Table 1). Most of the sperm in the MS group were morphologically normal and often had cytoplasmic droplets (Figure 4). Sperm in the WC group were morphologically similar to those obtained from the MS grafts.
| Developmental ability of Meishan xenogeneic sperm
The in vitro developmental ability of oocytes that had been injected with sperm obtained from Meishan testicular xenografts, or activated parthenogenetically, is shown in
| DISCUSSION
Indigenous pigs are expected to be a reservoir of unique genetic diversity (Ishihara et al., 2018), since they have adapted to geographically isolated environments for many years. According to a report from the Food and Agriculture Organization of the United Nations (FAO), 14% of pig breeds are categorized as being at risk for continuance (Baumung & Wieczorek, 2015). Preservation and utilization of germplasm from indigenous pigs is valuable for conserving the genetic diversity of pig species. In order to utilize neonatal testis for the conservation of indigenous pigs, it is necessary to know whether their testicular tissue, after cryopreservation, can acquire the capacity to produce sperm by xenografting, and whether these xenogeneic sperm have developmental competence. The findings of the present study clearly indicate that cryopreserved testicular tissue obtained from indigenous Meishan piglets can produce sperm in host mice, and that these xenogeneic sperm have the ability to generate blastocysts.
In the present study, MS grafts gained weight faster than WC grafts between days 60 and 90, but this tendency disappeared after day 120. These findings are consistent with previous studies that have assessed in situ testicular growth in both breeds: testicular weight increased more rapidly in Meishan than in Western crossbreed boars between 70 and 112 days of age (Ford & Wise, 2009), but at around 6 months of age, Meishan testis had a lower weight (252 g/paired testis weight, Harayama et al., 1991) than Western breed testis (320-564 g/paired testis weight, Allrich, Christenson, Ford, & Zimmerman, 1983; FlorCruz & Lapwood, 1978). In addition, histological examination revealed that differentiation of seminiferous cord/tubules in MS grafts was more promoted than that in WC grafts until 120 days after xenografting. Onset of sperm production, defined as the appearance of sperm in the seminiferous tubules, was evident at day 60 in the MS group, but at day 120 in the WC group, as reported in our previous study (Kaneko et al., 2013). The percentages of tubule cross-sections containing elongated spermatids or sperm were higher in the MS group than in the WC group from days 90-120 (36-43% in the MS group and 0.8-2% in the WC group).
Thus, even in the same milieu (i.e., nude mice), Meishan testicular xenografts showed more rapid growth and differentiation than Western breed grafts, a characteristic that appears to be inherent in Meishan testis.
Our vitrification protocol (Dinnyés et al., 2000; Kaneko et al., 2013; Kaneko, Kikuchi, Men, et al., 2017; Somfai et al., 2007) using EG, PVP and trehalose as cryoprotectants has proven to be useful for ultra-rapid cooling of Meishan testicular tissues, as well as testes of Western breeds (Kaneko et al., 2013; Kaneko, Kikuchi, Men, et al., 2017), oocytes and zygotes. Moreover, the present WC grafts, which had been vitrified and stored in liquid nitrogen for more than 5 years, retained the ability to produce sperm, similar to grafts after short-term storage (Kaneko et al., 2013). This suggests that storage of gonadal tissue for periods of up to 10 years after vitrification would be possible.
Development of fertilized oocytes to the blastocyst stage is
an important result indicating that further development to piglets might be possible. We therefore examined the developmental ability of Meishan xenogeneic sperm by injecting them into mature oocytes, and these sperm-injected oocytes developed to the blastocyst stage after 6 days of culture. The quality of the present blastocysts (total number of cells) was similar to that in previous studies (Kikuchi et al., 2002;Somfai et al., 2009) where live piglets were obtained after in vitro fertilization. Our previous ICSI using xenogeneic sperm obtained from Western crossbreed piglets generated live piglets (Kaneko et al., 2013;Nakai et al., 2010). Moreover, the rate of blastocyst formation was 14.6% in the ICSI group, compared with 6.7% in the parthenogenetic group. Considering the above findings, we can infer that at least half of the blastocysts obtained by ICSI resulted from fertilization with sperm obtained from xenografted Meishan tissue. Thus, xenogeneic Meishan sperm are likely to have the ability to support embryonic development to the blastocyst stage.
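As a rough illustration of how the blastocyst-formation rates above (14.6% after ICSI vs. 6.7% parthenogenetic) could be compared, a one-sided Fisher exact test can be computed directly from a 2×2 table. The counts below are hypothetical stand-ins chosen only to match the reported percentages (6/41 ≈ 14.6%, 2/30 ≈ 6.7%); this test is not part of the authors' analysis.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    # One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    # probability of observing >= a successes in the first row under
    # fixed margins (upper hypergeometric tail).
    n1, n2, k = a + b, c + d, a + c
    denom = comb(n1 + n2, k)
    return sum(comb(n1, x) * comb(n2, k - x)
               for x in range(a, min(n1, k) + 1)) / denom

# Hypothetical counts matching the reported rates:
# 6/41 blastocysts after ICSI vs. 2/30 parthenogenetic.
p = fisher_one_sided(6, 35, 2, 28)
print(p)
```

Note that `math.comb(n, k)` conveniently returns zero when `k > n`, so out-of-range table configurations contribute nothing to the sum.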
In summary, we have cryopreserved testicular tissues obtained from neonatal Meishan boars and transplanted them into nude mice. The Meishan xenografts grew and differentiated faster than xenografts from Western crossbreed piglets and produced morphologically normal sperm. Sperm retrieved from porcine xenografts were proven to have the ability to generate blastocysts after ICSI.
Testicular tissue from indigenous pigs can provide a genetic reservoir enabling conservation of rare breeds.

ACKNOWLEDGMENTS

Ford, J. J., & Zimmerman, D. R. (1983). Pubertal development of the boar: Age-related changes in testicular morphology and in vitro production of testosterone and
Evolution of the phases and the polishing performance of ceria-based compounds synthesized by a facile calcination method
Ceria-based compounds with additions of La and F (CLF compounds) were prepared by using industrial-grade fluorinated lanthanum cerium carbonate as a precursor via a facile calcination method. The evolution of phase structures of the compounds during preparation and the relationship between the structure and the polishing performance were investigated. The compounds consist of three phases: CeO2, LaOF, and LaF3. The phase component could be controlled by tuning the calcination process. A higher degree of fluorination and a higher calcination temperature led to the formation of more LaOF and less LaF3 phases. The LaOF phase yields a higher stock removal rate. The best polishing efficiency was achieved with a LaOF phase ratio around 18%. Intermetallic LaF3 is a low-hardness phase and is easily crushed during polishing, which lowers the removal rate and shortens the useful life of the polishing powder.
Introduction
In recent years, ceria-based compounds have been paid considerable attention for their potential applications in emerging environmental and energy related technologies. [1][2][3][4] They are widely used for polishing various silica-containing glass materials because of their unique mechanical and chemical polishing performance. [5][6][7][8] Usually, the cutting efficiency of pure CeO 2 is lower than that of ceria-based compounds added with La. [9][10][11] With the development of the precision optics industry and the emergence of new applications, the requirements for a neat finish of the glass surfaces are getting more stringent. As a result, rare earth polishing compounds with the addition of La are getting more attention in the market. 12,13 Ceria-based polishing materials are fluorinated during the preparation of the precursors, which results in the formation of new phases in the final products. 14,15 These phases play an important role in determining the hardness, the crystal structure, and the morphology of the polishing particles, which in turn influences the polishing performance. Fluorine can form two phases, namely LaOF and LaF 3 , in the ceria matrix, and a small number of F atoms dissolve in the lattice of ceria. 16,17 The F atoms can significantly improve the chemical activity of ceria in the polishing process as well as enhance its polishing performance. 18,19 Therefore, ceria-based compounds added with La and F (CLF compounds) have gained much interest and became promising polishing materials. Previous studies showed that the behavior of the F atoms and the phase structures, including the formation and the distribution of CeO 2 , LaOF, and LaF 3 , are closely related to the polishing performance. 20,21 However, it is still not clear how to control the phase components in ceria-based polishing powders. More importantly, there are few reports on what role the phases play in the polishing performance.
22,23 In this work, the evolution of the structure of the CLF compounds during preparation and the relationship between the structures and the polishing performance were investigated. The influence of the phase components and the role of the different phases in the polishing process were discussed. In addition, the preparation method of the CLF compound polishing powder was optimized. The work provides an insight into the mechanism by which La and F improve the polishing performance, and sets out fundamental guidelines for the preparation of CLF compounds with an optimal polishing performance.
Preparation of the CLF compound powders
Industrial-grade cerium lanthanum carbonate was added into a reactor containing a given volume of deionized water to obtain a mixture with a solid content of 50%. Then, the mixture was stirred for 30 min to obtain a solution and different amounts of HF (30%) were added. After 2 h of reaction at 80 °C, a 0.5 mol/L NH 4 HCO 3 solution was gradually added until the precipitation was complete. The prepared mixture was then filtered, washed, and dried at room temperature to obtain the precursor materials with different fluorine contents. All precursor materials were milled to obtain a smaller particle size with a similar particle size distribution (PSD) via a ball-milling process. The precursor materials were then calcined in a muffle furnace to obtain the final CLF compound powders.
Characterization
The particle size distribution (PSD) of the polishing powders was measured using a laser particle size analyzer (Malvern 2000E) and the specific surface area (SSA) of the polishing powder was measured with a BET system (H300-2000A). The crystal structures were determined by X-ray Diffraction (XRD) on an X'Pert Powder 003 system. The morphology and the microstructures were analyzed by transmission electron microscopy (TEM) on a JEOL JEM-2100F system. The polishing performance was checked by using a 9B polisher to work on the normal commercial K9 glass. The surface roughness of the polished glass was evaluated by an EDGE QQAFM-01 atomic force microscope (AFM).
Influence of the F content on the CLF compound powders
The precursors with a fluorine content of 2%, 5%, and 10% were calcined from 800 to 1000 °C for 3 h. The average particle size of the samples was measured and summarized in Fig. 1. Normally, the particle size increases with the calcination temperature. The fluorine content significantly influenced the particle size. At the same calcination temperature, the highest fluorine content resulted in the largest particles.
The TEM micrographs of the samples with a fluorine content of 2%, 5%, and 10% obtained at 950 °C for 3 h are shown in Fig. 2. Higher fluorine contents result in larger particles at the same calcination temperature, which agrees with the results from the laser analyzer. The particles with the lowest fluorine content of 2% are agglomerated. The agglomerates are composed of smaller particles with an irregular polygonal shape, which indicates that the crystallization degree of the particles is lower. When the fluorine content increases, the particles adopt a hexagon-like morphology and present a higher crystallization degree. The addition of more F atoms benefits crystal growth, so the particles with a higher fluorine content present a more sufficient crystallization degree.
Evolution of the phases of the CLF compounds
The cerium lanthanum precursors with 0%, 2%, 5%, and 10% fluorine were calcined at 950 °C for 3 h. Fig. 3 shows the XRD diffraction patterns of the samples, in which the diffraction peaks of CeO 2 , LaOF, and LaF 3 could be indexed. The La 2 O 3 phase was not observed in the XRD pattern, which indicates that both La and F atoms dissolved into the CeO 2 lattice of the cubic fluorite structure. The characteristic peak of CeO 2 shifted to lower angles with increasing F content. This is because the solid solution of La atoms and a large number of F atoms increases the inter-planar spacing of CeO 2 , which results in a shift of the characteristic peak. Noticeably, the phase components of the samples with different fluorine contents are diverse. The diffraction patterns were analyzed using the X'Pert HighScore software to calculate the phase components of the samples. Fig. 4 lists the proportion of different crystal phases in the samples with different fluorine contents.
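Commercial packages like the X'Pert HighScore software mentioned above automate the phase quantification. As a simplified sketch, semi-quantitative phase fractions can be estimated from strongest-peak intensities with the reference-intensity-ratio (RIR) method, w_i = (I_i/RIR_i) / Σ_j (I_j/RIR_j); the intensities and RIR values below are illustrative placeholders, not the measured values from this work.

```python
def rir_phase_fractions(peaks):
    # peaks: phase name -> (strongest-peak intensity, RIR value).
    # Returns weight fractions w_i = (I_i / RIR_i) / sum_j (I_j / RIR_j).
    scaled = {phase: intensity / rir
              for phase, (intensity, rir) in peaks.items()}
    total = sum(scaled.values())
    return {phase: s / total for phase, s in scaled.items()}

# Illustrative numbers only (arbitrary intensities and RIR values).
fractions = rir_phase_fractions({
    "CeO2": (1000.0, 12.0),
    "LaOF": (300.0, 10.0),
    "LaF3": (50.0, 4.0),
})
print(fractions)
```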
The proportion of the LaF 3 phase decreases and that of the LaOF phase increases when the F content increases. A La atom is larger than a Ce atom. When the La atoms replace the Ce atoms in the structure, the lattice is distorted and defects are created near the area where the La atom entered the lattice. F atoms have a small size and are likely to occupy the high-energy defect sites and form LaOF, reducing the density of defects and leading to a more sufficient crystallization degree. The more F atoms dissolve into the La-doped CeO 2 phase, the more LaOF can be obtained. Therefore, controlling the fluorine content is an efficient way to tune the phase components in our method. Fig. 5 shows that the particle size of the CLF compounds has a close relationship with the LaOF content. The particle size increases with the LaOF content, which means that a higher F content can facilitate particle growth. Fig. 6 shows the XRD patterns of a sample prepared by calcining the precursor with a fluorine content of 10% from 800 °C to 950 °C. The intensity of the diffraction peaks increased and the half-width of the diffraction peaks decreased with increasing calcination temperature. This indicates that a higher calcination temperature results in a more sufficient crystallization degree and a larger grain size in the CLF compound powders. The LaF 3 phase was found in the samples obtained at a lower temperature but was no longer present above 900 °C. Therefore, a high enough temperature can avoid the presence of the LaF 3 phase in the product. Fig. 7 demonstrates a comparison of the infrared spectroscopy results for pure ceria and the CLF compounds with an F content of 5% prepared by calcination at 800 °C and 1000 °C. In the figure, the peaks at 478.92 cm⁻¹ and 1241.40 cm⁻¹ are due to the stretching vibration of Ce-O. The stretching vibration of the La-F bond appears near 1123.10 cm⁻¹ and that of F-O is around 1048.74 cm⁻¹.
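The link stated above between peak half-width and grain size is commonly quantified with the Scherrer equation, D = Kλ/(β·cosθ). The sketch below assumes Cu Kα radiation and a shape factor K = 0.9, and the example peak position near 28.5° 2θ (roughly the CeO2 (111) reflection) is only illustrative, not a value reported in this work.

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg,
                     wavelength_nm=0.15406, shape_factor=0.9):
    # Crystallite size D = K * lambda / (beta * cos(theta)),
    # with beta the peak FWHM in radians and theta the Bragg angle.
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return shape_factor * wavelength_nm / (beta * math.cos(theta))

# A peak that sharpens from 0.4 deg to 0.2 deg FWHM doubles the
# estimated crystallite size, matching the trend described for Fig. 6.
print(scherrer_size_nm(0.4, 28.5), scherrer_size_nm(0.2, 28.5))
```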
The infrared spectrum of the calcined fluorocarbonate product also shows the vibration peak of Ce-O. In addition, the vibration peaks of La-F and F-O also appeared in the calcined fluorocarbonate. The presence of LaF 3 and LaOF compounds was thus also demonstrated, indicating that the La and F atoms are doped into the crystal lattice of CeO 2 and precipitate to form LaF 3 and LaOF phases.
Evaluation of the polishing performance
Cerium-based polishing powders with different phase ratios were obtained by adjusting the fluorine content. The powders were mixed with water to form an aqueous slurry with 10% solid content to polish the K9 glass on a 9B polisher with a polishing mat. The polishing time was fixed at 30 min with a polishing pressure of 20 kPa, along with a rotation speed of the plates of 30 rpm and a slurry flow rate of 1000 ml min⁻¹. The material removal rate is defined by the change in the mass of the glass substrates before and after polishing. Fig. 8 shows the dependence of the material removal rate (MRR) and the average roughness (R a) of the glass surface on the percentage of the LaOF phase in the matrix of the polishing powder. As shown in Fig. 8, the MRR increases with the LaOF phase ratio, saturates around an 18% LaOF phase ratio, and then decreases. The best R a was achieved around a 10% LaOF phase ratio.
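With the MRR defined as above from the mass change of the glass, the bookkeeping is a one-liner. The masses and the mg/min unit below are hypothetical choices for illustration, since the paper does not state its exact unit convention.

```python
def material_removal_rate_mg_per_min(mass_before_g, mass_after_g,
                                     polish_time_min):
    # MRR from the change in glass substrate mass before and after
    # polishing, expressed here in mg per minute.
    return (mass_before_g - mass_after_g) * 1000.0 / polish_time_min

# Hypothetical example: 30 mg of glass removed in a 30 min run.
print(material_removal_rate_mg_per_min(10.000, 9.970, 30.0))
```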
According to the Cook theory, 24 CeO 2 is the main functional component of ceria-based compounds for polishing SiO 2 -containing glass. During glass polishing, two functional processes take place: mechanical grinding and the chemical action of Ce 4+ in the slurry solution on SiO 2 . The LaOF crystal phase of the ceria-based polishing powder with high fluorine content increases the hardness of the polishing particles and enhances the effect of mechanical polishing on the glass surface. At the same time, the presence of the LaOF phase causes more distortion in the lattice, which increases the energy and narrows the energy band gap of Ce 4+ . As a result, the chemical reaction of Ce 4+ with SiO 2 is enhanced by the higher distortion energy field. The hardness and the integrity of the polishing particles/grains grow further when the LaOF phase ratio is over 18%, which increases the particle size and reduces the contact area with the polished glass surface, lowering the MRR.
When the phase ratio of LaOF is low, the effect of mechanical polishing is not enough to remove the grinding marks left by the previous step. For this reason, the surface roughness of the glass is higher. A further increase in the phase ratio of LaOF enhances both the mechanical and chemical polishing actions. The surface quality of the glass is optimal for a 10% LaOF phase ratio. When the phase ratio of LaOF increases further, the increased hardness of the polishing particles adds more scratches to the surface of the polished glass and R a increases.
The surface roughness was measured by using AFM. The AFM patterns present height differences using color contrast, and the highest and lowest values could be obtained by using the AFM software. In this work, the average height difference between the highest and lowest zones was calculated as the surface roughness. The 2D and 3D AFM scans of K9 glass before (Fig. 9a and c) and after (Fig. 9b and d) polishing with the 10% LaOF polishing powder are shown in Fig. 9. It can be found that the surface roughness is 34.8 nm before polishing but drops to 5.4 nm after polishing, which significantly improves the surface quality and meets the surface quality requirements for optical glass. The LaF 3 phase is maintained at a low percentage level, which indicates that the F and La atoms are more likely to form a LaOF phase in the CeO 2 matrix and lower the free energy. Fig. 10 shows the HRTEM micrographs of the samples with different F content obtained at different temperatures. The insets in (a) and (c) are the morphology of the samples calcined at 800 °C and 900 °C, respectively. The dashed lines separate the different phases. As shown in Fig. 10, LaF 3 is an intermetallic phase precipitated in the main matrix. The Mohs hardness of the intergranular LaF 3 precipitate is 4.5, while the Mohs hardness of CeO 2 is about 7.0, and that of the ceria-based rare earth polishing powder is higher than 7.0 because the La and F atoms dissolve into the matrix phases to produce solution strengthening. 25

Fig. 7 The infrared spectra of pure ceria powder calcinated at (a) 800 °C, and the CLF compound powder with 5% F calcinated at (b) 1000 °C and (c) 800 °C.

As sketched in Fig. 11, the stress is easily concentrated around the LaF 3 phase due to the pressure during polishing, which causes a compressive force between the particles and the glass surface that makes a fraction of the LaF 3 phase, and thus the particle, break. This lowers the polishing performance of the particles.
Therefore, the LaF 3 phase is damaging to the fluorine-containing ceria-based polishing powder particles. The presence of the LaF 3 phase in the polishing compound must therefore be reduced to extend the polishing life of the powder. According to the data in Fig. 4, the ratio of LaOF can be controlled by adjusting the degree of fluorination in the precursor material or by formulating a more reasonable calcination process. In addition, when formulating the calcination process, the ratio of the harmful LaF 3 phase can be reduced by increasing the calcination temperature or prolonging the holding time.
Conclusion
In summary, CLF compound polishing powders were prepared by a facile calcination method. The compounds are composed of three phases: CeO 2 , LaOF, and LaF 3 . The investigation of the evolution of the phases indicates that the phase components of the CLF compounds can be controlled by tuning the fluorine content of the precursor and increasing the calcination temperature. The LaF 3 phase is an intermetallic compound precipitated in the main compound matrix. The polishing particles with more LaF 3 phase are prone to cracking during polishing, which can deteriorate the MRR. The LaF 3 phase is responsible for shortening the polishing lifetime of the CLF compounds. The LaOF phase improves the polishing performance. The LaOF phase in the CeO 2 matrix lattice gives the highest MRR when the ratio of LaOF in the polishing particles reaches 18%. The lowest surface roughness (R a) is achieved when the ratio of LaOF is around 10%. A reasonable content of LaOF and little LaF 3 are necessary to prepare CLF compound powders with high polishing performance.
Conflicts of interest
There are no conflicts to declare.
Novel Probable Glance at Inflammatory Scenario Development in Autistic Pathology
Autism Spectrum Disorder (ASD) is characterized by persistent deficits in social communication and restricted-repetitive patterns of behavior, interests, or activities. ASD is generally associated with chronic inflammatory states, which are linked to immune system dysfunction and/or hyperactivation. The latter might be considered as one of the factors damaging neuronal cells. Several cell types trigger and sustain such neuroinflammation. In this study, we traced different markers of immune system activation on both cellular (immune cell phenotypes) and mediatory levels (production of cytokines) alongside adverse hematology and biochemistry screening in a group of autistic children. In addition, we analyzed the main metabolic pathways potentially involved in ASD development: energy (citric acid cycle components), porphyrin, and neurotransmitter metabolism. Several ASD etiological factors, like heavy metal intoxication, and risk factors—genetic polymorphisms of the relevant neurotransmitters and vitamin D receptors—were also analyzed. Finally, broad linear regression analysis allowed us to elucidate the possible scenario that led to the development of chronic inflammation in ASD patients. Obtained data showed elevated levels of urinary cis-aconitate, isocitrate, alfa-ketoglutarate, and HMG. There were no changes in levels of metabolites of monoamine neurotransmitters, however, the liver-specific tryptophan kinurenine pathway metabolites showed increased levels of quinolinate (QUIN) and picolinate, whereas the level of kynurenate remained unchanged. Abovementioned data demonstrate the infringement in energy metabolism. We found elevated levels of lead in red blood cells, as well as altered porphyrin metabolism, which support the etiological role of heavy metal intoxication in ASD. 
Lead intoxication, the effect of which is intensified by a mutation of the VDR-Taq and MAO-A, leads to quinolinic acid increase, resulting in energy metabolism depletion and mitochondrial dysfunction. Moreover, our data backing the CD4+CD3+ T-cell dependence of mitochondrial dysfunction development in ASD patients reported in our previous study leads us to the conclusion that redox-immune cross-talk is considered a main functional cell damaging factor in ASD patients.
Autism Spectrum Disorder (ASD) is characterized by persistent deficits in social communication and restricted-repetitive patterns of behavior, interests, or activities. ASD is generally associated with chronic inflammatory states, which are linked to immune system dysfunction and/or hyperactivation. The latter might be considered as one of the factors damaging neuronal cells. Several cell types trigger and sustain such neuroinflammation. In this study, we traced different markers of immune system activation on both cellular (immune cell phenotypes) and mediatory levels (production of cytokines) alongside adverse hematology and biochemistry screening in a group of autistic children. In addition, we analyzed the main metabolic pathways potentially involved in ASD development: energy (citric acid cycle components), porphyrin, and neurotransmitter metabolism. Several ASD etiological factors, like heavy metal intoxication, and risk factors-genetic polymorphisms of the relevant neurotransmitters and vitamin D receptors-were also analyzed. Finally, broad linear regression analysis allowed us to elucidate the possible scenario that led to the development of chronic inflammation in ASD patients. Obtained data showed elevated levels of urinary cis-aconitate, isocitrate, alfa-ketoglutarate, and HMG. There were no changes in levels of metabolites of monoamine neurotransmitters, however, the liver-specific tryptophan kinurenine pathway metabolites showed increased levels of quinolinate (QUIN) and picolinate, whereas the level of kynurenate remained unchanged. Abovementioned data demonstrate the infringement in energy metabolism. We found elevated levels of lead in red blood cells, as well as altered porphyrin metabolism, which support the etiological role of heavy metal intoxication in ASD. 
Lead intoxication, the effect of which is intensified by mutations of VDR-Taq and MAO-A, leads to a quinolinic acid increase, resulting in energy metabolism depletion and mitochondrial dysfunction. Moreover, our data supporting the CD4+CD3+ T-cell dependence of mitochondrial dysfunction development in ASD patients, reported in our previous study, lead us to conclude that redox-immune cross-talk is a main functional cell-damaging factor in ASD patients.
INTRODUCTION
Autism and its related neurodevelopmental disorder (Autism Spectrum Disorder-ASD) are clinically heterogeneous pathologies, which are caused by a number of factors. Such heterogeneity makes it difficult to single out individual causal elements of this disease(s). However, certain genetic and environmental triggers are already suggested, including molecular/genetic changes affecting brain development (1). According to CDC studies, the number of children diagnosed with ASD has increased over the last decade and ASD currently affects as many as 1 out of 54 individuals (2). Clinical signs of ASD are frequently present at 3 years of age and recent prospective studies in toddlers indicate that abnormalities in social, communicative, and play behavior, which may represent early indicators of autism, can be detected as early as at 14 months of age (3). Abnormalities in language development, delayed mental development, and epilepsy are frequent problems in the clinical profiles of patients with autism, and some patients may exhibit features of clinical regression, in which neurodevelopmental milestones are lost and/or other clinical signs are worsened (4). Cases of ASD are clinically heterogeneous and can be associated in up to 10% of patients with well-described neurological and genetic disorders, such as tuberous sclerosis, fragile X, Rett's and Down syndromes, although in most patients the causes are still unknown (5, 6).
Continuing investigations for a neurobiological basis of ASD support the view that genetic, environmental/toxic (heavy metal intoxication, particularly by lead and mercury), neurological, metabolic, digestive, and immunological factors contribute to its etiology. In particular, there is evidence that suggests an association between ASD and neuroinflammation in anterior regions of the neocortex (7,8), and areas related to cognitive function appear to be affected by inflammation due to activation of microglia and astrocytes (9). In vivo measurements of structural brain changes with magnetic resonance imaging in ASD patients detected gray matter loss in the orbitofrontal cortex and impairment of cognitive functions mediated by the orbitofrontal-amygdala circuit (10,11). Furthermore, markers of oxidative stress are elevated in the orbitofrontal cortex in postmortem samples of ASD patients and in blood of autistic children (12,13).
A strong inflammatory state associated with ASD is currently increasingly reported in literature (14). The inflammatory states observed in ASD patients are mostly related to immune system dysfunction (15). Increased cytokine levels (IL-1β, IL-6, IL-8, and IL-12p40), among others, were found to be associated with impairments in stereotypical behaviors, suggesting that dysfunctional immune responses could affect core behaviors in ASD (16). Over-production of pro-inflammatory cytokines was also demonstrated in vitro, in cultured and stimulated peripheral blood monocytes from ASD children (17). Both Th1 and Th2 cytokines have been reported to be increased in ASD children (18).
As reported by Siniscalco and coauthors, plasma cytokine profiles were different between the two grades of severity of ASD (moderate and mild, according to the Childhood Autism Rating Scale (CARS) test) (19). IL-12p40 levels were higher in the patients with mild disease severity whereas tumor necrosis factor alpha (TNF-α) appeared to be more pronounced in the patients with moderate severity (20,21). It has been suggested that TNF-α levels positively correlate with ASD severity (as tested by Autism Behavior Checklist, ABC), thus being an indicator of ASD phenotype (19).
The immune system is closely related to the pro-/antioxidant homeostasis. The importance of ROS in immune defense is exemplified by their generation and release in the form of an "oxidative burst" by phagocytic cells (e.g., neutrophils and macrophages). Recent studies revealed numerous immunologic abnormalities among children with autism including elevated generation of free radicals in lymphocytes (22). In our previous study, we demonstrated that persistent inflammation shown in ASD patients may lead to depletion of the respiratory burst capability in neutrophils, and accumulation of damage and pathological change resulting in disability and disease (13).
Earlier, it was shown that ASD pathology is also accompanied by mitochondrial dysfunction. As ROS can cause mitochondrial dysfunction, oxidative stress may be a key mechanism by which mitochondria are damaged by factors linked to ASD such as pro-oxidant environmental toxicants (23)(24)(25)(26). Furthermore, dysfunctional mitochondria can create a self-perpetuating cycle of progressive damage that amplifies functional deficits. Indeed, mitochondria are a major source of ROS as well as the target for ROS-mediated damage (27). So, environmental prooxidants may alter the electron transport chain complex I, thus producing higher levels of ROS (28). In turn, ROS can inhibit electron transport chain function and MnSOD activity causing further mitochondrial dysfunction (29). However, a link between bioenergetics and the immune response in autism has not been explored yet.
While our previous study was focused on the role of imbalanced red-ox states in the pathogenesis of autism (13), the current report is dedicated to the immunological aspects of ASD pathology development. Analysis of immune system activation markers (immune cell phenotype and cytokines) is complemented with adverse hematology and biochemistry screening. In addition, we traced energy producing (citric acid cycle) components, neurotransmitter and porphyrin content, genetic polymorphisms of relevant neurotransmitters, and vitamin D. Heavy metal content as a main etiological factor in ASD has been assessed. Finally, broad linear regression analysis allowed us to elucidate the possible scenario that led to the development of chronic inflammation in ASD patients.
Participants
Twenty-four preschoolers aged 3-6 years old participated in the study. Among them, 12 children (age range: 3-6 years old) diagnosed with ASD/MD, who had not previously received any type of treatment, were recruited for this study (autistic group). All children met the DSM-IV criteria for Autistic Disorder, and this diagnosis was also corroborated by psychologists using the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS). Children with PDD-NOS, Asperger syndrome, seizure disorder, current ear infection, uncontrolled asthma, inability to equalize ear pressure, fragile X syndrome, and ongoing treatment with chelation medication were excluded from participation in this study. Written informed consent was obtained from the parents and, when possible, the child. Twelve unaffected siblings of the same age (age range: 3-6 years old), with no history of behavioral or neurologic abnormalities according to parents' reports, were recruited for the comparison group (control group). The protocol was approved by the Ethics Committee of YSMU, and all parents signed informed consent.
Sample Collection and Preparation
Blood samples from fasting patients (10.0 mL) were collected before 9:00 a.m. into two (5.0 mL per tube) EDTA, trace-element-free tubes (royal blue top; BD Vacutainer, Franklin Lakes, NJ). An aliquot of the whole blood was used for the complete blood count assay. Tubes were centrifuged at 1,500 × g for 15 min at 4 °C within 30 min of the initial blood draw. After centrifugation, plasma, including the white buffy layer, was separated using disposable pipettes and centrifuged again under the same conditions. Plasma samples were transferred into new Eppendorf tubes for further biochemical tests. The cell pellet was resuspended in PBS (10 mM; pH 7.4) and immediately used for respiratory burst analysis and single nucleotide polymorphism (SNP) assay. The RBC pellet formed after the first centrifugation was used for trace element measurements (arsenic, cadmium, lead, mercury, and thallium). To test energy production (citrate, cis-aconitate, isocitrate, alpha-ketoglutarate, succinate, fumarate, malate, and hydroxymethylglutarate), neurotransmitter metabolism (vanilmandelate, homovanillate, 5-hydroxyindoleacetate, kynurenate, QUIN, and picolinate) biomarkers, and the porphyrin content of urine, the first morning urine samples (10.0 mL) were collected into sterile plastic containers. Urine common biochemistry was also tested. Barcoded plasma, red blood cell, and urine aliquots were frozen at −70 °C until measurements were taken.
CBC Count
CBC was analyzed in EDTA-treated whole blood samples using the Sysmex XS 500i fully automated hematology analyzer (Sysmex Corporation, Kobe, Japan).
Urine Measurements
Routine biochemical urine tests were performed on a Cobas Urisys 1100 analyzer (Roche, Switzerland) using appropriate test strips. Energy production (citric acid cycle intermediates: citrate, cis-aconitate, isocitrate, alpha-ketoglutarate, succinate, fumarate, malate, and hydroxymethylglutarate), neurotransmitter metabolism (quinolinate, picolinate, vanilmandelate, homovanillate, 5-hydroxyindoleacetate, and kynurenate) biomarkers, and porphyrin content (uroporphyrins, heptacarboxylporphyrins, hexacarboxylporphyrins, pentacarboxylporphyrins, coproporphyrin I, coproporphyrin III, total porphyrins, precoproporphyrin I, precoproporphyrin II, precoproporphyrin III, and total precoproporphyrins) in the urine were measured using LC-MS/MS spectrometry (Organix Comprehensive Profile; Metametrix, Inc., Duluth, GA). Urine samples were collected in accordance with instructions provided by Metametrix Clinical Laboratory. In brief, participants were instructed to void (empty bladder) before going to bed and then place a collection basin over their home toilets. Participants used a pipette to place 12 mL of their first morning sample plus any overnight sample into a test tube. At the laboratory, urine samples were frozen and then shipped to Metametrix Clinical Laboratory for biochemical analyses.
Packed Red Blood Cell Elements Assay
Potentially toxic mineral elements (arsenic, cadmium, lead, mercury, and thallium) were measured by Doctor's Data (St. Charles, IL, USA) in packed RBC. Packed red blood cells were spun for 15 min in a centrifuge at 1,500×g, the plasma and buffy coat were removed, and the remaining packed red blood cells were submitted for testing. Elemental analysis was performed after digesting an aliquot of sample using a temperature-controlled microwave digestion system (Mars5; CEM Corp; Matthews, SC). The digested sample was analyzed by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) (Elan DRCII; Perkin Elmer Corp; Shelton, CT). Results were verified for precision and accuracy using controls from Doctor's Data.
Flow Cytometry Assay of Respiratory Burst
Spontaneous and N-formyl-L-methionyl-L-leucyl-L-phenylalanine (fMLP)-induced respiratory burst of monocytes was determined by flow cytometry using Cayman's Neutrophil/Monocyte Respiratory Burst Assay Kit according to the manufacturer's instructions (Cayman Chemical, Ann Arbor, MI, USA). The flow cytometry analyses of respiratory burst were based on dihydrorhodamine 123 (DHR) oxidation to the fluorescent rhodamine 123 (RHO). Respiratory burst was activated in monocytes by fMLP (100 nM, 10 min, 37 °C) (Sigma, St. Louis, USA). Samples without any external stimulus were used to measure spontaneous respiratory burst. The gated neutrophil population was monitored by determining the RHO relative fluorescence intensities (expressed as a mean channel number, MCN) on a FACSCalibur instrument using CellQuest software (BD Biosciences, Heidelberg, Germany).
Neurotransmitter Related SNP Genotyping
Three genes (five polymorphisms) encoding essential proteins involved in neurotransmitter metabolism were studied: catechol-O-methyltransferase (COMT V158M and COMT H62H), vitamin D receptor (VDR Taq and VDR Fok), and monoamine oxidase A (MAO-A R297R). The genomic location, known nonsynonymous coding variants, and the number of single nucleotide polymorphism (SNP) markers were genotyped at each locus. SNPs were selected to efficiently assay common variation in the genes of interest. SNPs were genotyped on the Illumina BeadArray platform using the GoldenGate genotyping technology as part of a 384-SNP customized assay. SNPs were genotyped by Holistic Health International, LLC (279 Walkers Mills Road, Bethel, ME). Genome data were uploaded to the dbSNP repository, Variation File submission: SUB10619200.
Infectious Diseases Assay
Antibodies (IgM/IgG) to Chlamydia trachomatis, Toxoplasma gondii, Herpes simplex virus (HSV-1,2), and Cytomegalovirus (CMV) were tested by chemiluminescence assay on a Cobas E411 (Roche, Switzerland), using appropriate test kits purchased from the manufacturer. Additionally, CMV and Epstein-Barr virus (EBV) were screened by the conventional PCR method using a GeneAmp 9700 thermal cycler (Applied Biosystems, USA). DNA samples were obtained from peripheral blood, monocytes, and throat swabs. PCR detection kits for the aforementioned infections were purchased from DNA-Technology (DNA-Technology LLC, Russia).
Data Analysis
Data analysis was carried out in GraphPad InStat software Version 3.10 (GraphPad Software, San Diego, CA). The non-parametric Wilcoxon rank sum test was applied. ASD patient data were compared to the appropriate control (comparison) group or to reference values (hypothetical median). P < 0.05 was used to indicate statistical significance.
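The group comparison described above can be sketched in code. This is an illustrative example only: the study used GraphPad InStat, while the snippet below applies the same non-parametric Wilcoxon rank sum test via SciPy, and the metabolite values are hypothetical placeholders, not study data.

```python
# Hedged sketch of the statistical comparison: Wilcoxon rank-sum test
# between an ASD group and a control group. Values are hypothetical
# placeholders (arbitrary units), NOT data from the study.
from scipy.stats import ranksums

asd_group = [12.1, 14.3, 15.0, 13.8, 16.2, 14.9, 13.1, 15.7, 14.0, 16.8, 13.5, 15.2]
control_group = [10.2, 9.8, 11.1, 10.5, 9.4, 10.9, 11.3, 9.9, 10.1, 10.7, 9.6, 11.0]

stat, p_value = ranksums(asd_group, control_group)
print(f"W = {stat:.3f}, p = {p_value:.5f}")
if p_value < 0.05:  # significance threshold used in the paper
    print("difference is statistically significant")
```

`ranksums` is the rank-sum member of the Wilcoxon/Mann-Whitney family of tests for two independent samples, which matches the "non-parametric Wilcoxon rank sum test" named in the text.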
The study of serum biochemistry parameters demonstrates diminished content of vitamin B12 (−50%; P < 0.05) and especially vitamin D (−75%; P < 0.01; Figures 2C,D). In addition, we also recorded enhanced (40%; P < 0.05) activity of ASAT. In contrast, GGT showed a significant decrease, as did BIL and creatinine values. At the same time, liver tests such as ALAT and ALP were not significantly different from the control group (Figures 2E,G,H,J,K). Other serum biochemistry parameters, as well as urine biochemistry parameters, remained within reference limits (Figure 2; Table 1).
Potential Etiological and Risk Factors of the Autistic Pathology
Among the many known etiological factors leading to autistic pathology development, we decided to investigate the levels of potentially toxic elements in packed RBC, SNPs of enzymes responsible for the metabolism of some neurotransmitters, as well as the incidence of some potential infectious diseases.
Heavy Metal Changes
Arsenic, cadmium, lead, mercury, and thallium were measured in packed red blood cells (Figure 3). Among all of these elements, lead was the only one with levels exceeding the upper reference limit (more than twofold, P < 0.001).
SNPs
Mutated allele ratios of catechol-O-methyltransferase (COMT V158M-GA and H62H-CT) and vitamin D receptor (VDR Fok-CT) in autistic children and the healthy group were found to be very similar (Figure 4). However, the ratios of the T-mutated alleles of VDR Taq-CT and monoamine oxidase A (MAO-A R297R-GT) were found to be higher in ASD patients (OR = 3.71 and 4.89, respectively; P < 0.05).
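An odds ratio like the OR = 3.71 reported above is derived from a 2×2 table of mutant vs. wild-type allele counts in patients and controls. A minimal sketch with hypothetical counts (the study's actual genotype tables are not reproduced here):

```python
# Odds ratio from a 2x2 allele-count table: OR = (a/b) / (c/d).
# The counts below are hypothetical, NOT the study's genotyping data.
def odds_ratio(case_mut, case_wt, ctrl_mut, ctrl_wt):
    """a = mutant alleles in cases, b = wild-type in cases,
    c = mutant in controls, d = wild-type in controls."""
    return (case_mut / case_wt) / (ctrl_mut / ctrl_wt)

# Hypothetical T-allele counts for a VDR Taq-like polymorphism
print(f"OR = {odds_ratio(13, 11, 6, 18):.2f}")  # -> OR = 3.55
```

An OR above 1 indicates that the mutant allele is over-represented in the patient group; significance is usually checked separately (e.g., with Fisher's exact test).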
Infectious Diseases
Several congenital pathogens were screened in ASD patients (Chlamydia trachomatis, Toxoplasma gondii, HSV-1,2, EBV, and CMV). Neither the immunochemiluminescent test nor PCR detected any infections (excluding single cases), except for CMV, as shown by the detected antibodies (IgG) to CMV (208.6 AU/mL; P < 0.01) with high avidity (97.9%; P < 0.01) (Table 2). PCR analyses for CMV and EBV were done in all possible targets: blood, monocytes, as well as throat swabs.
Citric Acid (Krebs) Cycle Intermediates
The levels of citrate, cis-aconitate, isocitrate, alpha-ketoglutarate, succinate, fumarate, malate, and hydroxymethylglutarate (HMG) were measured to assess citric acid cycle function in ASD patients. The urinary levels of cis-aconitate, isocitrate, alpha-ketoglutarate, and HMG were found to be elevated (by 55-76%; P < 0.05) in autistic children (Figure 5). All of the indicated metabolites are components of the first stage of the citric acid cycle: from citrate to succinate.

(Figure legend: Biochemical parameters were measured on automatic biochemical analyzers Roche Cobas C311 and Cobas E411 (Roche, Switzerland) using appropriate test kits purchased from the manufacturer. Blue graph: control data; red graph: ASD data. The non-parametric Wilcoxon rank sum test was applied; ASD patients' data were compared to the appropriate control group. *P < 0.05, **P < 0.01.)
Neurotransmitter Metabolites
Study of the neurotransmitter metabolites in the urine demonstrated that there were no changes in levels of monoamine metabolites. In particular, concentrations of vanilmandelate (epinephrine and norepinephrine degradation marker), homovanillate (dopamine degradation marker), and 5-hydroxyindoleacetate (serotonin marker) were within the reference ranges. At the same time, study of the liver-specific tryptophan kynurenine pathway metabolites showed increased levels of QUIN and picolinate (by 125% and 94%, respectively; P < 0.001), whereas the level of kynurenate remained unchanged (Figure 6).
Linear Regression Analysis
In order to clarify the relationships between the measured parameters and to deduce the possible way of ASD pathology development, we computed Pearson correlation coefficients. The following correlations were investigated: 1. relations between the potential ASD etiological and risk factors and the changes in T-helper cells; 2. correlation of mitochondrial damage markers with the T-helper cells; 3. the pathway of effector cell (monocyte respiratory burst) activation; and 4. correlation between lead poisoning and precoproporphyrin appearance in urine of ASD patients. Three factors (infection, SNP, heavy metals) having etiological roles in the development of the autistic pathology were compared to the number of helper T-cells (CD3+/CD4+). As shown above, all patients were characterized by the appearance of high-avidity anti-CMV IgG. Nevertheless, we did not detect a statistically significant correlation between anti-CMV IgG values and CD3+/CD4+ cell numbers (r = −0.5191; p = 0.1521; Figure 9A). On the other hand, both the VDR Taq SNP and the lead content in packed RBC highly correlated with helper T-cell numbers (r = 0.8757; p = 0.0020 and r = 0.9198; p < 0.0001, respectively; Figures 9B,C). Several metabolites associated with mitochondrial damage (HMG, QUIN, and ASAT) were increased in ASD patients and strongly correlated with the changes in the content of T-helper lymphocytes (r = 0.8338; p = 0.0007; r = 0.8748; p = 0.0002; r = 0.7928; p = 0.0108; Figures 9D-F).
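The correlation screens above can be illustrated as follows. The paired values are hypothetical stand-ins (the r and p values in the text come from the actual patient data), and SciPy's `pearsonr` is used here as a generic implementation of the Pearson coefficient:

```python
# Pearson correlation between two hypothetical paired measurements,
# e.g., RBC lead content vs. CD3+/CD4+ T-helper counts. Placeholder
# data, NOT the study's measurements.
from scipy.stats import pearsonr

lead_rbc = [2.1, 2.8, 3.5, 3.9, 4.4, 5.0, 5.6, 6.1, 6.9, 7.4]
cd4_count = [1.2, 1.4, 1.9, 2.0, 2.3, 2.6, 2.9, 3.1, 3.4, 3.8]

r, p = pearsonr(lead_rbc, cd4_count)
print(f"r = {r:.4f}, p = {p:.2e}")
```

`pearsonr` returns both the coefficient r and a two-sided p value against the null hypothesis of no linear association, matching the (r; p) pairs reported in the text.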
To trace the pathway of effector immune cell activation (monocyte phagocytosis) in autistic children, we analyzed the correlation between a pro-inflammatory cytokine (TNF-α) in different cell populations and the final effector property of monocytes, the respiratory burst. A strong correlation was shown between the levels of TNF-α in lymphocytes and CD3+/CD4+ T-cells (r = 0.9540; p < 0.0001; Figure 10A). Furthermore, lymphocyte- and monocyte-derived TNF-α levels were also strongly correlated to each other (r = 0.9570; p < 0.0001; Figure 10B). Finally, a significant correlation was shown between monocyte TNF-α and the respiratory burst of the same cells (r = 0.8942; p = 0.0002; Figure 10C).
We did not detect any statistically significant correlations between the enhanced urinary porphyrin levels and the lead content in RBC. In order to find a possible relationship between the urinary porphyrin levels and the increased concentration of lead, we attempted to find the porphyrin with the largest data scatter. Precoproporphyrin I was best suited for this requirement (SD = 54.6%). A direct correlation between the values of lead and precoproporphyrin I was not found (r = −0.4145; p = 0.2050; Figure 11A). An attempt was made to find a factor mediating the dependence of the precoproporphyrin I level on the lead content. An inverse correlation between the lead content and monocyte count is shown in Figure 11B (r = −0.7154; p = 0.0089). However, a positive correlation was shown between monocyte count and precoproporphyrin I content (r = 0.7108; p = 0.0142; Figure 11C).
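The "largest data scatter" screen used to single out precoproporphyrin I amounts to picking the analyte with the highest relative standard deviation (SD expressed as a percentage of the mean). A sketch with hypothetical urinary values:

```python
# Select the porphyrin with the largest relative SD (coefficient of
# variation). Values are hypothetical placeholders, NOT measured data.
import statistics

porphyrins = {
    "precoproporphyrin I": [10, 25, 4, 40, 18, 33],
    "coproporphyrin III": [52, 55, 49, 57, 51, 54],
}

def rel_sd(values):
    # sample SD expressed as % of the mean
    return 100 * statistics.stdev(values) / statistics.mean(values)

most_scattered = max(porphyrins, key=lambda name: rel_sd(porphyrins[name]))
print(most_scattered)  # -> precoproporphyrin I
```

Screening on relative rather than absolute SD makes analytes with very different concentration ranges comparable, which is presumably why the SD was reported as a percentage in the text.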
DISCUSSION
Our data describe autism pathology development by combining data on environmental toxicants, genetic predisposition, bioenergetics, and finally immune system overactivation. Environmental metal and metalloid intoxications (lead, mercury, aluminum, and arsenic) are known as potential etiological factors leading to ASD (30). Indeed, we only observed enhanced levels of lead in packed RBCs of autistic children. Enhanced urine porphyrin content in autistic patients is typical for this pathology and is related to heavy metal intoxication (31)(32)(33). Here also, we showed disordered porphyrin metabolism, manifested predominantly by elevated concentrations of urinary coproporphyrins, total porphyrins, as well as uroporphyrins. However, we did not find any significant correlations between the increased level of urinary porphyrins and the lead concentration in RBC. At the same time, abnormalities in porphyrin metabolism support the etiological role of heavy metal intoxication in ASD development, given the increased levels of lead and unchanged mercury and arsenic.
A simple complete blood count revealed intoxication (diminished RBC parameters and enhanced RDW) and an inflammatory state (increased numbers of platelets, monocytes, and neutrophils) in ASD children. The platelet count increase in ASD patients was shown earlier and is typical for monogenetic and complex neurological diseases, as several parallels exist between platelets and brain cells (34)(35)(36).
Noteworthy are the diminished levels of vitamins B12 and D on the one hand, and enhanced ASAT on the other, in the blood serum of autistic children. A physiological level of ALAT with a simultaneously enhanced ASAT level indicates an extrahepatic origin of the latter enzyme. ASAT is expressed in two forms: cytosolic (in the red blood cells and heart) and mitochondrial (liver, brain, etc.) (37). The cytosolic origin of the enhanced ASAT observed in our study can be excluded due to the physiological MCHC value shown in the CBC data. This makes a mitochondrial origin of the increased ASAT in ASD patients more probable. As stated above, due to the low level of ALAT, a liver origin seems unlikely. On the other hand, the ASAT enzyme is assigned important functions in astrocytes and neurons (38). In addition, as shown by Guidetti and coauthors (39), by catalyzing the formation of the neuroinhibitory metabolite kynurenic acid, mitochondrial ASAT may be involved in a range of physiological and pathological processes associated with glutamatergic and nicotinergic signaling. Some studies demonstrate a possible increase in permeability of the blood-brain barrier (BBB) in ASD (40). Therefore, it can be speculated that neuronal cell damage and increased BBB permeability in ASD patients underlie the elevation of ASAT levels in blood serum.
Lead intoxication might be an etiological factor for neuronal cell damage and consequent ASD development. A negative association between blood lead levels and vitamin D/B12 concentrations has been reported (41)(42)(43). Probably, diminished vitamin D, beyond its classical Ca/P-regulating role, would have a much more negative effect on immune system function in autistic children.
The most important form of vitamin D, cholecalciferol (vitamin D3), is known to stimulate differentiation of immune cells (44,45). This concept was supported by observations that showed different expression of the vitamin D receptor (VDR) and α-1-hydroxylase at different stages of macrophage differentiation. Some studies show that human macrophages are able to synthesize 1,25(OH)2D3 upon exposure to IFNγ (43)(44)(45)(46). VDR gene polymorphisms were identified in various diseases, as shown in the reviews of Valdivielso and Fernandez (47) and Uitterlinden et al. (48). Polymorphisms in Bsm-I, Taq-I, Apa-I, and Fok-I were associated with renal diseases, cancer, nephrolithiasis, and diabetes. In addition, some authors showed a correlation between VDR gene polymorphisms and susceptibility to asthma and atopic dermatitis (49,50). Abnormalities in the vitamin D receptor and low levels of vitamin D were both linked to Parkinson's disease and autism (50)(51)(52). It is known that lead exposure may be associated with an increased risk of ASD (41,42). Polymorphisms in the genes coding for VDR may affect susceptibility to lead exposure (53). Our results convincingly demonstrate the enhanced ratio of the T allele at position Taq-I (rs731236) of the VDR gene (OR = 3.71) in autistic children. The ratio of the T allele at position Fok-I (rs2228570) was not significantly different in ASD patients. An association between VDR Taq polymorphism and susceptibility to ASD was shown earlier (52,54).
As it was stated above, vitamin D3 plays a crucial role in the development and function of the brain (55), and vitamin D can therefore be implicated in neuropsychiatric disorders, such as autism spectrum disorder (56). By interaction with the specific VDR, the developmental and functional consequences of vitamin D in the nervous system can be modulated. It was shown that patients bearing mutations in their VDR receptor gene might have a different activation threshold than the wild form of the receptor (57,58).
In addition, ASD patients have been described with enhanced ratios (OR = 4.89) of the T allele of monoamine oxidase A (MAO-A R297R-GT). It is known that children with the low-activity MAO-A allele have both lower intelligence quotients and more severe autistic behavior than children with the high-activity allele (59,60). This is due to the diminished activity of MAO-A leading to decreased serotonin degradation and its accumulation in the brain, which has long been implicated in the psychopathology of autism (60,61). Such low MAO-A activity leading to serotonin accumulation is also expected to cause decreases in 5-hydroxyindoleacetate (5-HIAA) levels which, nevertheless, were not observed in our study. It was also shown that the activation of the kynurenine pathway (KP) of tryptophan degradation in neuroinflammation results in reduced serotonin synthesis from tryptophan and production of KP metabolites (62). Indeed, we have demonstrated the enhanced contents of QUIN and picolinate in ASD patients, which might indicate the overactivation of the kynurenine pathway of tryptophan catabolism.
The pathological levels of QUIN are associated with numerous neurological diseases: Alzheimer's disease, anxiety, depression, epilepsy, ASD, etc. (63,64). Moreover, generation of QUIN is thought to be the major link between the KP and the inflammatory response (65). The first enzyme of the KP, indoleamine 2,3-dioxygenase (IDO-1), is induced by various proinflammatory cytokines (63). So, in immune-activated states, IDO-1 may catabolize a large proportion of tryptophan, leading to a shortage for the serotonin-melatonin pathway. Also, the increased levels of QUIN in the brain could alter the excitation/inhibition ratio of the N-methyl-D-aspartate receptor, ultimately leading to excitotoxicity. Hence, QUIN may act as an endogenous excitotoxin that contributes to the pathogenesis of ASD, especially during neuroinflammation (60).
Indeed, the presence of a proinflammatory environment in ASD pathology is described in this paper. We have observed upregulation (in peripheral blood leukocytes) of TNF-α, which is known as one of the major proinflammatory cytokines involved in the pathogenesis of different diseases (64). At the same time, the level of the anti-inflammatory cytokine IL-10 was also increased, indicating the existence of a probable negative feedback loop (66,67). In addition, immune cell phenotype analysis showed elevation in T-cell (CD3, CD4, and CD8) counts and an increase in monocyte phagocytosis. In line with the latter, the analysis of respiratory burst (spontaneous and fMLP-induced) of peripheral blood monocytes in autistic patients revealed enhancement of spontaneous phagocytosis and unaltered fMLP-induced phagocytosis. It has been reported by different authors, and also by us, that the mitochondrial dysfunction observed in autistic children was accompanied by a lower oxidative burst in phorbol-12-myristate-13-acetate (PMA)-stimulated granulocytes (13,68), which is also confirmed by the current study. In monocytes obtained from peripheral blood of autistic patients, we did not observe the characteristic right shift of rhodamine 123 fluorescence upon induction by fMLP. This suggests that the persistent inflammation shown in ASD patients may lead to the depletion of respiratory burst capability in neutrophils and accumulation of damage and pathological changes resulting in disability and disease (69).
In this study we showed activation of adaptive immunity, manifested by an increase in T-cell count, and a slight downregulation of innate immunity, which, in turn, was expressed by a decrease in the level of NK cells. Interestingly, the elevation of T-cells was accompanied by an increase of TNF-α production in monocytes, macrophages (CD14+), and T-lymphocytes (CD3+). A systematic review by Mitchell and Goldstein documented preliminary evidence of the association of pro-inflammatory markers in almost 4,000 children with neuropsychiatric and neurodevelopmental disorders, including ASD (70). Like others, we suggest that a feasible mechanism for the role of inflammation in ASD is the violation of the functional integrity of the CNS by cytokines, thereby contributing to neuroinflammation (71). There are three main pathways for peripheral cytokines and their mediated signals to access the brain: humoral, neural, and cellular (72). In the humoral pathway, cytokines cross the BBB through leaky regions (choroid plexus and circumventricular organs); in the neural pathway, activated monocytes and macrophages stimulate primary afferent nerve fibers of the vagus nerve. For the cellular pathway, it is suggested that cytokines, principally TNF-α, could stimulate microglia to recruit monocytes into the brain (possibly via activation of the production of monocyte chemoattractant protein-1) (72,73). Thus, based on the literature data and our findings, it can be suggested that TNF-α produced by activated monocytes and macrophages initiates and/or mediates inflammation in the brain of ASD patients. We are inclined to favor the humoral mechanism of cytokine access to the brain in ASD; in this regard, the possible increase of BBB permeability in ASD (40) also supports our view. At the same time, we do not exclude that inflammation develops directly in the brain through glial cells, independently of peripheral events or in addition to them.
Oxidative stress resulting in overproduction of ROS is a well-known factor in the development of the inflammatory response (74). The role of ROS in the pathogenesis of ASD was discussed in our previous study (13). Here, we have focused on the factors leading to abnormal ROS production and chronic inflammation in ASD, such as disturbances of the energy production machinery, which probably lead to mitochondrial dysfunction and chronic inflammation in ASD patients.
Increased QUIN and picolinate observed in our study apparently indicate decreased activity of quinolinic acid phosphoribosyltransferase (QPRT). QPRT is an important enzyme in the kynurenine pathway (KP), which regulates intracellular NAD+ synthesis in human astrocytes and neurons. NAD+ levels are mainly dependent on KP metabolism and on indoleamine-pyrrole 2,3-dioxygenase (IDO) and QPRT regulation (75,76). Thus, the anticipated inhibition of QPRT should lead to the observed QUIN increase and NAD+ decrease. Altered QUIN levels could also result in altered NAD+ biosynthesis, which in turn may affect poly (ADP-ribose) polymerase (PARP) activity, leading to neuronal cell death (77). Moreover, it has been suggested that the QPRT protein acts as an inhibitor of spontaneous cell death by suppressing overproduction of active caspase-3. The inhibition of caspase-3 activity/synthesis or its posttranslational modification might have a pro-apoptotic effect and lead to neuronal cell death (78). It has also been demonstrated that the cell-autonomous generation of NAD+ via the KP regulates macrophage immune function in aging and inflammation. Isotope tracer studies revealed that macrophage NAD+, to a large extent, depends on the KP metabolism of tryptophan. Genetic or pharmacological blockade of de novo NAD+ synthesis results in NAD+ depletion, suppressed mitochondrial NAD+-dependent signaling and respiration, and impaired phagocytosis and resolution of inflammation, as discussed above (79).
It is known that NAD+ pools can modulate the activity of compartment-specific metabolic pathways, such as glycolysis in the cytoplasm and the citric acid cycle/oxidative phosphorylation in mitochondria (80). It is also well-known that the citric acid cycle is the main electron donor (in the form of NADH) for the mitochondrial respiratory chain. Any hindrance in the electron flow might bring about ROS overproduction, mitochondrial dysfunction, and oxidative stress. All these aberrations have been observed in autistic pathology. In order to evaluate the function of the citric acid cycle, we have measured the main analytes of this pathway and demonstrated increases in cis-aconitate, isocitrate, and α-ketoglutarate. This might indicate inhibition of the α-ketoglutarate-converting reaction catalyzed by oxoglutarate dehydrogenase (OGD). OGD catalyzes the conversion of α-ketoglutarate to succinyl-CoA, with reduction of NAD+ to NADH. OGD is a key regulator of the citric acid cycle and is inhibited by succinyl-CoA and NADH; it is considered a redox sensor in the mitochondria. An increased NADH/NAD+ ratio is associated with enhanced ROS production and inhibited OGD activity (81,82). Thus, diminished NAD+ de novo synthesis could lead to a lower NAD+/NADH ratio, mitochondrial dysfunction, and oxidative stress (83).

FIGURE 12 | Probable glance at inflammatory scenario development in autistic pathology. Lead intoxication, the effect of which is intensified by mutations of VDR-Taq and MAO-A, leads to a quinolinic acid increase, resulting in energy metabolism depletion and mitochondrial dysfunction, which is expressed in ROS overproduction. The latter are also known to be signal-trigger molecules which activate the T-cell-dependent immune response. Activated T-helpers produce pro-inflammatory cytokines (TNF-α), which leads to the activation of effector cells (macrophages), the development of a chronic inflammatory response, and persistent inflammatory signals. KP, kynurenine pathway; Q, coenzyme Q10.
We have also observed an HMG increase in autistic patients. HMG is a metabolite related to the energy production pathway and cholesterol synthesis. A high HMG level might indicate insufficient Coenzyme Q10 (CoQ10) production, resulting in the leakage of ROS (84). CoQ10 is considered an effective endogenously synthesized lipid-soluble antioxidant, which inhibits oxidative stress and an overactive inflammatory response by regenerating vitamin E or by quenching superoxides and other ROS. CoQ10 is a key component of the oxidative phosphorylation process of the mitochondrial respiratory chain. Apart from its antioxidative function, CoQ10 also appears to modulate immune functions (85)(86)(87).
Inflammatory markers, such as IL-1β and TNF-α, were shown to increase in the brains of many ASD patients (88).
Apparently, this could be compared with the enhanced TNF-α in monocytes and lymphocytes of autistic patients and the simultaneously enhanced IL-10 produced by monocytes and CD14+ cells only, as shown in the current study.
It can be suggested that activated inflammatory pathways, oxidative stress, mitochondrial dysfunction, and brain metabolic disorders are involved in ASD pathophysiology. We have attempted to address the question of whether there is any correlation between these pathways altered in ASD and which of them is more important in the development of this pathology. Correlation analyses suggest that the main factor in the development of ASD is T-helper (CD4+CD3+) lymphocytes. The increase in the levels of these cells strongly correlates both with etiological factors (VDR-Taq polymorphism and lead poisoning) and with pathogenic factors (HMG, QUIN, and ASAT). Based on this analysis, we propose the following scenario for the development of a chronic inflammatory response in autism. Lead intoxication, the effect of which is augmented by a mutation of VDR-Taq, leads to mitochondrial dysfunction and, therefore, ROS overproduction. The latter were shown to activate the T-cell dependent immune response (89).
Activated T-helpers produce pro-inflammatory cytokines (TNF-α), which lead to the activation of effector cells (macrophages) and the development of a chronic inflammatory response (Figure 12).
LIMITATIONS
Several limitations of the study should be noted. All metabolite measurements were done in blood and urine; it would have been preferable to measure some of them in the brain and CSF as well. However, due to obvious barriers to obtaining such material, we limited ourselves to the data obtained from the specified samples. Next, we acknowledge that our sample size was somewhat small and may have limited the power to detect subtle group differences and associations. However, blood sample collection in young children under the age of 6 years, and in particular in those with neurodevelopmental disorders, is an intractable issue, especially taking into account the small population of Armenia. In spite of the small sample size of the present study, the observed data might be generalizable due to the targeted statistical approach and the wide range of measurements, both of which enhance the statistical power of the study. On the other hand, based on our current findings, better-controlled studies with larger sample sizes have to be designed and conducted.
CONCLUSION
In summary, our study suggests a possible scenario of the development of autistic pathology (see Figure 12). Lead intoxication, as a potential etiological factor, and VDR-Taq and MAO-A polymorphisms, as potential risk factors, trigger ASD development. The aforementioned factors invoke disturbances in the functioning of such key cellular systems as the kynurenine pathway, the citric acid cycle, and the mitochondrial respiratory chain, leading to mitochondrial dysfunction and ROS overproduction. Our findings suggest that the latter brings about T-cell (CD4+CD3+) dependent immune system activation and a persistent chronic inflammatory response in ASD patients.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The deposited data can be found in the following repository: https://www.ncbi.nlm.nih.gov/SNP/snp_viewTable.cgi?handle=YSMU.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Yerevan State Medical University After Mkhitar Heratsi, Yerevan, Armenia. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
The Mediation Effect of Organizational Commitment in the Relation of Organization Culture and Employee Performance
The present study investigates the relation between organization culture and employee performance and the mediating role of organizational commitment. The research was conducted in a local public company that supplies clean water to the community. A non-probability sampling method with an accidental sampling technique was used to recruit 260 employees to participate in this study. Descriptive and verificative methods were used to analyze the relations between variables through hypothesis testing. The study found that organizational culture has a significant effect on employee performance, while organizational commitment proved to be a mediator in this relation. Organizations should develop a strong culture and commitment in order to enhance employee performance.

Keywords—Organizational Commitment; Organization Culture; Employee Performance
INTRODUCTION
According to [1], organization performance is greatly affected by human resources. It is also believed that human resources play an important role in helping the organization to thrive and win the competition. To boost its performance, an organization needs to drive its human resources to excel; all employees must maintain great performance. Although the state-owned water company (PDAM) in Purwakarta does not face actual competition, it still needs to serve its customers better. [2] argued that PDAM's performance in meeting customer demand is still low. This is exacerbated by the increasing population, which makes it more difficult to provide clean water. Based on research from [3], only 20% of the Indonesian population enjoys clean water from PDAM.
In Purwakarta the situation is no different. [4] conveys that in May PDAM service was down for 36 hours. In 2016 the same problems still existed [5]. To overcome these problems, the Head of Purwakarta Regency, Dedi Mulyadi, provided PDAM with Rp 33 billion to improve the facilities and infrastructure related to the water supply [6]. Furthermore, the Head of Regency also insisted that PDAM should improve its customer service, especially in the way it handles complaints [7].
Despite these enormous challenges, PDAM Purwakarta also has problems regarding its employee performance. One indicator which can be related to performance is the attendance and disciplinary report. According to the internal report, in 2015 there was a negative trend in employee absenteeism: the number of employees who did not come to work without notice increased 18.84% from the previous year. Another poor record was the number of employees on sick leave, which increased 66.9% from 2014. With such unsavory records, PDAM needs to improve its human resources performance. Moreover, the Head of Regency also emphasizes that PDAM should change the way it conducts business; it has to become more professional to catch up with customer demand.
The current study investigates the effect of organization culture on employee performance with the mediation of organizational commitment. Companies need to develop a strong culture which can provide a positive work environment to drive performance. Meanwhile, organizational commitment is also believed to have a positive impact on performance: highly committed employees tend to give their best at work. The study can hopefully help the company to improve employee performance by applying the right instruments.

A. Organization culture and organizational commitment

Corporate culture is considered a critical factor when an organization needs to enhance or pursue its goals and objectives. The core values inside the organization culture can guide employees to achieve more. Thus, we can say that the effectiveness of an organization might be influenced by organizational culture. Culture is usually brought into practice in terms of how management carries out planning, organizing, controlling, and evaluation. A strong corporate culture enables employees to easily understand the company's goals. Organization culture is a vital foundation in developing and sustaining commitment. The more employees feel they fit with the culture, the more they will work towards organizational goals, which then drives an increase in commitment. In a study in India, [8] used 200 middle-level executives from public and private organizations and found that a participative organization culture is related to identification-with-involvement types of organizational commitment. Meanwhile, [9] argued that in Hong Kong and Australia innovative and supportive cultures had positive effects on both job satisfaction and commitment. Furthermore, [10] conducted a study in Taiwan and found that organizational culture plays an important role in driving the level of job satisfaction and commitment.
Another study, using 1,838 employees in the US and China, by [11] reveals significant relations between perceived organizational culture and work attitudes. In particular, perceived constructive culture has a strong positive relationship with job satisfaction and organizational commitment. Further studies from Nigeria [12], Iran [13], [14], and Malaysia [15] also confirm the finding that there is a significant relationship between organizational culture and organizational commitment. Either as one dimension or in sub-dimensions, a strong organizational culture eventuates in higher organizational commitment. For this study we propose the first hypothesis as below:

H1: Organization culture will have a significant positive effect on organizational commitment.

B. Organization culture and employee performance

Nowadays organizations consistently face opportunities and challenges to thrive. One component which can affect an organization's capability is its employees, who are the key element. The success or failure of the organization is influenced by employee performance. One particular factor which can affect employee performance is organizational culture. Numerous studies have been done in search of the relationship between organizational culture and performance.
Nevertheless, organization culture still receives less attention than several other possible antecedents of employee performance. We contribute to this area by adding more references which discuss this relation. Social norms, rites, the way work should be done, and other specific and unique ways of each organization could affect employee performance. The culture is important as a basis for human resources practices. [16] argued that the culture of organizations has a significant positive impact on employees' job performance. A study in software houses in Pakistan found that employee participation is the most important factor for achieving organizational goals. A study from [17] in the Indian banking industry reveals that organizational corporate culture influences employee work performance and also the level of productivity of the organization. A positive relationship between organization culture and employee performance was also established in Kenya [18]. Another result from Kenya [19] found that organizational culture has a great influence on performance. Using sub-dimensions of organization culture, [20] in Somalia argued that competitive culture, entrepreneurial culture, and consensual culture statistically have significant and positive effects on employee performance. Further research from [21] claimed that a strong organizational culture, based upon the actions of managers and leaders, would help improve employee performance. Finally, [22], [23] both found a significant positive correlation between organizational culture and employee performance. It is vital to make organizational culture strong in order to enhance the job performance of employees. Based on the results from previous literature, we propose the second hypothesis:

H2: Organization culture will have a significant positive effect on employee performance.

C. Organizational commitment and employee performance

Employees are willing to give more to their jobs if they feel interested or committed. They will perform better and even exceed the standard. Such behavior will certainly have a positive impact on organizational performance. Strong organizational commitment has been regarded as critical to achieving higher performance. Enhancing organizational commitment among employees is an important aspect of performing better, since the success of an organization very much depends on the performance of its employees. We studied several previous academic articles that tend to reinforce the notion that organizational commitment has a significant positive relationship with employee performance. Research in Indonesia by [24] found that organizational commitment significantly influences employee performance, directly or indirectly through work satisfaction. A study in the oil and gas sector in Pakistan [25] revealed a positive relationship between organizational commitment and employees' job performance. Using employees from the educational industry as participants, Tolentino (2013) in Manila found that only affective commitment correlates significantly with job performance, while among administrative staff not a single commitment dimension is related to job performance. This result gives a new perspective: different types of jobs might reveal different results. Meanwhile, [26] revealed a positive relationship between organizational commitment and employee performance in the banking industry in Iran. Research from Indonesia using 115 employees in a district hospital [27] showed that organizational commitment has a positive and significant impact on employee performance. Other studies that support a positive influence came from [28] in Bali, [29] in Nusa Tenggara, Indonesia, [30] in Iran, and [31] in Pakistan. They all argued that the more committed employees are to their organization, the more they tend to foster their performance.
Indeed, organizational commitment has a positive significant effect on job performance. Contrary to the other results, [32] studied 274 Portuguese workers and argued that commitment components did not present significant predictive strength for employee performance. This is interesting since it proves that researchers cannot take just one result for granted. For our study, we prepare our hypothesis using the most common finding:

H3: Organizational commitment will have a significant positive effect on employee performance.
III. METHODOLOGY
A. Participants

PDAM Purwakarta consists of three regencies and has 550 employees in total. We distributed the questionnaire to 300 employees and received 260 responses (86.7% return rate). Participants were asked to complete the questionnaire in their office. 56% of participants were male and 44% were female, showing the balance between male and female in PDAM, which means most jobs can be done by both men and women. From the table below we can see that the dominant age group is between 30 and 40 years; people within this range are considered quite mature. Surprisingly, PDAM still has employees whose education is lower than senior high school (7%). The dominant education level is senior high school (vocational), which can be explained by the nature of the work in PDAM: most jobs do not require a higher education level. Most employees in PDAM are regular employees, which means they already receive full benefits from the organization. Most PDAM employees appear happy in their jobs, as indicated by the highest percentage (37%) having more than 5 years of service. Table 1 displays detailed information regarding the demographic aspects.

To test the significance of organizational commitment as a mediator, a bootstrapping approach using the PROCESS macro for SPSS 23 was used [33]. The bootstrapping approach is considered appropriate because it does not require normality assumptions about the sampling distribution, through the application of bootstrapped confidence intervals [34]. The macro for SPSS facilitates a fairly straightforward bootstrapping procedure. The tool indicates significant mediation effects if the upper-level and lower-level confidence intervals contain no zero value. Table 2 presents the model coefficients and other statistical information resulting from the mediation analysis obtained from the macro program [33] using SPSS. Organization culture had a significant positive effect on organizational commitment (p-value 0.000) and employee performance (p-value 0.000).
Organizational commitment had a significant positive effect on employee performance (p-value 0.000). This means hypotheses H1, H2, and H3 were all accepted. Employees in PDAM who perceive a stronger organization culture show higher commitment and perform better. Meanwhile, employees with higher organizational commitment also perform better. That is, management can use both culture and commitment to improve employee performance.
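The percentile-bootstrap logic behind this kind of mediation test can be sketched in a few lines. The following is an illustrative Python sketch on synthetic data, not the authors' actual PROCESS output; the variable names and effect sizes are assumptions chosen only to demonstrate the procedure of resampling cases, re-estimating the a and b paths, and checking whether the confidence interval for the indirect effect a×b excludes zero.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b in a simple
    mediation model: x -> m (path a), m -> y controlling for x (path b)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]             # slope of m on x
        design = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # m's coefficient given x
        estimates.append(a * b)
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return lo, hi

# Synthetic "culture -> commitment -> performance" data with a genuine indirect path
rng = np.random.default_rng(1)
culture = rng.normal(size=300)
commitment = 0.5 * culture + rng.normal(scale=0.8, size=300)
performance = 0.4 * commitment + 0.3 * culture + rng.normal(scale=0.8, size=300)

lo, hi = bootstrap_indirect_effect(culture, commitment, performance)
print(lo, hi)  # mediation is judged significant when this interval excludes zero
```

An interval that excludes zero corresponds to the LLCI/ULCI criterion the study reports for its mediation effect.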
C. Measurement
A questionnaire consisting of 89 items in total (28 items for organization culture, 15 items for job satisfaction, 14 items for organizational commitment, 12 items for work discipline, and 20 items for employee performance) was used to collect the data. Each item has 5 answer choices on a Likert scale: 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, and 5 = strongly agree. Cronbach's alpha for internal consistency reliability was .957 for organizational culture, .955 for job satisfaction, .897 for organizational commitment, .869 for work discipline, and .907 for employee performance.
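For readers unfamiliar with the reliability statistic quoted above, Cronbach's alpha is computed from the item variances and the variance of the summed scale score. The sketch below uses synthetic Likert-style data (the data and dimensions are assumptions for illustration, not the study's actual responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# 6 items sharing one underlying trait -> high internal consistency
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
scores = trait + rng.normal(scale=0.5, size=(500, 6))
print(round(cronbach_alpha(scores), 2))
```

Values above roughly .7 are conventionally read as acceptable reliability, which is why the scale reliabilities of .869–.957 reported here are considered strong.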
Advances in Economics, Business and Management Research, volume 117
Organizations should build a strong and positive culture which corresponds to their employees. After establishing a great culture, the process of socializing the culture must be carried out immediately with the right method. Then management has to embed the culture in every decision, policy, and action. Only with such continuous implementation will employees gradually understand and implement the culture. Understanding and implementation of organizational culture will strengthen organizational commitment, which in the end will affect performance. Table 3 shows the total, direct, and indirect effects of organization culture on employee performance. As we can see from the table, the total effect was bigger than the direct effect, which means organizational commitment had a significant positive mediation effect. This is corroborated by the lower-level confidence interval (LLCI) and upper-level confidence interval (ULCI), neither of which contains a zero value. In PDAM's case, employee performance can be improved using approaches related to culture and commitment. PDAM has already developed a strong culture, and the positive side is that employees perceive such culture as somewhat suitable for them. It is up to management to take advantage of this situation. Routine communication, good relationships between peers, good systems and work procedures, and the ways decisions are made and problems solved should be formalized. The formal way of doing things in PDAM can then become more useful if employees instill it in their work. When employees work with their heart, they become more committed, which is usually followed by an increase in performance.
The present study supports previous literature regarding the relation of organization culture, organizational commitment, and employee performance. Studies in India [8], Hong Kong and Australia [9], Taiwan [10], and the US and China [11] reveal significant relations between perceived organizational culture and work attitudes (organizational commitment). Our study also shows a significant and positive relation between culture and organizational commitment. In both western and eastern cultures, it has been shown that organizational culture can become a driver to boost organizational commitment.
Research discussing organizational culture and employee performance has often been done. [16] stated that organization culture has a significant positive impact on employees' job performance. Other studies which reveal the same conclusion were conducted in India [17], Kenya [18], [19], and Somalia [20]. Furthermore, the study from [21] also mentions that an organizational culture based on the actions of managers and leaders would help improve employee performance. The same notion is delivered by [22], [23]. Indeed, organization culture positively impacts employee performance. Previous research has also studied the relation of organizational commitment and employee performance. [24] found a significant influence of organizational commitment on employee performance. Studies from Pakistan [25], [31], the Philippines [35], Indonesia [27], [28], [29], and Iran [26], [30] found the same direction. A different result was presented by [32] (Cesario & Chambel, 2017), who studied Portuguese workers and found that commitment components did not predict employee performance.
IV. MANAGERIAL IMPLICATION
Public companies should be aware that the development of a strong culture drives organizational commitment, which in the end will have an impact on employee performance. The results suggest that managers have to set the way the organization does and conducts business. They then have to be the first to implement the organization's way before passing it on to all employees. For example, if they want a result-oriented and disciplined work environment, they should show seriousness. Only then can lower-level employees understand and take positive action to change and adjust to the new culture.
Management can rely on its human resources department to create programs to introduce the new culture. A set of training courses inside and outside the organization can help employees adjust more quickly. An outbound program can enhance the sense of cooperation between departments. Implementing rotation can also increase employees' awareness of different departments so they will not make negative judgments. The human resources department could propose other comprehensive programs which fit the organization's needs.
V. CONCLUSION
All three hypotheses were accepted: organization culture significantly and positively affects organizational commitment and employee performance, while organizational commitment has a significantly positive effect on performance. Employees in PDAM perceive that they fit with the culture the company developed; they also have high organizational commitment and perceive that their performance meets the organization's standard. The latter is debatable because employees answered the questions using self-evaluation. Management needs to discuss the differences to improve overall performance. They should investigate what causes customer complaints or any other deficiencies. From the results, PDAM could develop its culture to enhance commitment, in the end affecting performance. The results enrich the discussion regarding organizational culture, commitment, and employee performance.
Children of the Revolution: The Impact of 1960s and 1970s Cultural Identification on Baby Boomers' Views on Retirement
There is widespread speculation that baby boomers will make significant changes to the retirement landscape. Some attribute these changes, at least in part, to countercultural movements this generation pioneered during the sixties and seventies. However, empirical investigation into the long-term impact of countercultural identification in youth is scarce. To address this, our study examines associations between baby boomers’ retirement views and identification with counterculture. Using data from 6024 pre-retired Dutch older workers, we investigate whether greater identification with counterculture is associated with more active retirement views. Our results show that greater identification with counterculture is associated with more active retirement views, even when controlling for potential confounders. Beyond highlighting the diversity of the baby boom generation, these findings support the idea that (counter)cultural identity in youth has an impact across the life course and may therefore have implications for other key questions of life’s third age beyond retirement.
Introduction
The nineteen sixties and seventies witnessed a revolution in attitudes and cultural norms in terms of music, sexuality, drugs and politics (Braunstein & Doyle, 2002). These changes were driven by countercultural movements: the hippy movement, anti-war movements, civil rights movements, second wave feminism and the gay rights movement, amongst others (Chalmers, 2012). Those who came of age during this revolution, so-called baby boomers, are credited with breaking the mould of the traditional life course (Gilleard & Higgs, 2002) and radically changing societal norms, including marriage and living arrangements associated with the second demographic transition (Van de Kaa, 1987). Consequently, there has been widespread speculation about the legacy of this generation (Martin & Roberts, 2021) and whether they will also inspire wholesale changes to the next milestone they encounter, retirement (Harkin & Huber, 2004).
Accounting for the cultural climate in which they grew up may help us understand how and why baby boomers may retire differently than previous cohorts (Gilleard & Higgs, 2002). Hamilton and Hamilton (2006) posit that maturing during the unique period of cultural change of the sixties and seventies is responsible, at least in part, for the reformulation of retirement we see and expect amongst baby boomers. This suggestion fits with the notion that identification with youth culture can be highly influential in shaping identity and subsequent experiences (Kehily, 2007). However, others have called into question the very idea that baby boomers will reinvent retirement, positing instead that baby boomers are merely continuing changes to retirement instigated by their parents' generation (Chambré & Netting, 2018). Moreover, despite its intuitive appeal, links between sixties and seventies counterculture and perceptions of retirement remain unexplored. To address this, our study will investigate whether identification with countercultures of the sixties and seventies is linked to the retirement views found amongst baby boomers.
Our study focuses on the retirement views of near-retirement older workers. Understanding these individuals' perceptions of future events such as retirement is vital, as mental representations people form of who they could or should become may influence the way they organize and evaluate their actual and future development (Frazier et al., 2002). Differently put, how people picture retirement may shape motivation and behaviour in the present. Better understanding how individuals perceive and experience retirement may be beneficial not only in terms of predicting retirement behaviour and outcomes, but in guiding practitioners and policy makers seeking to help those navigating the retirement transition (Maggiori et al., 2014).
Various retirement typologies have been developed to explore and better understand perceptions and experiences of retirement. The most widely known are those of Hornstein and Wapner (1986) and Schlossberg (2004). Hornstein and Wapner (1986) identified four main retirement styles: Transition to old age/rest: where retirement involves slowing down and diminishing activity; New beginning: a new phase of life where retirees focus on their own needs and goals rather than those of others; Continuation: where retirement gives freedom to pursue existing interests and activities, including work, in a more relaxed, self-directed way; and Imposed disruption: where retirement is the loss of a highly valued activity. Schlossberg (2004) identifies five distinct retirement types. The Continuer and Adventurer styles match closely with the Continuation and New Beginning styles of Hornstein and Wapner (1986). The remaining styles are: Easy gliders who value the freedom offered by retirement; Searchers who are uncertain and indecisive around retirement; and Retreaters who tend to disengage from life entirely in retirement.
While these qualitative investigations of retirement views have moved the literature beyond an outdated one-size-fits-all view of retirement, there remains a dearth of quantitative literature surrounding these retirement typologies. Despite recent attempts to develop and validate empirical measures of retirement styles (e.g. Maggiori et al., 2014), little is known about the prevalence and distribution of various retirement views within the general population, nor factors associated with these views.
Our study aims to address the paucity of empirical investigations into both retirement views and the long-term impact of identification with counterculture. A large sample (N = 6024) of pre-retired Dutch older workers aged between 60 and 65 at the time of data collection (2015) were recruited for the current analysis. This study will be the first to empirically test the assumption of Hamilton and Hamilton (2006) that the experience of maturing in the 1960s and 1970s may shape the retirement views of the baby boom generation. In doing so, we will make several contributions to the literature. First, to the best of our knowledge, this study provides one of the few large-scale, prospective, empirical investigations of the retirement views of working baby boomers. Second, we take the novel approach of measuring identification with cultural movements of their youth, such as the hippy movement, feminism and anti-establishment. In doing so, we can investigate the impact of countercultural identity on retirement views. Approaching this generation's retirement views from a cultural rather than a more traditional cohort perspective offers a more nuanced and broader understanding of their impact on and relationship to retirement (Gilleard & Higgs, 2007). Additionally, this approach allows us to examine within-cohort differences along lines of cultural identification, an important strength given the tendency in previous literature to treat this diverse cohort as a homogenous group (Hughes & O'Rand, 2004).
Generation and Identity
The notion that we garner a sense of identity from our generation is pervasive in modern society (Willetts, 2010). Generations are usually viewed as birth cohorts, comprising a set of common experiences and historical and geographical locations (Kertzer, 1983). Alternatively, they may be viewed through a more cultural lens, with generations described as cultural constructs involving historical participation guided by individuals' consciousness (Mannheim, 1970). The latter approach, in which the idea of social generations is explored, has garnered increased research attention in recent years (Roberts & France, 2021). While some have argued for the superiority of a cultural approach over a cohort approach to understanding the retirement of the baby boom generation (Gilleard & Higgs, 2007), there appears at least to be consensus that shared experiences are crucial in shaping generations and generational identity. Those in our study shared the experience of coming of age during the 1960s and 70s, a period of profound social and cultural change. It is this experience of the post-war cultural revolution that is believed to have moulded the baby boomer generation and established their generational consciousness (Gilleard & Higgs, 2002). However, while members of this cohort all witnessed these societal changes, the extent to which they identified with and participated in countercultural movements driving these changes may vary widely (Weisner & Bernheimer, 1998). As such, the effects these countercultural identities can be expected to exert on individuals throughout the life course may differ substantially.
The Importance of Youth in Identity Formation
The view that the form identity takes during adolescence significantly impacts later life is common among social scientists (Kinney, 1993), and the belief that generational identity is formed in youth is why links between social change and generations are most commonly investigated through youth (Roberts & France, 2021). The complexity of the concept of identity makes empirical research on the subject challenging (Stryker & Burke, 2000). However, identity is generally believed to emerge and solidify in adolescence (Klimstra & van Doeselaar, 2017). Support for the notion that youth is central in identity development can also be inferred from cognitive neuroscientific research, which finds that functional and structural changes in the brain between the ages of 10 and 20 may reflect a sensitive period for adaptation to an individual's social environment (Blakemore & Mills, 2014). That said, meta-analytic evidence indicates that identity is by no means solidified by adolescence or early adulthood, and identity development generally strengthens with age (Kroger et al., 2010). This is echoed in calls to extend the period we view as crucial in the life course to include much of the twenties, or 'emerging adulthood' (Wood et al., 2018).
Youth Cultural Identity and the Life Course
A common assumption in popular culture, found also in academic literature, is that underpinning changes made by baby boomers to the life course thus far is a desire to remain youthful. The countercultural movements they witnessed or engaged in were cultures of youth. So, when faced with what these countercultures opposed, the trappings of age, this generation preferred to ignore or redefine what it meant to get older (Gilleard & Higgs, 2007). Many believe this pattern will continue as boomers reach the third age, with some commentators predicting that their desire to avoid traditional notions of old age will usher in an era of more active retirement in terms of lifestyle, consumption and participation in the labour force (Harkin & Huber, 2004). Others more directly link this reformulation of retirement to attachment to youth culture stemming from the countercultural movements of the sixties and seventies (Hamilton & Hamilton, 2006). However, the notion that baby boomers will truly reinvent the retirement landscape, or are the sole arbiters of these changes, is not universally accepted. Others believe changes they may bring to retirement are more accurately viewed as a continuation of a long-term shift in retirement initiated by the preceding generation (Chambré & Netting, 2018).
Indeed, while links between the cultural aspects of their youth to baby boomers' retirement hold intuitive appeal, empirical investigations on the impact of youth identity later in the life course remain scarce. Weisner and Bernheimer (1998) followed 254 US-based families from the mother's third trimester of pregnancy (1974-1975) to when the children were in their late teens (1993-1994). Through a series of interviews and home observations, they investigated the impact of countercultural identities in youth on parenting and beliefs in midlife. They found identification with 1960s counterculture did have some impact on parenting, child outcomes and parents' beliefs, with consistently high identification with counterculture over time in the parents even associated with a protective effect against substance abuse problems and school related issues amongst their offspring.
Another study by D. E. Sherkat (1998) investigated the impact of countercultural (protest) participation in the 60s and 70s, along with life-course factors and traditional agents of socialization, on religious beliefs and practices. They found that countercultural participation had a significant negative effect on participants' (N = 1034) religious beliefs over the 17-year study period (1965-1973). While life-course factors and agents of socialization were found to have relatively stronger effects on religious orientation, the results nevertheless indicate the long-term impact of countercultural identification. Therefore, while these works, and subsequent follow up studies (Weisner, 2001), did not investigate the impact of youth countercultural identity on retirement specifically, they nevertheless lend credence to the idea that countercultural identities developed in the 1960s and 1970s may shape the lives of baby boomers in later life.
In marketing and consumer psychology research, evidence has been found indicating that an individual's past identity can impact their pattern of behaviour during retirement. Through a series of semi-structured interviews with US retirees (N = 65), Schau et al. (2009) investigated identity-related consumption patterns amongst retirees. They found identity-related activities and projects were put on hold after adolescence, but often re-emerged in retirement. This 'identity renaissance' was often cited by participants as shaping their behaviour and consumption in old age. However, while youth identities played a role in the consumption pattern of some retirees, this effect was not universal, with new consumption patterns and identities also emerging alongside or instead of the revival of identities from youth.
Recent studies have also highlighted the importance of the baby boom generation's values in influencing behaviours in later life such as the provision of childcare for grandchildren (Airey et al., 2021). However, links between these values and (counter)cultural experiences in youth were not explicitly made, nor was retirement or views surrounding it a central focus of the work. Therefore, while these studies can be seen as offering preliminary support for the idea that past experiences, identities and values are important in retirement, they also highlight the need for additional investigation into whether, and to what extent, countercultural identification impacts the retirement of members of the baby boom generation.
Countercultures of the 1960s and 1970s
This lack of empirical investigation into the long-term impact of identification with countercultures of this era is hardly surprising given the amorphous nature of the countercultural movements themselves. While the hippy movement may loom largest in public consciousness, perhaps owing to their distinct aesthetic (Moretta, 2017), this movement alone does not define the era. Protest culture, from civil rights marches to anti-war and environmental protests (D. S. Sherkat & Blocker, 1993), was a key feature of the age. The label anti-establishment also emerged in the sixties to describe a variety of groups opposed to the prevailing societal institutions and values. This era also witnessed a growing drug culture, with rapid increases in legal and illegal substance use across Western societies (Marchant, 2013), and the proliferation of alternative lifestyles such as non-marital cohabitation and homosexual relationships (Rubin, 2001). Calls for women's rights grew louder too, with the rise of second wave feminism and the women's liberation movement (Freeman, 1973). The sixties (Marwick, 2011), and to a larger extent seventies, are often associated with increased individualism. So much so that the latter has been dubbed the 'Me Decade' in some quarters (Wolfe, 1976). Thus, rather than a unified entity, the counterculture of the time was made up of varied subcultures that permeated and transformed society (Marwick, 2011).
Our study will investigate the association between identification with these countercultural movements and the retirement views of near-retirement baby boomers (born 1950-1955) in the Netherlands. Given previous assertions that members of this generation wish to maintain their youthful identity and eschew the idea of growing old and inactive (Harkin & Huber, 2004), we hypothesize that greater identification with counterculture will be linked to more 'active' retirement views such as the New Beginning and Continuer retirement views (hypothesis 1a). In contrast, we expect less identification with counterculture to be associated with the more 'inactive' Freedom From Work retirement view, and the more avoidant Searcher and Retreater retirement views (hypothesis 1b).
Design and Method
Data. Data in the current study was taken from the first wave of the NIDI Pension Panel Study (NPPS), a prospective cohort study conducted in 2015 (Henkens et al., 2017). Participants were drawn from the three largest Occupational Pension Funds in the Netherlands. A stratified sampling procedure based on organizational size and sector was used. Within this stratified sample, participants were randomly drawn from those who were aged 60-65 and worked at least 12 hours per week (N = 15,480). Participants received a hardcopy of the questionnaire from their pension fund provider but could also choose to complete the questionnaire online. A reminder letter was sent to participants 2 weeks following the start of data collection, with another reminder sent 6 weeks later to those who still had not completed the questionnaire. A total of 6793 completed questionnaires were returned, giving a response rate of 44 percent. Participants who received a shortened version of the questionnaire with some key independent variables omitted (n = 499), and those who failed to respond to the dependent variable (n = 270), were excluded from the sample. This left a total of 6024 participants for further analysis. Item non-response was generally low (average of 3.09%), ranging from 0 to a maximum of 7.98% for our measure of wealth. Given the relatively high percentage of missing data in some variables we will use multiple imputation by chained equations (MICE) in which 20 imputed datasets are generated. Estimates reported will be obtained using the mi estimate command (Stata Version 16) to control for variability between imputations when reporting coefficients and standard errors.
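The pooling step described above, in which coefficients and standard errors are combined across the 20 imputed datasets (as Stata's mi estimate does), follows Rubin's rules. A minimal sketch in Python, with purely illustrative numbers rather than values from the study:

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Pool one coefficient across m imputed datasets using Rubin's rules.

    estimates -- point estimates of the coefficient, one per imputation
    variances -- squared standard errors, one per imputation
    Returns (pooled_estimate, pooled_standard_error).
    """
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    w = mean(variances)              # within-imputation variance
    b = variance(estimates)          # between-imputation (sample) variance
    total_var = w + (1 + 1 / m) * b  # Rubin's total variance
    return q_bar, total_var ** 0.5

# Hypothetical results for one coefficient from 5 imputed datasets
# (invented for illustration; the study used 20 imputations).
est = [0.52, 0.55, 0.53, 0.56, 0.54]
se2 = [0.060 ** 2] * 5
coef, se = pool_rubin(est, se2)
print(round(coef, 3), round(se, 4))
```

The pooled standard error exceeds the average per-imputation standard error because the between-imputation term reflects the extra uncertainty introduced by the missing data.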
Measures
Dependent variables. Building on the retirement styles outlined in the work of Hornstein and Wapner (1986) and Schlossberg (2004), we developed a brief measure of identification with retirement views, the Short Measure of Retirement Views. Participants were asked to choose which of the following descriptions of retirement suited them best: (a) 'Retirement means enjoying the fact that you are no longer working'; (b) 'Retirement is something I'd rather not think about'; (c) 'Retirement means that you finally have time to develop yourself and learn new things'; (d) 'Retirement means continuing work activities, but at a slower pace' or (e) 'Retirement is still unknown ground for me'. Participants' selection on this variable was taken as indicative of their preferred retirement view. These views were subsequently labelled Freedom From Work, Retreater, New Beginning, Continuer and Searcher.
Independent variable. The primary independent variable in our analysis is identification with counterculture. Participants were asked 'To what extent did you identify with the characteristics of the 1960/70s in your youth?' Respondents then indicated on a scale from 1 (a lot) to 4 (not at all) how much they identified in their youth with (1) hippy culture, (2) protest culture, (3) individualism, (4) feminism, (5) drugs culture, (6) anti-establishment culture and (7) alternative lifestyles. These items were reverse coded so that a higher number indicated a greater identification with this aspect of counterculture. These items were then combined to form a countercultural identity scale, which showed good reliability and internal consistency (α = .81). Factor analysis further confirmed these items best followed a unidimensional structure.
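The two scale-construction steps described here, reverse coding the 1-4 items and checking internal consistency with Cronbach's alpha, can be sketched as follows. The responses below are invented for illustration and use only 3 of the 7 items; they are not study data and do not reproduce the reported α = .81.

```python
from statistics import pvariance

def reverse_code(score, scale_min=1, scale_max=4):
    """Flip a 1-4 item so that a higher value means stronger identification."""
    return scale_max + scale_min - score

def cronbach_alpha(items):
    """items: list of per-item score lists, same respondents in each list."""
    k = len(items)
    item_vars = sum(pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical raw responses (1 = a lot ... 4 = not at all) from 4
# respondents on 3 counterculture items.
raw = [[1, 2, 4, 3],
       [2, 2, 3, 4],
       [1, 3, 4, 3]]
coded = [[reverse_code(s) for s in item] for item in raw]
print(round(cronbach_alpha(coded), 3))
```

Because every item is reversed in the same direction, alpha is unchanged by the recoding; the recoding only makes the scale's direction interpretable (higher = stronger identification).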
Control variables. Several demographic and individual characteristics of participants were included as control variables.
Participants were asked to state their gender (Male or Female), and their partner status (Living alone or living with partner). Participants were then asked to indicate the highest level of education they attained from a list ranging from 1 (primary school) to 7 (university degree). Education levels were based on the International Standard Classification of Education (ISCED). Based on previous work (Idler & Benyamini, 1997), subjective health was measured by asking participants 'How would you characterize your health in general?' Answers were given on a scale of 1 (excellent) to 5 (very poor). This scale was then reverse coded so that a higher score indicated higher subjective ratings of health. Wealth was assessed by asking participants to estimate how large their total wealth (including own house, savings and stocks minus debts/mortgage) was, choosing from categories ranging from 1 (less than €5000) to 7 (more than €500,000). Wealth was subsequently categorized into low (1, 2, 3), moderate (4, 5) and high (6, 7) levels of wealth. Participants were also asked to rate how stressful, and how physically demanding their current jobs were. Ratings for both items were given on a scale from 1 (very) to 4 (not at all). These measures were subsequently reverse coded for ease of interpretation. These measures were adopted from the Study on Transitions in Employment, Ability and Motivation (van Vegchel et al., 2004).
Additionally, we measured participants' retirement self-efficacy using the following questions: 'I can handle whatever comes my way in retirement' and 'I will definitely realize the plans I make for retirement'. Participants responded on a five-point scale ranging from 1 (completely agree) to 5 (completely disagree). These two items were then reverse coded so that a higher score indicated greater retirement self-efficacy and combined to form a retirement self-efficacy scale (α = .53, r = .37). Finally, we measured participants' future time perspective using items primarily drawn from earlier work by Jacobs-Lawson and Hershey (2005), asking participants to rate on a scale of 1 (completely agree) to 5 (completely disagree) how much they agreed with the following statements: 'It is important to take a long term perspective', 'I enjoy thinking about how I will live years from now in the future' and 'I pretty much live on a day-to-day basis'. We then reverse coded the first two items so that higher scores on each of the items would indicate a greater future orientation. The three items were combined to form a single future-time-perspective scale (α = .60) that was treated as a continuous measure.
The means, standard deviations and correlations for all the independent and control variables included in our analyses can be found in Table 1. The results of this correlation matrix indicate that our key independent variable, identification with counterculture, is associated with education. Better educated individuals have a stronger identification with counterculture than those with less education.
Analysis
Following the exploration of descriptive statistics, multinomial logistic regression analysis (MNL) (Stata Version 16: mlogit) was used to investigate the impact of counterculture on retirement views. MNL was selected as the nominal categorical nature of our dependent variable makes analyses using other traditional statistical methods, such as multiple regression, inappropriate for use (Zickar & Gibby, 2003). The dependent variable in the current analysis, retirement view, has five categories: 'Freedom From Work', 'Retreater', 'New Beginning', 'Continuer' and 'Searcher'. Contrasts of predictive effects on the dependent variable were created comparing four of the categories in the MNL analysis to the baseline category 'Freedom From Work'. Thus, the impact of counterculture was tested for (i) 'Retreater' against 'Freedom From Work', (ii) 'New Beginning' against 'Freedom From Work', (iii) 'Continuer' against 'Freedom From Work' and (iv) 'Searcher' against 'Freedom From Work'. Freedom From Work was chosen as the baseline category given that it represents the most traditional notion of retirement and was the largest category. We estimated two MNL models. The first (Model 1) estimated the impact of our key independent variable countercultural identity on identification with retirement views, without controlling for possible confounding factors. The second MNL model (Model 2) investigated the association between countercultural identity and retirement view, this time with the inclusion of control variables. Clustered standard errors (Stata version 16: vce (cluster)) were used to control for the nesting of participants within organizations. All analyses were conducted using Stata version 16.1.
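The baseline-contrast logic of the MNL model can be sketched in plain Python: each non-baseline category gets its own linear index, the baseline's index is fixed at zero, and a softmax turns the indices into category probabilities. The intercepts below are invented for illustration, and the single-predictor setup only loosely echoes the structure of Model 1; it is not a reproduction of the fitted model.

```python
from math import exp

def mnl_probs(x, betas):
    """Predicted category probabilities in a multinomial logit.

    betas maps each non-baseline category to (intercept, slope) for a
    single predictor x; the baseline category's linear index is 0.
    """
    scores = {"Freedom From Work": 0.0}  # baseline category
    scores.update({cat: b0 + b1 * x for cat, (b0, b1) in betas.items()})
    denom = sum(exp(s) for s in scores.values())
    return {cat: exp(s) / denom for cat, s in scores.items()}

# Hypothetical coefficients (intercepts invented; slopes chosen only to
# mirror the sign pattern reported for counterculture).
betas = {"New Beginning": (-1.5, 0.54),
         "Continuer": (-1.2, 0.39),
         "Searcher": (-1.0, 0.20),
         "Retreater": (-2.5, -0.13)}
low = mnl_probs(1.0, betas)   # weak countercultural identification
high = mnl_probs(4.0, betas)  # strong countercultural identification
print(high["New Beginning"] > low["New Beginning"])  # True
```

Under this parameterization each slope is a log odds ratio relative to Freedom From Work, which is why the reported β coefficients are read as contrasts against that baseline view.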
Impact of Counterculture on Retirement View
Results of the multinomial logistic regression analyses investigating the association between countercultural identity and retirement view are presented in Table 3. In these analyses, the likelihood of identification with the four retirement view categories (New Beginning, Retreater, Continuer and Searcher) is calculated relative to the base category Freedom From Work. Model 1 outlines the association between identification with counterculture and retirement view without the inclusion of control variables. In keeping with hypothesis 1a, we found a significant positive association between identification with counterculture and more active retirement views such as New Beginning (β = .54, SE = .06, p < .001) and Continuer (β = .39, SE = .07, p < .001), compared to those who viewed retirement as Freedom From Work. Contrary to hypothesis 1b, a significant positive association was also found between counterculture and the Searcher (β = .20, SE = .08, p < .001) retirement view, with no significant association found between counterculture and the Retreater retirement view (β = -.13, SE = .16, p = .380).
Controlling for Confounders
Model 2 (Table 3) shows the results of the second MNL analysis investigating the association between counterculture and retirement view with the inclusion of control variables. As with Model 1, the likelihood of identification with the four retirement view categories in this analysis is calculated relative to the base category Freedom From Work. With respect to control variables, women were more likely to identify with the Retreater, New Beginning and Searcher retirement views than men. Compared to less educated individuals, better educated individuals were significantly more likely to identify with the New Beginning and Continuer retirement views. Regarding partner status, those living alone were more likely to select New Beginning and Searcher retirement views than those cohabiting with a partner. Higher subjective health was associated with greater likelihood of the Continuer and the Searcher retirement view. Moderate levels of wealth reduced the likelihood of identifying with the Retreater retirement view compared to those with lower levels of wealth, while the likelihood of selecting the Continuer view increased amongst those with the highest levels of wealth. Greater job stress was negatively associated with the Continuer or Searcher retirement view. Those who reported greater job physicality were less likely to fall under the Retreater view. Higher retirement self-efficacy was negatively associated with the Retreater, Continuer and Searcher styles. Finally, those with greater future orientation were less likely to fall under the Retreater, Continuer or Searcher view but were more likely to view retirement as a New Beginning.
Regarding the relationship between counterculture and retirement view, the results of Model 2 echo those observed in Model 1, with counterculture remaining a statistically significant predictor of retirement views despite controlling for potential confounders. The more participants identified with counterculture, the more likely they were to identify with the New Beginning, the Continuer and, to a lesser extent, the Searcher retirement view when compared to the baseline category Freedom From Work.
To further illustrate the relationship between counterculture and retirement view, we computed the predicted values of participants' likelihood to select each of the five retirement views across levels of countercultural identification both without (Figure 2(a)) and with (Figure 2(b)) the inclusion of control variables. Figure 2(a) shows that those with greater identification with counterculture are much less likely to view retirement as a life phase of Freedom From Work. While those with the lowest scores on the counterculture variable have a 60% likelihood of seeing retirement as a phase of rest and relief from work, this percentage is much lower (30%) among those with a strong identification with the counterculture. The opposite pattern is observed for the category New Beginning, with the probability of identification with this retirement view increasing sharply with greater levels of countercultural identification. Figure 2(b) illustrates that a similar, albeit less pronounced, pattern of association between counterculture and retirement view is observed with the inclusion of control variables.
Discussion
The retirement of the baby boom generation has prompted much speculation about possible changes they may make to life's third age, including the notion that the cultural revolution they witnessed in youth has been instrumental in shaping their life course (Gilleard & Higgs, 2002), and retirement views (Hamilton & Hamilton, 2006). In line with our hypotheses, our results support the idea that countercultural identification is associated with retirement views. Those who identified more with countercultural movements in their youth were more likely to identify with active retirement views such as the New Beginning and Continuer retirement views. Similarly, countercultural identification was negatively associated with the more inactive, traditional view of retirement as a time to enjoy no longer working. From a theoretical perspective, our results support not only the specific idea of the importance of the culture of the sixties and seventies in the retirement views of boomers (Gilleard & Higgs, 2002), but more broadly, the importance of identity formed in youth and its enduring impact on the life course (Kehily, 2007; Kinney, 1993).
That those who identified more with counterculture in their youth are more likely to envision a more active retirement could indicate that those who were at the forefront of countercultural movements in the sixties and seventies may be trailblazers to this day, and may be forerunners of the transformation of retirement expected from the baby boom cohort. A key question emerging from this finding is how enduring any changes to retirement brought about or sustained by members of this generation are likely to be. If identification with countercultural movements of their youth is associated with baby boomers' retirement views, should we expect their views, and any subsequent changes to retirement, to endure in succeeding generations not exposed to the same cultural climate? We posit, in line with second demographic transition theory (Van de Kaa, 1987) and diffusion of innovation theory (Rogers & Shoemaker, 1971), that those who identified strongly with the countercultures of the sixties and seventies represent early adopters of lasting social change which is likely to continue to ripple outward. While it is beyond the scope of the current work to ascertain whether changes baby boomers may make to retirement truly represent innovations or are adopted, at least in part, from their predecessors (Chambré & Netting, 2018), we believe that any such changes may likely mark a cultural shift in the field of retirement. Countercultural identification may also be associated with other domains of older adulthood in which values play a role. It may therefore be interesting to examine baby boomers' attitudes towards topics such as old age more generally, assisted living and other long-term supports and services (Robison, Shugrue, Fortinsky, & Gruman, 2013), and attitudes towards death and dying (Yi & Hong, 2016) through the lens of youth (counter)cultural identity.
Some additional insights emerging from our study relate to other factors that may be associated with retirement views. Although included primarily as a control, our results indicate that education may be a strong predictor of some retirement views, with better educated individuals more likely to view retirement as a new beginning and a time for self-development, or as a time to continue their work at a slower, more self-directed pace. These findings are in line with previous works finding a strong educational gradient in views on retirement (Henkens & van Solinge, 2021). Given that educational level is likely to be higher in future cohorts, the trend towards retirement as a time of learning and self-development is likely to continue to grow.
Our study has several strengths. First, to the best of our knowledge, this is the first large-scale quantitative study exploring the long-term impact of identification with countercultures on retirement. Second, not only does our study investigate the previously understudied retirement views of baby boomers, but we examine these through a cultural lens rather than the traditional cohort approach. We believe this novel cultural approach offers a more well-rounded understanding of retirement views as well as capturing the diversity of this cohort; both in terms of retirement views and identification with counterculture, and along demographic and social lines.
While our study makes several contributions to the literature, it is not without limitations. No information regarding other potentially relevant psychological predictors of retirement views, such as personality factors, was available. Furthermore, our study investigates the impact of countercultural identity on retirement views in one specific country context. Given disparities between retirement systems and pension schemes, and the heterogeneity in countercultural movements worldwide (Marwick, 2011), it is plausible that findings may vary in other countries with a less generous retirement system, or that underwent differing levels of societal and cultural change in the sixties and seventies. Additionally, our independent variable is retrospective in nature. The broad scope of this variable, measuring identification with and connection to particular cultural movements, means it is likely to rely on semantic aspects of autobiographical memory. While this may make it less vulnerable to inaccuracies than more specific, episodic, aspects of autobiographical memory (St Jacques & Levine, 2007; Tulving, 2002), it is nevertheless possible that the retrospective nature of this measure may have some impact on its accuracy. Finally, the data in this study are cross-sectional, limiting the causal inferences that can be drawn from the results.
The prospect of baby boomers reaching old age has often been met with trepidation by researchers, clinicians, and policy makers alike (Knickman & Snell, 2002). However, understanding the underlying views and values of this generation may be crucial in understanding their transition to later life and how successful this will be (Huber & Skidmore, 2003). To this end, our study has provided important initial insights into how baby boomers view their retirement, and possible links between these views and their unique cultural upbringing during the sixties and seventies. What remains to be seen, however, is whether, and to what extent, the baby boom generation will change the nature of retirement as an institution, and what repercussions these changes may have for generations to come.
Note
To gain additional insight into the relationship between countercultural identification and education, we examined the correlation between the individual items making up our countercultural measure and education level. The following correlation coefficients were reported between educational level and individual elements of our countercultural measure: hippy movement (r = .10), protest generation (r = .29), individualism (r = .14), feminism (r = .26), drugs culture (r = .01), anti-establishment (r = .25) and alternative lifestyles (r = .20). Note. Model 2 includes control variables used in the primary analysis, but their coefficients are omitted here for the sake of brevity.
KSHV Induction of Angiogenic and Lymphangiogenic Phenotypes
Kaposi's sarcoma (KS) is a highly vascularized tumor supporting large amounts of neo-angiogenesis. The major cell type in KS tumors is the spindle cell, a cell that expresses markers of lymphatic endothelium. KSHV, the etiologic agent of KS, is found in the spindle cells of all KS tumors. Considering the extreme extent of angiogenesis in KS tumors at all stages, it has been proposed that KSHV directly induces angiogenesis in a paracrine fashion. In accordance with this theory, KSHV infection of endothelial cells in culture induces a number of host pathways involved in the activation of angiogenesis, and a number of KSHV genes can themselves induce angiogenic pathways. Spindle cells are phenotypically endothelial in nature, and therefore activation through the induction of angiogenic and/or lymphangiogenic phenotypes by the virus may also be directly involved in spindle cell growth and tumor induction. Accordingly, KSHV infection of endothelial cells induces cell-autonomous angiogenic phenotypes that activate host cells. KSHV infection can also reprogram blood endothelial cells to lymphatic endothelium. However, KSHV induces some blood endothelial cell-specific genes upon infection of lymphatic endothelial cells, creating a phenotypic intermediate between blood and lymphatic endothelium. Induction of pathways involved in angiogenesis and lymphangiogenesis is likely to be critical for tumor cell growth and spread. Thus, induction of both cell-autonomous and non-autonomous changes in angiogenic and lymphangiogenic pathways by KSHV likely plays a key role in the formation of KS tumors.
KAPOSI'S SARCOMA AND KSHV
Kaposi's sarcoma (KS) was first described in the 1800s as a rare, fairly indolent tumor of specific populations. This form of KS, now referred to as classic KS, usually presents on the skin of the lower extremities of elderly men of specific European regions and religious origins. In the middle of the twentieth century KS became endemic in parts of Africa and is currently one of the most common tumors in parts of central Africa (Wabinga et al., 1993). In the late twentieth century KS became one of the first AIDS defining illnesses and is the most common tumor of AIDS patients world-wide. AIDS associated KS is generally far more aggressive than classic KS, arising on the skin in many parts of the body as well as in the oral cavity and can occur on internal organs where it is often fatal.
While they differ in aggressiveness, all forms of KS are relatively indistinguishable at the histological level. Grossly, the tumors have a characteristic red to purple hue, indicative of the high vascularization of the tumor. Histologically, the tumor exhibits large vascular slits lined by flattened endothelium; the slits are often, but not always, filled with blood cells. There are discernable levels of extravasated red blood cells and infiltrating lymphocytes in the tumor. While a number of cell types are present, the tumor is predominantly made up of elongated spindle cells. The spindle cells express endothelial cell markers on their surface including CD31 and CD34, but express low levels of factor VIIIRa (Russell Jones et al., 1995). Recent expression data and array studies have found that spindle cells most closely resemble lymphatic endothelium, expressing VEGF receptor 3 (VEGFR3), a specific marker of lymphatic endothelial cells (Jussila et al., 1998;Skobe et al., 1999;Weninger et al., 1999;Wang et al., 2004a). Other lymphatic endothelial cell specific markers, including LYVE-1, podoplanin, and Prox-1, are also expressed by the spindle cells (Carroll et al., 2004;Hong et al., 2004;Wang et al., 2004a).
Based on the epidemiology and the multicentric nature of the tumor, KS was predicted to have an infectious cause (Beral et al., 1990). In 1994, KSHV was discovered in association with all KS tumors and is now considered to be the etiologic agent (Chang et al., 1994). KSHV was the eighth human herpesvirus discovered and is subclassified as a gamma herpesvirus. Like all herpesviruses, KSHV has both a lytic and a latent life cycle. During lytic replication, all of the more than 90 viral genes are expressed, the virus replicates rapidly, produces infectious virions, and ultimately causes cell death, likely due to a combination of host cell shut-off and virus production. During viral latency in endothelial cells, a limited number of genes are expressed from a single locus. This locus includes LANA, vCyc, vFLIP, a family of proteins from a repeat region called the Kaposins, and 12 pre-microRNAs encoding 17 or more mature miRNAs. These latent genes are responsible for the maintenance of the latent viral episome as well as the survival of latently infected cells.
In later-stage KS tumors, all of the spindle cells maintain infection with KSHV (Boshoff et al., 1995;Staskus et al., 1997;Dupin et al., 1999). As expected, the virus is predominantly found in the latent state in spindle cells, where the limited number of latent genes and miRNAs are expressed (Staskus et al., 1997;Marshall et al., 2007). However, approximately 1-5% of the spindle cells express lytic viral genes and produce infectious virus. In addition to spindle cells, KSHV is also found in other cell types in the KS lesion, including monocytes (Blasig et al., 1997). However, only a small proportion of these cells are infected in the tumor. KSHV can only sporadically be detected in the endothelium lining the vascular slits in the KS tumor (Dupin et al., 1999). KSHV is also associated with two lymphoproliferative diseases: primary effusion lymphoma (PEL), a pleural cavity solid B-cell lymphoma, and plasmablastic multicentric Castleman's disease, a lymph node B-cell growth (Cesarman et al., 1995;Soulier et al., 1995).
Because KS is an angioproliferative disease and the KS tumors are highly vascularized even at early stages, it has been proposed that KSHV may directly induce angiogenesis. Angiogenesis is a tightly regulated process. Endothelial cells of the vascular system are normally maintained in a quiescent, non-proliferating state. However, during solid tumor formation, the secretion of proangiogenic cytokines by tumor cells can activate nearby endothelial cells to induce new blood vessel formation. Many of the vascular slits identified in histological sections of early stage KS lesions are lined by uninfected endothelium, suggesting they are formed by endothelial cells activated in a paracrine fashion (Dupin et al., 1999). These uninfected cells may later become infected, as KSHV has, in some cases, been detected in the cells surrounding the vascular spaces of later-stage nodular KS (Boshoff et al., 1995;Dupin et al., 1999).
Despite the evidence for paracrine activation of uninfected endothelial cells, KSHV also likely activates infected endothelial cells in an autocrine or cell autonomous fashion. Because KS spindle cells are endothelial in origin, induction of the KS tumor cell is similar to the processes of angiogenesis. Many of the characteristics of activated endothelial cells and angiogenesis are also associated with oncogenesis, including proliferation, migration, and metalloprotease expression. These same phenotypes are induced in KSHV-infected endothelial cells. This review discusses the recent evidence that suggests that (1) KSHV promotes continual neovascularization through paracrine factors and (2) KSHV may drive tumor cell growth through autocrine and cell autonomous activation of angiogenic phenotypes.
PARACRINE INDUCTION OF ANGIOGENESIS BY KSHV
The vascular endothelial growth factor (VEGF) family of cytokines plays a prominent role in regulation of angiogenesis (Breen, 2007). VEGF-A and its receptors are required for embryonic vascular development and are important for vascular permeability, proliferation, and survival of newly formed vasculature. Several studies have explored the role of VEGF-A in KS pathogenesis. VEGF-A expression is detected in spindle cells of KS lesions, and its secretion is known to be increased by inflammatory cytokines that are present in the KS lesions (Samaniego et al., 1998). VEGF-A is also expressed by KSHV-infected PEL cell lines and conditioned media from these cells is sufficient to induce capillary morphogenesis by endothelial cells (Liu et al., 2001;Akula et al., 2005;Subramanian et al., 2010). Infection of endothelial cells with KSHV directly leads to increased expression of VEGF-A (Masood et al., 2002;Sivakumar et al., 2008;Wang and Damania, 2008). Further, KSHV conditioned media has been shown to regulate angiogenic phenotypes in endothelial cells (Sharma-Walia et al., 2010). Therefore, KSHV induction of VEGF-A is likely to be critical for both the induction of angiogenesis as well as the activation of infected spindle cells.
Although the mechanisms by which KSHV induces VEGF-A expression and secretion are still unclear, several potential pathways have been uncovered. Hypoxia-inducible factor (HIF)-1α is a transcription factor that has been shown to be important for upregulation of VEGF-A (Sodhi et al., 2000;Shin et al., 2008). HIF-1α is readily degraded under normal oxygen conditions. However, during hypoxia, HIF-1α is stabilized and can induce expression of genes through hypoxia-responsive elements, including VEGF-A. Interestingly, KSHV infection of endothelial cells induces the expression of HIF-1α during normoxia, which leads to increased HIF transcriptional activity (Carroll et al., 2006). Other studies have shown that KSHV encodes proteins that can increase the stability of HIF-1α. The KSHV latency-associated nuclear protein (LANA-1) can stabilize HIF-1α, both through degradation of its suppressors, von Hippel-Lindau protein and p53 (Cai et al., 2006), and through direct interaction between HIF-1α and LANA-1 (Cai et al., 2007). Additionally, the virally encoded interferon response factor (vIRF3) can, like LANA-1, interact with and stabilize HIF-1α, leading to increased VEGF-A expression (Shin et al., 2008). The KSHV viral G-protein coupled receptor (vGPCR) is able to increase the activity of HIF-1α as a transcription factor through activation of the MAPK and p38 signaling pathways and subsequent phosphorylation of HIF-1α (Sodhi et al., 2000). These pathways are depicted in Figure 1. While induction of HIF mRNA expression by KSHV infection has been shown, stabilization of HIF directly by KSHV infection of endothelial cells has yet to be clearly demonstrated.
Other host proteins have been shown to be involved in the induction of VEGF-A during KSHV infection of endothelial cells as well. For example, Emmprin is a membrane-associated glycoprotein that promotes matrix metalloproteinase expression. Its expression in KSHV-infected cells promotes cell invasiveness through activation of the PI3K/Akt and MAPK pathways (Qin et al., 2010;Dai et al., 2011). These pathways are also necessary for emmprin-induced VEGF-A expression. Further work is ongoing in multiple labs to determine the cellular pathways essential for KSHV induction of VEGF-A.
Several KSHV genes expressed during lytic replication have been implicated in regulation of VEGF-A expression (Table 1). In BCBL-1 cells (a pleural effusion lymphoma cell line), glycoprotein B (gB) and K8.1 are required for enhanced VEGF expression (Subramanian et al., 2010). Treatment of these cells with siRNA or neutralizing antibodies to gB or K8.1 significantly reduced VEGF-A production. vGPCR is a constitutively active signaling receptor that has been linked to a variety of cell survival and pro-angiogenic signaling pathways (Arvanitakis et al., 1997;Bais et al., 1998;Sodhi et al., 2000;Montaner et al., 2001;Shan et al., 2007). When injected into mice, NIH3T3 cells expressing vGPCR form highly vascularized tumors with some similarities to KS, and this may be due, at least in part, to increased VEGF-A secretion (Bais et al., 1998;Guo et al., 2003). vGPCR upregulates VEGF-A through activation of MAPK and p38, which, as described above, promotes HIF-1α activity (Sodhi et al., 2000). Transgenic mice expressing vGPCR also form highly vascularized tumors that are reminiscent of KS tumors. However, cell lines derived from these tumors expressed the lymphatic growth factor VEGF-C, rather than VEGF-A (Guo et al., 2003). Increased VEGF-A expression in cells expressing vGPCR is associated with constitutive activation of its receptor, VEGFR2/KDR, and downstream activation of PI3K/Akt, contributing to endothelial cell survival (Montaner et al., 2001;Bais et al., 2003). However, gB, K8.1, and vGPCR have only been detected in cells supporting lytic KSHV infection, whereas the bulk of the tumor cells are latently infected. The KSHV glycoprotein K1 also induces increased VEGF-A expression in endothelial cells and is capable of immortalizing primary endothelial cells (Wang et al., 2004b, 2006). While there is evidence that K1 is expressed at very low levels during latency, the majority of its expression occurs during lytic infection.
In summary, the lytic phase of KSHV infection might play a role in the paracrine induction of angiogenesis through increased secretion of VEGF-A into the tumor milieu. In addition to VEGF-A, KSHV-infected endothelial cells also express other angiogenic cytokines. Angiopoietin-1 and -2 are ligands for the receptor tyrosine kinase Tie2. Although less is known about the functions of angiopoietins and Tie2, their signaling is required for proper vascular development during embryogenesis (Dumont et al., 1994). Angiopoietin-1 is an agonist for the Tie2 receptor, promoting endothelial cell survival and stability. In contrast, Ang2 acts as an antagonist for Tie2, although its role is context-dependent. Ang2 is upregulated during pathologic angiogenesis and this expression is thought to destabilize endothelial cells, allowing them to be activated by other pro-angiogenic stimuli, such as VEGF; see Figure 1, circle 1 (Gale et al., 2002). Interestingly, Ang2 is expressed in KS lesions, and is upregulated in endothelial cells infected with KSHV (Brown et al., 2000;Wang et al., 2004a;Vart et al., 2007;Ye et al., 2007). This induction can be driven by the KSHV genes vGPCR and vIL-6, and can occur through a paracrine mechanism (Vart et al., 2007). Another study suggests that MAPK pathway activation of the transcription factors AP-1 and Ets-1 is involved. In addition to Ang2, cells transfected with the vGPCR gene expressed increased levels of angiopoietin-like 4, a member of the angiopoietin-like proteins that may play a role in vascular permeability and angiogenesis (Ma et al., 2010).
KSHV induces a number of other cytokines known to be involved in angiogenesis in other systems. These include interleukin 6 (IL-6), monocyte chemoattractant protein-1 (MCP-1), PAX-1, and prostaglandin E2 (Schwarz and Murphy, 2001;Polson et al., 2002;Xie et al., 2005;Caselli et al., 2007;Fonsato et al., 2008). Cyclooxygenase enzymes catalyze the rate limiting step in the conversion of arachidonic acid into prostaglandins. Prostaglandins signal through G-protein-coupled receptors to regulate a variety of functions, including metabolic, neuronal, and immune functions. Cyclooxygenase-2 (COX-2) expression is induced early during KSHV infection of endothelial cells and plays a role in the establishment of latency (Naranatt et al., 2004;Sharma-Walia et al., 2006). This expression is associated with increased secretion of prostaglandin E2, which promotes inflammatory cytokine expression, cell survival, and angiogenesis (Sharma-Walia et al., 2010). An additional cellular factor associated with angiogenesis, angiogenin, is induced in endothelial cells by the latent protein, LANA-1. Angiogenin was recently shown to aid in induction of angiogenesis by both VEGF and basic fibroblast growth factor (Sadagopan et al., 2009). KSHV-induced angiogenin was able to promote endothelial cell migration and capillary morphogenesis (Sadagopan et al., 2009). Since angiogenin is internalized by both infected and uninfected cells, the authors suggested angiogenin may work in both paracrine and autocrine fashions. In fact, all KSHV-induced cytokines that act on endothelial cells have the potential to promote angiogenesis-like phenotypes on the endothelial-derived spindle cells.
Regulation of angiogenesis involves coordinated expression of both pro- and anti-angiogenic factors. KSHV not only upregulates pro-angiogenic cytokines, it may also promote angiogenesis through repression of angiogenic inhibitors. The KSHV latent locus encodes 17 miRNAs which may play a role in downregulation of angiogenic gene expression (Cai et al., 2005;Pfeffer et al., 2005;Samols et al., 2005). Expression of 10 of these miRNAs in 293 cells altered the expression of 81 genes. Interestingly, one of these genes is the natural angiogenic inhibitor thrombospondin-1. Thrombospondin-1 plays multiple roles in the repression of angiogenesis; however, one of its main functions is activation of the anti-angiogenic growth factor transforming growth factor-β (TGF-β). This study found that thrombospondin-1 contains 34 putative miRNA binding sites, and can be downregulated by multiple KSHV miRNAs. Downregulation of thrombospondin-1 by KSHV miRNAs corresponds to decreased TGF-β signaling. Therefore, downregulation of anti-angiogenic factors may be an important way by which KSHV promotes continual neovascularization.
The KSHV genome itself encodes cytokine- and chemokine-like factors that activate endothelial cells and stimulate angiogenesis (Table 1). Among these factors are three genes with homology to the cellular chemokine macrophage inflammatory protein, the vMIPs I-III (Boshoff et al., 1997;Stine et al., 2000). In addition to having chemoattractant properties, these proteins promoted neovascularization in the chick chorio-allantoic membrane angiogenesis assay (Boshoff et al., 1997;Stine et al., 2000). KSHV also encodes a viral homolog of IL-6, a pro-inflammatory and pro-angiogenic cytokine. This cytokine, when expressed in NIH3T3 cells, promoted secretion of VEGF-A (Aoki et al., 1999). Furthermore, when these cells were injected into mice, they gave rise to tumors more quickly than control cells and the tumors were more highly vascularized (Aoki et al., 1999). Expression of the vMIPs has been predominantly shown to occur during lytic infection. The viral IL-6 (vIL-6) is mostly detected in endothelial cells and spindle cells undergoing lytic replication, but like K1 it has been shown to be expressed at very low levels in latently infected PEL cells and to be induced during latency only under specific conditions (Chatterjee et al., 2002).
In summary, conditioned media from KSHV-infected cells can induce angiogenic phenotypes in uninfected endothelial cells as indicated by the red gradient in Figure 1. KSHV infection of endothelial cells induces expression of a number of cytokines that are capable of inducing angiogenesis in a paracrine fashion. Paramount among these is VEGF-A, an angiogenic cytokine that is induced by KSHV infection of endothelial cells. While the predominant viral mechanism of VEGF-A induction is unknown, a number of lytic KSHV genes are sufficient to induce VEGF-A when overexpressed. KSHV-infected cells also produce a number of other angiogenic cytokines of cellular and viral origin that likely play a role in the induction of angiogenesis. Taken together, all of the cytokines and induced pathways likely create a milieu that is beneficial to the induction of new blood vessels and play a significant role in the high vascularization of KS tumors.
KSHV INDUCTION OF ANGIOGENIC PHENOTYPES WITHIN THE INFECTED CELL
The predominant tumor cell of KS lesions is the endothelial-derived spindle cell. Oncogenesis in endothelial cells and angiogenesis have many phenotypes in common. Therefore, KS tumor formation is likely to include increased angiogenic capacity of the spindle cells. There is growing evidence demonstrating the manipulation of host cell phenotypes by KSHV and the role of these changes in the promotion of angiogenesis-related phenotypes. These infected cell phenotypes include increased stability of tubules formed by macrovascular endothelial cells, induction of capillary morphogenesis in low growth factor conditions, and enhanced migration and invasion (Sadagopan et al., 2007, 2009;Wang and Damania, 2008;Couty et al., 2009;DiMaio et al., 2011). Additionally, KSHV induces the expression of VEGF receptors on the surface of infected endothelial cells, as discussed below.
Endothelial cells lining the vasculature form coordinated junctions to maintain barrier function. Breakdown of these junctions is necessary for initiation of angiogenesis, immune cell extravasation, and tumor cell metastasis. Interestingly, several studies have evaluated the adherens junctions of KSHV-infected endothelial cells and found them to be perturbed (Mansouri et al., 2008;Qian et al., 2008;Guilluy et al., 2011). This may result from the degradation of VE-cadherin (Mansouri et al., 2008;Qian et al., 2008) as well as disruption of VE-cadherin/beta-catenin signaling (Guilluy et al., 2011). Therefore, KSHV infection can directly initiate a key angiogenic step, the breakdown of cell-cell adherence. While the direct mechanism of KSHV-induced disruption of adherens junctions during latency is not known, there are a number of candidate KSHV genes that could be involved (Table 1). The KSHV-encoded ubiquitin ligase protein, K5, targets VE-cadherin for degradation (Mansouri et al., 2008). Overexpression of the KSHV vGPCR induces endothelial cell permeability and downregulation of cell surface VE-cadherin as well (Dwyer et al., 2011). K5 also targets other cellular proteins, including platelet/endothelial cell adhesion molecule-1 (PECAM-1, CD31), a transmembrane protein important for endothelial cell-cell communication, which could contribute to barrier dysfunction and increased permeability (Tomescu et al., 2003;Mansouri et al., 2006). K1, a primarily lytic protein that may also be expressed at low levels during latency, was also shown to initiate signaling similar to that required for disruption of cadherin signaling (Guilluy et al., 2011). While the exact viral mechanism of disruption of adherens junctions by KSHV infection is not known, the virus encodes multiple genes capable of altering cadherin function.
During angiogenesis, endothelial cells migrate from the preexisting vasculature toward the site of angiogenic stimulus. Endothelial cells exhibit enhanced migration and invasion following latent KSHV infection (DiMaio et al., 2011;Wu et al., 2011). This has been demonstrated by more rapid motility through transwell dishes. KSHV-infected cells also express increased levels of the matrix metalloproteinases MMP-1, -2, and -9. MMP proteins break down the extracellular matrix supporting stable vasculature to allow for invasion and migration of endothelial cells during angiogenesis (Figure 1, circle 2). Expression of MMP proteins induced by KSHV allows for increased invasion of both infected and uninfected endothelial cells (Wang et al., 2004b;Qian et al., 2007;Shan et al., 2007). While these processes constitute one component of angiogenesis, they are also known to play roles in oncogenesis (Gialeli et al., 2011), indicating that KSHV activation of angiogenic phenotypes in endothelial cells may lead to enhanced oncogenesis as well.
Endothelial cells grown in three-dimensional culture will migrate and organize into capillary-like structures. This activity is dependent, at least in part, on growth factors and cytokines present in the matrix or growth media. KSHV-infected cells are able to undergo capillary morphogenesis in low growth factor conditions to a greater extent than uninfected cells (Wang and Damania, 2008). This could be partially due to increased cytokine secretion from KSHV-infected cells. In fact, when endothelial cells are cultured in the presence of conditioned media from KSHV-infected BCBL-1 cells, their ability to organize into capillary-like structures is increased (Wang and Damania, 2008). However, the effect of BCBL-1 conditioned media was greater on KSHV-infected endothelial cells than on mock-infected cells, suggesting that infected cells are more receptive to angiogenic growth factors. In addition, this same study found that capillary-like structures formed by KSHV-infected endothelial cells are more persistent than those of mock-infected cells, indicative of the promotion of cell survival and continual angiogenesis by KSHV (Wang and Damania, 2008 and our unpublished results).
KSHV latent infection of endothelial cells also induces VEGF receptor expression, which may allow infected cells to respond more robustly to VEGFs. There are three main receptors for VEGFs. VEGF receptors 1 and 2 play roles in angiogenesis while 2 and 3 play a role in lymphangiogenesis (described below). While KSHV infection has not been reported to alter the expression levels of VEGFR2 (KDR), VEGFR1 expression is significantly increased following KSHV endothelial cell infection (Carroll et al., 2004). Drugs that inhibited HIF-1 activation and signaling also inhibited VEGFR1 upregulation (Carroll et al., 2006). VEGFR1 has been described as both a positive and negative regulator of angiogenesis depending on the context. VEGFR1 mouse knockouts have higher levels of angiogenesis (Fong et al., 1995). However, in cell culture models, VEGFR1 has been shown to potentiate angiogenesis (Cao, 2009). More studies will be needed to determine the importance of increased VEGFR1 expression in KSHV infection and KS tumor formation. Interestingly, expression of VEGFR3, the main receptor for VEGF-C and D is also significantly increased by KSHV infection (Carroll et al., 2004;Hong et al., 2004). VEGFR3, a receptor specific to lymphatic endothelium and critical for lymphangiogenesis will be discussed below. Importantly, endothelial tip cells at the leading edge of new vascular protrusions are the only main adult cell type known to express both the blood endothelial cell receptor, VEGFR1, and the lymphatic endothelial cell receptor, VEGFR3 (Tammela et al., 2008): KSHV infection of endothelial cells directly induces expression of both of these receptors.
The mechanisms by which KSHV induces angiogenic phenotypes in latently infected cells are largely unknown. A number of angiogenic phenotypes are likely to be a direct result of the cytokine milieu of the infected cells. As described above, KSHV-infected cells secrete both viral and host cytokines that are sufficient to induce angiogenic phenotypes. These paracrine factors surely play a role in the induction of tumor cells. However, it is also apparent that some of the angiogenic effects seen in latently infected cells are cell autonomous, independent of either paracrine or autocrine factors. As described above, conditioned media from PEL cells had stronger effects on tubule formation of KSHV-infected endothelial cells (Akula et al., 2005). We have also recently found that KSHV infection induces the pro-angiogenic integrin, integrin β3, during latent infection (DiMaio et al., 2011). Induction of integrin β3 leads to increased cell surface expression of the αVβ3 integrin heterodimer. We have further shown that latently infected endothelial cells become more adherent to the integrin ligands fibronectin and vitronectin, and are more migratory than mock-infected cells. These induced phenotypes require RGD-binding integrins, specifically integrin β3. Although both uninfected and infected cells organize in three-dimensional cultures in complete media, infected cells are more sensitive to inhibitors of integrin β3 and its downstream signaling molecules, such as Src kinase (DiMaio et al., 2011). This suggests that during latent KSHV infection there is a shift in endothelial cell signaling that results in a more angiogenic phenotype dependent on αVβ3 expression on the surface of the cell (Figure 1, center). Therefore, KSHV alteration of endothelial cell signaling pathways can dramatically affect how the cell responds to intra- and extracellular signals.
These alterations in angiogenic properties are likely to influence the growth and cell-cell interactions of infected cells, thereby contributing to KS tumor formation.
ANGIOGENESIS VS. LYMPHANGIOGENESIS
During development of the vascular system, a subset of endothelial cells in the cardinal vein begin to express markers of lymphatic differentiation, including the master regulatory gene, prox-1. These cells then bud from the cardinal vein, differentiate into lymphatic endothelial cells, and form the lymphatic vascular system (Wigle and Oliver, 1999). The mechanisms regulating lymphangiogenesis are in general less well understood than those regulating angiogenesis. Immunohistochemistry of KS tumors showed that spindle cells express markers of lymphatic endothelium, suggesting these cells may arise from primary infection of lymphatic endothelial cells, rather than blood endothelial cells (Jussila et al., 1998;Skobe et al., 1999;Weninger et al., 1999;Pyakurel et al., 2006). An alternative hypothesis is that KSHV infection of blood endothelial cells drives differentiation toward a more lymphatic phenotype. This idea is supported by evidence that KSHV infection of blood endothelial cells promotes expression of lymphatic-specific genes, including prox-1, VEGFR3, podoplanin, and LYVE-1, effectively driving the reprogramming of blood endothelial cells to become lymphatic endothelium (Carroll et al., 2004;Hong et al., 2004;Wang et al., 2004a). Microarray analyses comparing KSHV-infected blood endothelial cells to blood and lymphatic endothelial cells indicate that KSHV-infected blood endothelial cells have gene expression profiles that align more closely with lymphatic endothelial cells than with blood endothelial cells (Carroll et al., 2004;Hong et al., 2004;Wang et al., 2004a).
The mechanism by which KSHV induces lymphatic differentiation is not completely clear. The KSHV latent gene Kaposin B can directly promote the stability of Prox-1 mRNA (Yoo et al., 2010), leading to increased expression of Prox-1. However, this effect was not sufficient to induce Prox-1 expression in blood endothelial cells. We recently found that induction of blood to lymphatic endothelial cell reprogramming requires signaling through the cellular receptor gp130. Endothelial cells that are latently infected with KSHV have increased expression and signaling of gp130 (Morris et al., 2008). This leads to activation of the JAK/STAT3 and PI3K/AKT pathways, which in turn drive expression of lymphatic-specific genes, beginning with Prox-1. Inhibition of this pathway with siRNAs that target gp130 or AKT, or with pharmacological inhibitors that block PI3 kinase or Jak2/STAT3 signaling, is sufficient to block lymphatic differentiation (see Figure 1, center). The cytokine responsible for activating gp130 is currently not known. KSHV vIL-6 is sufficient to induce gp130 activation, and we recently found that vIL-6 is sufficient to induce lymphatic differentiation (Morris et al., 2012). However, KSHV lacking vIL-6 is still able to cause blood to lymphatic endothelial cell differentiation, indicating that KSHV has evolved multiple strategies to activate gp130 and induce blood to lymphatic endothelial cell differentiation (Morris et al., 2008).
Induction of lymphatic differentiation by KSHV is only part of the story, however. Despite the expression of lymphatic-specific genes, blood endothelial cells infected with KSHV retain expression of some blood-specific markers (Wang et al., 2004a). Additionally, infection of lymphatic endothelial cells with KSHV induces expression of blood-specific markers (Wang et al., 2004a). KSHV miRNAs were found to target the transcription factor MAF (Hansen et al., 2010). Downregulation of MAF in lymphatic endothelial cells by siRNA restored expression of blood endothelial markers, such as VEGFR1 and CXCR4. Thus, infection of blood or lymphatic endothelial cells by KSHV alters host gene expression to an intermediate state between the two cell types. As described above, this intermediate phenotype with both VEGFR1 and R3 expression is only present in the leading tip of endothelial cells involved in active neo-angiogenesis. In KS lesions, only LANA+ cells expressed Prox-1, indicating that this effect requires KSHV gene expression (Hong et al., 2004). This suggests that differentiation toward lymphatic endothelial cells may specifically allow the spindle cells to respond to lymphangiogenic growth factors. In fact, KSHV infection of endothelial cells induces both VEGF-A and VEGF-C (Sivakumar et al., 2008). VEGF-C is a key regulator of lymphangiogenesis. Therefore, induction of both VEGFR1 and R3 allows KSHV-infected cells to respond to key angiogenic and lymphangiogenic factors in the tumor environment. The direct role of KSHV reprogramming of blood endothelial cells toward a lymphatic phenotype in the induction of angiogenic and lymphangiogenic phenotypes is still under investigation.
SUMMARY
The highly vascular nature of KS tumors and the large amounts of neo-angiogenesis in the tumor led to the proposal that the etiologic agent of the tumor might directly induce angiogenesis. In accordance with this hypothesis, KSHV infection of endothelial cells, the main tumor cell type, induces host cell cytokines involved in angiogenesis. In particular, KSHV induces the expression of VEGF-A and -C and other host cytokines, and also encodes angiogenic cytokines in its own genome (Boshoff et al., 1997; Aoki et al., 1999; Brown et al., 2000; Stine et al., 2000; Schwarz and Murphy, 2001; Masood et al., 2002; Polson et al., 2002; Wang et al., 2004b; Xie et al., 2005; Caselli et al., 2007; Vart et al., 2007; Ye et al., 2007; Fonsato et al., 2008; Sivakumar et al., 2008; Wang and Damania, 2008; Sadagopan et al., 2009; Ma et al., 2010; Sharma-Walia et al., 2010). Therefore, KSHV may induce seeding of new blood vessels to the tumor milieu. Additionally, because the tumor cell is endothelial in nature, induction of angiogenic cytokines may also activate the tumor cells and aid in the growth of KS tumors. KSHV also induces angiogenic phenotypes directly in latently infected cells in a cell-autonomous fashion, indicating that angiogenic activation of the infected endothelial cell may directly play a role in tumor formation.
While KSHV activates many growth-signaling pathways and, in general, the induction of angiogenic phenotypes supports endothelial cell proliferation, in most cultures KSHV does not induce increases in endothelial cell proliferation. It is possible that the cell culture milieu simply does not match the tumor milieu. KS spindle cells are not fully transformed ex vivo and, except in very rare cases, have a limited life span, indicating that factors in the tumor environment that come from other cell types could be necessary to maintain KS spindle cell growth. The increase in growth could also be masked by the fact that endothelial cells in culture are rapidly dividing and therefore do not need additional growth signals. Along those lines, mature endothelium in vivo is relatively quiescent. That being said, the endothelial cell transforming potential of KSHV in culture can be unmasked given specific conditions. Dermal microvascular endothelial cells that were immortalized with the E6 and E7 genes from papillomavirus are readily transformed by KSHV, including increased proliferation (Moses et al., 1999). Therefore, KSHV activation of endothelial cells can induce a proliferative advantage in the correct genetic environment. However, it is unknown if viral induction of angiogenic phenotypes is necessary for the growth of the E6/E7-immortalized endothelial cells.
In general, viruses do not evolve to cause cancer, as it is likely a dead end for transmission. KSHV likely evolved to activate the cell where it is maintained to ensure survival and spread of the virus. A major side effect of this activation may be providing an ideal environment for angiogenesis leading to increased vascularization of small tumor growths and expansion of KS tumors. While the study of viral induction of angiogenesis can lead to a better understanding of how KSHV causes endothelial cell tumors, information gleaned from the study of viral mechanisms of induction of angiogenesis and lymphangiogenesis will also lead to a better understanding of endothelial cell activation and tumor angiogenesis in general. Thus, the study of KSHV infection of endothelial cells provides a controlled system for analyzing the regulation and induction of angiogenic phenotypes that will likely shed light on the field of tumor angiogenesis.
Children and youth perceive smoking messages in an unbranded advertisement from a NIKE marketing campaign: a cluster randomised controlled trial
Background: How youth perceive marketing messages in sports is poorly understood. We evaluated whether youth perceive that the imagery of a specific sports marketing advertisement contained smoking-related messages.
Methods: Twenty grade 7 to 11 classes (397 students) from two high schools in Montréal, Canada were recruited to participate in a cluster randomised single-blind controlled trial. Classes were randomly allocated to either a NIKE advertisement containing the phrase 'LIGHT IT UP' (n = 205) or to a neutral advertisement with smoking imagery reduced and the phrase replaced by 'GO FOR IT' (n = 192). The NIKE logo was removed from both advertisements. Students responded in class to a questionnaire asking open-ended questions about their perception of the messages in the ad. Reports relating to the appearance and text of the ad, and the product being promoted, were evaluated.
Results: Relative to the neutral ad, more students reported that the phrase 'LIGHT IT UP' was smoking-related (37.6% vs. 0.5%) and that other parts of the ad resembled smoking-related products (50.7% vs. 10.4%). The relative risk of students reporting that the NIKE ad promoted cigarettes was 4.41 (95% confidence interval: 2.64-7.36; P < 0.001).
Conclusions: The unbranded imagery of an advertisement in a specific campaign aimed at promoting NIKE hockey products appears to have contained smoking-related messages. This particular marketing campaign may have promoted smoking. This suggests that the regulation of marketing to youth may need to be more tightly controlled.
Background
Large corporations use increasingly sophisticated marketing strategies to promote products to children, including techniques that rely on imagery relating to lifestyle or social norms. Such forms of marketing are increasingly acknowledged as important determinants of child health that need to be regulated. Several countries have implemented mechanisms to regulate marketing to children, especially with respect to the promotion of tobacco [1]. One of these countries is Canada, which has led the way in regulating tobacco marketing [2], particularly because the tobacco industry has used such marketing techniques so effectively with children [3][4][5][6][7]. Evidence also shows that targeting of children by the food industry may be fuelling the obesity epidemic [8,9]. Scant research has, however, considered whether marketing by other industries may influence the health of children. Large corporations that market heavily may be popular with children and can shape their thoughts and behaviours, possibly even when marketing laws are present. There is a need to evaluate how children perceive marketing campaigns as a first step towards understanding how to improve marketing policies in general.
The goal of the current study was to assess what youth perceive in the imagery of advertisements used in a specific marketing campaign by NIKE, a company that is popular with children and youth. We evaluated NIKE's LIGHT IT UP campaign, run from 2003-2005 in Canada to "inspire young hockey players" [10]. This particular campaign was selected because it was meant to promote hockey products yet appeared to include messages that could have inadvertently promoted smoking. This is of concern because it has been shown that the tobacco industry uses sports to promote its products [6,[11][12][13].
Study design and context
We used a randomised single-blind controlled trial design. Twenty grade 7 to 11 classrooms from two schools were allocated to receive either the exposure advertisement (10 classes, 205 students) or a neutral version of the ad (10 classes, 192 students). We consulted youth tobacco control experts who recommended an assessment without the swoosh logo to determine how youth perceived the internal imagery of the ads independent of the brand name. Hence, the NIKE logo was removed from both the exposure and control advertisements. Approval for this study was obtained from the research ethics committee of the University of Montréal Hospital Centre.
Participants
We aimed for a sample of 538 students (269 for each condition) assuming a design effect of 1.11, with 5% of the exposed students and none of the control students perceiving tobacco-related messages (77% participation rate, two-sided α = 0.05, β = 0.10, intraclass correlation = 0.005 for students within classrooms). In February 2009, 522 students from one junior and one senior high school in the metropolitan area of Montréal, Canada were invited to participate. Voluntary signed consent was obtained from 401 students and their parents three weeks prior to the test date. Three students were absent on the test date and one did not follow the protocol, leaving 397 participants (71.9% participation rate).
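The cluster-adjusted sample size above follows from the standard design-effect inflation, DEFF = 1 + (m̄ − 1)ρ, where m̄ is the average cluster (classroom) size and ρ the intraclass correlation. A minimal sketch of this arithmetic (the implied mean classroom size of 23 and the unadjusted per-arm size of 242 are our back-calculations from the stated DEFF of 1.11 and target of 269 per arm, not figures reported in the paper):

```python
import math

def design_effect(mean_cluster_size: float, icc: float) -> float:
    """Variance inflation factor for cluster sampling: DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (mean_cluster_size - 1.0) * icc

# Reported design parameters: ICC (rho) = 0.005 and DEFF = 1.11,
# which together imply an assumed mean classroom size of about 23 students.
deff = design_effect(23, 0.005)
print(round(deff, 2))  # 1.11

# Inflating a back-calculated unadjusted per-arm size of 242 reproduces
# the stated cluster-adjusted target of 269 students per arm.
n_per_arm = math.ceil(242 * deff)
print(n_per_arm)  # 269
```

The same function shows why ignoring clustering understates the required sample: with ρ = 0 (no within-classroom correlation) the design effect collapses to 1.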
Procedures
The original campaign was Internet-based. Messages were promoted on NIKE's homepage in a web-based multimedia presentation providing links to photos and videos of children posing next to LIGHT IT UP ads, and to screensavers/wallpapers. Children were recruited to the web-site by NIKE representatives in arenas, tournaments, skating rinks, hockey practices and retail locations in Toronto, Montreal, Calgary and Vancouver. Ads contained a hockey net referred to as the "lonely talking net", and were available for download for non-commercial purposes.
We used an image containing the message FOLLOW ME and the slogan LIGHT IT UP as the exposure, without the logo (Figure 1a, © of the original image: NIKE, Inc). This image was taken from the multimedia presentation and resembled most of the other ads on the web-site, including those meant to be downloaded and used as computer background wallpaper by youth. In consultation with colleagues, we identified four parts of the image potentially containing tobacco-related imagery, including the 1) slogan, 2) ash-like appearance of the net center pole, 3) smoky appearance of the words in the center, and 4) unusual rectangular marks around the border that resembled cigarettes. We generated a neutral comparison ad as the control using Windows Paint software. In the control, we reduced possible tobacco-related content as follows: 1) the LIGHT IT UP slogan was changed to GO FOR IT; 2) the colour of the net center pole was changed to a uniform grey taken from the bottom of the net; 3) the FOLLOW ME was blackened; and 4) the rectangular marks in the outermost edges of the ad were removed ( Figure 1b). For simplicity, the unbranded NIKE image is hereafter referred to as the exposure advertisement, and the control as the neutral advertisement.
We designed a 3-part questionnaire to determine the types of messages perceived. Part 1 contained open-ended questions asking for impressions of the ad, thoughts on the slogan, thoughts on the ad's appearance, the product or service being promoted, and the type of company they thought had produced the ad (Additional file 1). No mention of tobacco was made in any of these questions. Parts 2 and 3 contained multiple-choice questions about tobacco and baseline covariates potentially related to perception (age, sex, grade, socio-economic status, interest in hockey, and smoking). Seventeen grade 7 to 11 youth tested the questionnaire for question comprehension [14]. We assessed socio-economic status with the Family Affluence Scale [15], and smoking status using Pierce's method (non-susceptible never smoker, susceptible never smoker, experimenter, established smoker) [5]. Questionnaires were administered in classrooms under the supervision of a teacher and a research assistant. Students responded to each part in sequence without discussion with peers, sealing each part in separate envelopes before moving on to the next.
The research assistant randomly assigned classes to either the exposure ad or neutral ad within grade levels. Students were told different ads were being tested but were blinded as to which version they had been assigned and to our interest in the smoking question. As well, none of the teachers or school administrators was aware of the tobacco-related hypothesis.
Outcome measures
Three primary outcomes, expressed as dichotomous variables, were defined as any report of pro-tobacco or sports-related messages in the 1) slogan, 2) ad's appearance, or 3) product being promoted. Reports were considered smoking-related when they included the specific terms "cigarette", "smoking", or "smoke". Reports were considered sports-related if they made any direct or remotely indirect reference to physical activity, physical fitness, health, skating, hockey or hockey product, or any other sport or game.
A trained research assistant extracted all tobacco- or sports-related messages from the first part of the questionnaire. Because the slogan appeared in the questionnaire, the research assistant could not be blinded; a second assistant therefore re-coded a 10% sample of questionnaires to determine whether similar results would be obtained. Percentage agreement between coders ranged from 96% to 100% for each outcome.
Figure 1. Images of the ads shown to students. Figure 1A: exposure ad. Figure 1B: neutral (control) ad. Arrows point to digitally modified areas (1 - central pole was coloured grey using a shade from the lower part of the pole; 2 - FOLLOW ME was blackened; 3 - rectangular marks on outermost edges were removed; 4 - LIGHT IT UP was replaced by GO FOR IT). © of the original image: NIKE, Inc.
Statistical analysis
Generalised estimating equations with a logit link were used to evaluate the association between the ad and reports of tobacco-or sports-related messages, accounting for classroom clustering. Finner P-values were computed with WinPEPI software to adjust for the multiple outcomes tested [16]. Relative risks (RR) with 95% confidence intervals (CI) were estimated. Multivariate models adjusting for sex, grade, socio-economic status, smoking status, test time and a test time-ad version interaction term were also run. Analyses were performed using SAS 9.1 software (SAS Institute Inc., Cary, NC, 2002).
Results
As shown in Table 1, the characteristics of students in both groups were highly similar although 11% more students shown the exposure ad were tested in the morning. The mean age of both groups was 14 years, with females, non-smokers, and ice hockey fans more frequently represented.
Tobacco versus sports content of ads
One third (37.6%, 77/205) of students viewing the LIGHT IT UP version thought the slogan referred to smoking compared with 0.5% (1/192) who viewed the GO FOR IT version (Table 2). Many more students also reported that the exposure ad relative to the neutral ad contained images of smoking-related products (50.7%, 104/205 vs. 10.4%, 20/192; RR 4.87, 95% CI 2.86-8.29). Examples of smoking-related reports were that the centre pole resembled a cigarette, that smoke covered the central text of the ad, and that cigarettes were present at the ad's edges. Students also reported that the product being promoted by the exposure ad relative to the neutral ad was cigarettes (39.0%, 80/205 vs. 8.9%, 17/192; RR 4.41, 95% CI 2.64-7.36).
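The headline relative risk can be reproduced directly from the raw counts. The sketch below computes a crude RR with a log-based 95% CI; note that the interval reported in the paper (2.64-7.36) comes from GEE models that additionally account for classroom clustering, so it differs slightly from this unadjusted calculation:

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Crude relative risk of exposed (a/n1) vs. control (b/n2),
    with a log-based confidence interval (no clustering adjustment)."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# "Ad promotes cigarettes": 80/205 exposure-ad students vs. 17/192 neutral-ad students.
rr, lo, hi = relative_risk(80, 205, 17, 192)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 4.41 (95% CI 2.71-7.16)
```

The crude point estimate matches the published RR of 4.41; the slightly narrower crude interval illustrates why the authors' GEE approach, which widens the CI for within-classroom correlation, was the appropriate choice.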
A lower than expected number of students reported the ads were sports-related (Table 3). For both ads, only one-third of students reported the slogan referred to sports and only half reported the ad had a sports-related appearance. However, students were more likely to report that the neutral ad promoted sports relative to the exposure ad (65.8%, 125/192 vs. 51.0%, 103/205; RR 0.78, 95% CI 0.68-0.89).
Accounting for age, sex, grade, socio-economic status, smoking status, being a hockey fan, and test time did not influence the relationship between ad version and perception of tobacco or sports messages. Inclusion of a test time*ad version interaction term in models did not change the results, nor did models run after excluding afternoon students. Hence, there is no evidence that senior students participating during afternoon test classes were biased by morning class students to whom they might have spoken (this issue did not apply to students from the junior high school who were all tested during one class period). Only one participant (0.5%) reported having seen similar ads in the past, and none reported this was a NIKE ad.
Written comments of students shown the exposure ad are provided in Table 4. Students from all grades reported that the exposure ad made them think "about cigarettes", that LIGHT IT UP meant "light up your cigarette", that the centre pole "really looks like a cigarette", that the ad was "obviously" promoting cigarettes and was about "smoking, disguised as a hockey ad", and even that an "illegal company or a cigarette company" had made the ad. A minority of students who reported tobacco messages for the neutral ad mainly said the centre pole resembled a cigarette (comments not shown).
Discussion
This randomised trial shows clearly that an ad image used by NIKE to associate its products with scoring in hockey was thought to promote smoking by one third of adolescents who saw it without the brand name. Though these results pertain to only one campaign, they nonetheless illustrate the potential for messages such as LIGHT IT UP to unintentionally promote tobacco to young people. This finding is important not only because tobacco is a leading cause of morbidity and mortality worldwide, but because smoking habits are formed during childhood and tobacco promotion is partly responsible [17]. Over a third of students (38%) reported the slogan LIGHT IT UP was related to smoking. We expected that the youth in this study would associate these ads mainly with sports, as the ads were promoted by a well-known sports company that is expected to have carefully developed and tested its ads before going to market. Perception of sports-related messages was, however, no more frequent than perception of tobacco-related messages. A nearly similar ad that we designed to be equally vague but more tobacco-neutral was significantly less likely to lead to reports of tobacco messages. Although the phrase LIGHT IT UP may have been intended to refer to lighting the scoreboard with a goal or to associate NIKE products with winning, even this is unclear: associations did not change with adjustment for being a hockey fan (hockey fans would be expected to understand the hidden meaning of LIGHT IT UP). Furthermore, the French version of the phrase (BRULE LA GLACE, or burn the ice) in no way alludes to scoring. Such messages may therefore be ambiguous, and from a young person's viewpoint the interpretation may not be benign.
An important issue is that we removed the NIKE swoosh mark logo from the ads to determine whether students could correctly identify the category of product being promoted (i.e., sports) as it was not clear they would perceive tobacco messages in the first place, and to isolate the effects of the slogan and pictorial aspects of the ads. It is possible that fewer students might have reported tobacco-related messages with the NIKE brand logo kept in the ads, and future research using the same ads with the swoosh mark retained would be informative. Research on cigarette ads suggests that youth focus on the product being promoted rather than on the brand name, and that brand names may contribute little to the understanding of what product is being promoted [18]. Thus, the removal of the NIKE check mark in our study is not likely to fully account for so many students seeing tobacco in the ad. Furthermore, our procedure did not entirely differ from some of NIKE's own marketing behaviour, as the NIKE check mark was not visible in some of the ads shown to youth. It is not certain that inclusion of the logo in this study would have correctly represented the spectrum of ads shown to youth.
Without the logo, both the exposure ad and the neutral ad should have at least induced sports-related thoughts, given that an important goal of advertisements is to lead consumers to the correct product category. Students, however, reported that the neutral ad promoted sports more so than did the exposure ad. The small proportion of students who reported that the centre pole of the neutral ad resembled a cigarette is not unexpected because the grey shade is from the original ad.
Interestingly, a randomised study resembling ours showed that the text and colours of ads can change the perception of tobacco messages, but this research was done with adults and the ads were intentionally related to tobacco [19]. Our study shows that such factors may be important in ads targeting youth, and even important in ads not intentionally promoting tobacco. Some researchers have critiqued studies that evaluate the influence of the media by claiming that youth are less cognitively complex than adults, and hence less likely to pay attention to media messages [20]. In fact, the size of our sample was calculated assuming that few students would perceive tobacco imagery. Thus, our findings not only show that we should not underestimate the cognitive abilities of youth, but they also call into question theories that minimize the influence of advertisements by arguing that youth pay little attention to the media around them [20]. A related issue is how younger children would have perceived these ads. Although NIKE stated they surveyed 15 to 25 year olds before the campaign [10], according to images on the web-site elementary school aged children in particular were targeted. This study was designed for secondary school students, and further research is necessary to determine whether younger children would also perceive tobacco-related messages.
Our conclusions are limited by our inability to evaluate elements of the marketing campaign that were hidden to us. We could not enter the password-protected parts of the web-site that were only accessible to children who had registered at a sponsored promotional event, or who belonged to a hockey team sponsored by NIKE. We were not successful in attending such events and wrote to NIKE requesting a password to access the restricted sites but were refused. Thus, we could not determine the full spectrum of ads that were shown to children. The ads that we did find alluded to sexuality ("Guys think about scoring every six seconds", "Slip it between the legs"), risk-taking ("Some lines are meant to be crossed"), peer acceptance ("You are not alone", "Do you want in?"), and independence ("Are you ready to break free?"). These themes have successfully been used to market cigarettes to youth [6,11,21], with the tobacco industry finding innovative ways of encouraging repeated viewing of such ads (e.g., through contests) [6,22]. In the LIGHT IT UP campaign, NIKE ran a contest in which youth were required to repeatedly watch photos and videos of children posing next to the LIGHT IT UP messages [10]. We did not evaluate how these added factors could have influenced the perception of tobacco-related messages.

Table 4. Written comments of students shown the exposure ad, by grade.*

First impression of image
"It makes me think of a cigarette commercial that is trying to influence young teens/adults to smoke." (grade 7)
"I saw the cigarette as the pole and I knew it meant smoking." (grade 7)
"It's about smoking and that smoking should be your goal." (grade 8)
"This ad first reminds me of cigarettes and smoking." (grade 10)

Meaning of LIGHT IT UP
"I've heard that expression for lighting a cigarette." (grade 7)
"They're trying to use hockey as an image of fun, then they use a cigarette in the hockey net, then they add LIGHT IT UP. Therefore, they want you to start smoking." (grade 8)
"To light up your cigarette." (grade 9)
"It can either mean to light up a cigarette or drug and then you'll become successful or it can mean give the game all you got." (grade 10)
"I think about opening a lighter." (grade 11)

Meaning of FOLLOW ME
"To smoke because your friends are doing it and if they offer you, say yes." (grade 8)
"FOLLOW ME would be 'drawing in' teens to smoke. The sign to me encourages young teens to start and develop a smoking habit, portraying it as a good thing." (grade 10)
"Follow the person smoking." (grade 11)

Appearance of FOLLOW ME
"There is smoke within the black lettering." (grade 7)
"It looks like smoke." (grade 9)

Appearance of centre pole of hockey net
"It looks like a cigarette only without the orange thing at the butt of the cigarette." (grade 8)
"The centre pole really looks like a cigarette or drugs." (grade 11)

Appearance of outermost edges of image
"I see little cigarettes that look like they're already lit up." (grade 7)
"I can see cigarettes on all of the sides in a faded looking way." (Student also drew a picture) (grade 9)
"Cigarette filters." (grade 11)

Product being promoted
"It is promoting for sure 'cigarettes'." (grade 7)
"People are advertising smoking." (grade 8)
"Obviously cigarettes." (grade 9)
"Smoking, buying cigarettes, getting addicted so the company can continue making money." (grade 9)
"Promoting the use of cigarettes or other things you can smoke." (grade 10)
"Cigarettes through sports. A lot of people watch hockey and even though we're not aware of it our brain picks up on the message it's sending." (grade 11)
"Easy, it's smoking, disguised as a hockey ad. You know, get all the cool athletes to smoke so it looks cool to the younger kids." (grade 11)

Type of company that made the ad
"Export (I think that's a cigarette brand)." (grade 7)
"A cigarette company, a drug company." (grade 8)
"An illegal company or a cigarette company." (grade 9)
"Any cigarette company, people that benefit from cigarette production." (grade 10)
"Du Maurier, Peter Jackson, cigarette companies." (grade 11)

* Quotes are mutually exclusive (i.e., no student appears more than once in the list).
It is now established that tobacco advertising leads to smoking in youth [3][4][5]. Incidental pro-tobacco imagery on the Internet, in film and magazines is also increasingly linked to youth smoking [23][24][25][26]. Such messages shape social values about smoking and create environments where cigarettes are considered normal [24,25], and sports marketing campaigns not related to tobacco can potentially contribute to this process especially when the sport in question is popular [27]. NIKE also relied on hockey sponsorship to recruit children to the LIGHT IT UP campaign, and we do not know how their sponsorship strategies could have contributed. It is interesting to note that tobacco sponsorship per se was banned in 2003 by the Canadian Tobacco Act [2], just before the LIGHT IT UP campaign was run.
These issues are important because the tobacco industry has demonstrated that the combination of sponsorship, sports and tobacco works to promote cigarette smoking [6,[11][12][13]28], and celebrities or athletes [10] may be enhancing factors [25,29]. NIKE also donated hockey equipment from the LIGHT IT UP campaign to disadvantaged children, and disadvantaged children are already at greater risk of smoking [30]. More research is needed on campaigns such as LIGHT IT UP to determine what kind of influence the factors outlined above could have on inadvertent tobacco promotion in different settings.
Conclusions
We found that children and youth perceived smoking messages in a randomised trial testing unbranded ad imagery used in a NIKE marketing campaign run in four large Canadian metropolitan centers. Though these findings cannot be generalized to other marketing campaigns by NIKE or other sports companies, they nonetheless suggest that elements of a NIKE advertisement may have inadvertently promoted smoking among at least a small portion of the youths who were exposed, and that Canadian regulations for marketing to children and youth [2,31] may be inadequate when marketing relies on imagery with double meanings. Increasingly complex and hard-to-regulate marketing environments are emerging [8,9,32], and marketing regulations must keep pace with changing environments [33]. In particular, regulations for marketing on the Internet must be tightened. Large corporations must be accountable for their actions, including any inadvertent harmful effects of promotional efforts. Marketing must be transparent and easily accessible to adults. Until marketing regulations are improved and properly enforced, the public health and practising pediatric community needs to be vigilant regarding all marketing to children and youth.
Additional material
Additional file 1: Open-ended questions asked to students in Part 1 of the questionnaire. List of open-ended questions asked to students in Part 1 of the questionnaire.
A giant radio bridge connecting two clusters in Abell 1758
Collisions between galaxy clusters dissipate enormous amounts of energy in the intra-cluster medium (ICM) through turbulence and shocks. In the process, Mpc-scale diffuse synchrotron emission in the form of radio halos and relics can form. However, little is known about the very early phase of the collision. We used deep radio observations from 53 MHz to 1.5 GHz to study the pre-merging galaxy clusters A1758N and A1758S, which are ∼2 Mpc apart. We confirm the presence of a giant bridge of radio emission connecting the two systems that was reported only tentatively in our earlier work. This is the second large-scale radio bridge observed to date in a cluster pair. The bridge is clearly visible in the LOFAR image at 144 MHz and tentatively detected at 53 MHz. Its mean radio emissivity is more than one order of magnitude lower than that of the radio halos in A1758N and A1758S. Interestingly, the radio and X-ray emissions of the bridge are correlated. Our results indicate that non-thermal phenomena in the ICM can be generated also in the region of compressed gas in-between infalling systems.
INTRODUCTION
In the past two decades, the presence of diffuse and extended synchrotron sources with steep spectra (α > 1, with S_ν ∝ ν^−α) in merging galaxy clusters has been confirmed by numerous radio observations (e.g. van Weeren et al. 2019, for a recent review). Radio halos (in cluster centers) and radio relics (in cluster outskirts) are among the largest (∼Mpc-scale) and most common sources associated with the intra-cluster medium (ICM). Their origin is likely related to the process of cluster formation, where part of the energy dissipated into the ICM by turbulence and shocks can be channeled into non-thermal components, namely relativistic particles and magnetic fields (e.g. Brunetti & Jones 2014, for a review).
Highly sensitive observations at low frequencies with the LOw Frequency ARray (LOFAR) are providing many new insights into the study of non-thermal phenomena in galaxy clusters. Recently, Govoni et al. (2019) observed a ∼3 Mpc radio bridge connecting the pre-merging system Abell 399-401 (z = 0.07), showing that detectable non-thermal emission can be generated on scales larger than that of clusters. Abell 1758 (hereafter A1758, see Fig. 1) is a system located at z = 0.279 composed of two massive galaxy clusters separated by a projected distance of ∼2 Mpc: A1758N (in the north, the most massive one) and A1758S (in the south). X-ray observations suggest that the two clusters are gravitationally bound but have not interacted yet, that is, they are in a pre-merging phase (David & Kempner 2004; Botteon et al. 2018; Schellenberger et al. 2019). In addition, complex cluster dynamics and multiple sub-structures are observed both in A1758N and A1758S, indicating that each of the two clusters is undergoing its own merger.

Figure 1 (caption): Composite images of A1758 obtained from the superposition of an optical SDSS g,r,i mosaic with Chandra (blue) and with a LOFAR image at 144 MHz with a resolution of 7.6 × 5.4 arcsec (red) and an rms noise of 60 µJy beam^−1. Yellow and red regions indicate the mask used to subtract discrete sources and the regions adopted to measure the flux densities in the low-resolution images, respectively.

In Botteon et al. (2018), we used 144 MHz LOFAR observations to study the well-known radio halo in A1758N and also discovered a new radio halo and a candidate radio relic in A1758S. More importantly, at low resolution we found a hint (2σ) of a bridge of radio emission connecting the two clusters, which required further study with more sensitive observations. In this Letter, we report the results of an extensive campaign of deep, multi-frequency radio observations of the radio bridge connecting the galaxy clusters in A1758.
Here, we adopt a ΛCDM cosmology with ΩΛ = 0.7, Ωm = 0.3 and H0 = 70 km s −1 Mpc −1 , in which 1 arcsec corresponds to 4.233 kpc at the cluster redshift (z = 0.279) and the luminosity distance is DL = 1428 Mpc.
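As a cross-check of the quoted scales, the flat-ΛCDM distances can be reproduced with a few lines of pure Python (a minimal sketch; the numerical integration scheme and variable names are ours, not part of the Letter):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def distances(z, H0=70.0, Om=0.3, OL=0.7, n=10000):
    """Return (kpc per arcsec, luminosity distance in Mpc) for a flat LambdaCDM model."""
    E = lambda zz: math.sqrt(Om * (1 + zz) ** 3 + OL)
    dz = z / n
    # Comoving distance D_C = (c/H0) * int_0^z dz'/E(z'), trapezoid rule
    integral = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) / 2 * dz for i in range(n))
    d_c = C_KM_S / H0 * integral          # comoving distance [Mpc]
    d_l = (1 + z) * d_c                   # luminosity distance [Mpc]
    d_a = d_c / (1 + z)                   # angular diameter distance [Mpc]
    kpc_per_arcsec = d_a * 1e3 / 206265.0  # 1 rad = 206265 arcsec
    return kpc_per_arcsec, d_l

scale, dl = distances(0.279)
print(f"{scale:.3f} kpc/arcsec, D_L = {dl:.0f} Mpc")  # ~4.233 kpc/arcsec, ~1428 Mpc
```

This recovers the 4.233 kpc arcsec^−1 scale and D_L = 1428 Mpc adopted throughout.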
OBSERVATIONS AND DATA REDUCTION
We observed A1758 with the LOFAR Low/High Band Antenna (LBA/HBA) arrays, the upgraded Giant Metrewave Radio Telescope (GMRT), and the Jansky Very Large Array (JVLA). Details of our observations are summarized in Tab. 1. The data reduction procedures for each dataset are briefly described below. For all calibrated datasets, the final imaging has been performed using WSClean v2.8 (Offringa et al. 2014) with multi-scale multi-frequency deconvolution (Offringa & Smirnov 2017).
LOFAR
In Botteon et al. (2018), we analyzed the pointing closest to A1758 (offset by ∼1.1°) coming from the LOFAR Two-metre Sky Survey (LoTSS; Shimwell et al. 2019). For this work, we exploit follow-up observations with LOFAR LBA (39 − 78 MHz) and HBA (120 − 168 MHz) in combination with four LoTSS pointings that lay within 3° of A1758. For LBA, the target and calibrator were jointly observed for 8 hr using the multi-beam capability of LOFAR, while HBA observations followed the scheme of LoTSS, namely 8 hr runs book-ended by 10 min scans on flux density calibrators. Each observation was analyzed individually with the pipelines developed by the LOFAR Surveys Key Science Project team (prefactor, de Gasperin et al. 2019; killMS, Tasse 2014; Smirnov & Tasse 2015; DDFacet, Tasse et al. 2018) to correct for direction-independent effects and to perform a first round of direction-dependent calibration of the entire LOFAR field-of-view before combination. The image quality towards A1758 was improved following the scheme that has been adopted in recent LOFAR HBA works (e.g., Botteon et al. 2019, 2020; van Weeren et al., in prep.), which consists of the subtraction of the sources outside the target region from the visibility data, phase-shifting to the center of the region, and correcting the LOFAR station beam towards this direction. Residual artifacts are reduced by means of phase and amplitude self-calibration loops in the small region containing the target. The same procedure was adapted with optimized parameters for the LBA data. We set conservative systematic uncertainties of 15% and 20% on 53 MHz (LBA) and 144 MHz (HBA) flux densities, respectively.
uGMRT
We observed A1758 for 20 hr in band 3 (300 − 500 MHz) with the uGMRT. Data were recorded in 2048 frequency channels with an integration time of 4 s in full Stokes mode. The dataset was split into six frequency slices with a bandwidth of 33.3 MHz centered from 317 to 482 MHz that were processed independently using the spam pipeline (Intema et al. 2009). In the final analysis we removed the highest frequency sub-band due to its lower quality and jointly deconvolved the remaining five slices to produce images with a central frequency of 383 MHz. The systematic uncertainty due to residual amplitude errors was set to 15%.
JVLA
The JVLA L-band (1 − 2 GHz) observations consist of two 1.1 hr runs performed with the C and D arrays. Each dataset was reduced following standard procedures, including removal of radio frequency interference and calibration of antenna delays and positions, bandpass, cross-hand delays, and polarization leakage and angle. To optimize the image quality, we performed various iterations of self-calibration to calculate the calibration solutions. First, three rounds of phase and three rounds of amplitude calibration were done on the C and D array data separately. Then, another six rounds of self-calibration were performed on the combined C+D array data to produce the final dataset with central frequency 1.5 GHz. The absolute flux scale calibration error was set to 5%.

Figure 3 (caption): Integrated spectra of the diffuse radio sources (N halo: α = 1.24 ± 0.06; S halo: α = 1.18 ± 0.08; S relic: α = 1.26 ± 0.10; Bridge).
RESULTS
We produced deep images of the A1758 system using the observations listed in Tab. 1. Our LOFAR HBA image in Fig. 1 is a factor of ∼2 deeper than that published in Botteon et al. (2018) as a result of the combination of different observations and recent improvements in the data reduction pipelines. In order to properly study the extended emission from the ICM and determine its flux density, a careful subtraction of the emission from discrete sources embedded in the cluster emission must be performed. For this reason, we followed the approach reported in Botteon et al. (2018), in which different models of point sources are subtracted from the visibilities to assess the accuracy of the procedure. Our discrete-source models were created by making 6 images with inner uv-cuts equally spaced in the range 1.0 − 3.5 kλ (equivalent to 873 − 249 kpc at the redshift of A1758) for the LOFAR HBA and LBA, uGMRT, and JVLA datasets, deploying custom cleaning masks (e.g. Fig. 1). In this phase, multiscale cleaning was switched off to minimize the amount of diffuse emission from the ICM picked up by the deconvolution algorithm. The aforementioned models were subtracted individually from the visibilities, and six images of the diffuse emission were produced with the same WSClean parameters for each dataset. An example of low-resolution images from 53 MHz to 1.5 GHz with comparable restoring beams is reported in Fig. 2. In the following, quoted flux densities at a given frequency represent the median value of the six different source-subtracted images measured in the same regions (the relative standard deviation of the six images is typically 2%). The presence of diffuse radio sources in A1758N and A1758S is confirmed from 53 MHz to 1.5 GHz (Fig. 2). Most remarkably, we clearly observe a bridge of emission connecting the two clusters in the LOFAR image at 144 MHz, where the emission fills the region between A1758N and A1758S and has an integrated flux density of 24.2 ± 4.9 mJy.
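The correspondence between the inner uv-cuts and the quoted physical scales follows from the usual rule of thumb that a uv distance D (in wavelengths) filters out angular scales larger than θ ≈ 1/D radians, converted to kpc with the 4.233 kpc/arcsec scale from Section 1 (a sketch; function and constant names are ours):

```python
KPC_PER_ARCSEC = 4.233    # at z = 0.279 (see Section 1)
ARCSEC_PER_RAD = 206265.0

def uvcut_to_kpc(uv_klambda):
    """Largest angular scale theta ~ 1/(uv distance) [rad], converted to kpc."""
    theta_arcsec = ARCSEC_PER_RAD / (uv_klambda * 1e3)
    return theta_arcsec * KPC_PER_ARCSEC

print(round(uvcut_to_kpc(1.0)), round(uvcut_to_kpc(3.5)))  # 873 249
```

This reproduces the 873 − 249 kpc range quoted for the 1.0 − 3.5 kλ uv-cuts.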
Low-significance patches of emission are observed both at 53 and 383 MHz, the most significant of which is a filamentary structure extending from the halo in A1758N towards A1758S detected above the 3σ level at 53 MHz. This region is encompassed by the white box shown in Fig. 2 (top-left panel), where we measure flux densities of S_53 MHz = 60.2 ± 10.7 mJy and S_144 MHz = 11.6 ± 2.4 mJy, leading to α = 1.65 ± 0.27. By assuming a limit at 3σ on the 53 MHz flux density, the average spectral index over the entire bridge is constrained to α < 1.84, while current uGMRT and JVLA data provide only a loose constraint of α > 0.4.
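The quoted α = 1.65 ± 0.27 follows from the two flux densities via the standard two-point spectral index with propagated flux uncertainties (a sketch; the helper name is ours):

```python
import math

def spectral_index(s1, e1, nu1, s2, e2, nu2):
    """Two-point spectral index alpha for S ~ nu^-alpha, with propagated error."""
    log_nu = math.log(nu2 / nu1)
    alpha = math.log(s1 / s2) / log_nu
    err = math.sqrt((e1 / s1) ** 2 + (e2 / s2) ** 2) / log_nu
    return alpha, err

# Bridge filament: S(53 MHz) = 60.2 +/- 10.7 mJy, S(144 MHz) = 11.6 +/- 2.4 mJy
a, e = spectral_index(60.2, 10.7, 53.0, 11.6, 2.4, 144.0)
print(f"alpha = {a:.2f} +/- {e:.2f}")  # alpha = 1.65 +/- 0.27
```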
The flux densities and integrated spectra of the other diffuse sources in the system measured from the red regions shown in Fig. 1 (which were drawn to roughly follow the 3σ level contour of the LOFAR HBA image in Fig. 2) are reported in Fig. 3. Our measurements agree with the results of Botteon et al. (2018) and Schellenberger et al. (2019).
DISCUSSION
Radio bridges connecting pre-merging clusters are a recent discovery. So far, the cluster pairs A1758N-A1758S (at z = 0.279) and Abell 399-401 (at z = 0.07, Govoni et al. 2019) are the only two cases where a bridge of radio emission between two clusters has been observed. The two systems show remarkable similarities. First of all, each of the two main components of the pairs is a massive cluster, with M_500 ≳ 5 × 10^14 M_⊙ (Planck Collaboration XXVII 2016). Second, both are pairs of dynamically disturbed clusters, with all four clusters undergoing mergers and hosting a radio halo (Murgia et al. 2010; Botteon et al. 2018). Conversely, recent LOFAR HBA observations failed to detect a radio bridge connecting the two clusters in the Lyra complex (Botteon et al. 2019), which are less massive and where only one of the two is a merging system and hosts a radio halo. This may suggest that radio bridges form from the dissipation of energy in dynamically active regions.
According to Brunetti & Vazza (2020), radio bridges may originate from second-order Fermi acceleration of electrons interacting with turbulent motions triggered by the complex dynamics in the overdense region between pre-merging clusters. The observation of infalling sub-groups onto A1758 (Haines et al. 2018; Schellenberger et al. 2019) would be in agreement with this scenario. One of the key predictions of this model is that the emission in radio bridges is volume-filling, especially at low frequencies. In this respect, the detection at 144 MHz combined with the X-ray observations provides important information. In Fig. 4 we compare the X-ray and radio surface brightness of the bridge extracted in 31 regions and along 4 transverse slices, finding a remarkably good correlation between the two. We observe fluctuations of similar magnitude and morphology in radio and X-rays, suggesting that the thermal and non-thermal emissions are connected and originate from similar volumes. This connection is studied for the first time in a radio bridge and is in line with theoretical expectations. The spectrum of radio bridges also provides important information on their origin. Unfortunately, we were able to provide a spectral constraint only in a small region of the bridge, which may not be representative of the overall spectrum of the emission. Deeper observations are required for this kind of study. Based on the red regions in Fig. 1, we assume oblate spheroid geometries for the halos and a cylindrical volume for the bridge to compute the average radio emissivity of the diffuse sources. The mean radio emissivity of the bridge is more than one order of magnitude lower than that of the radio halos in A1758N and A1758S. We note that the emissivity of the bridge in A1758 is also similar (a factor of ∼2 lower) to that reported for the bridge in Abell 399-401 (Govoni et al. 2019), marking another similarity between the two pairs.
Regions between clusters give us the possibility to study the most diluted regions of the ICM that are accessible with current instruments (Vazza et al. 2019). They are dynamically very young, i.e. their dynamical age is comparable to the eddy turnover time of the turbulent eddies generated in these environments. Under these conditions, the media are characterized by a ratio of thermal to magnetic pressure that is reasonably > 100 (in clusters we believe that P_th/P_B ≈ 10 − 100), providing unique laboratories to study magnetic field amplification and particle acceleration in new regimes (see Brunetti & Vazza 2020).
CONCLUSIONS
We have confirmed the presence of a ∼ 2 Mpc radio bridge connecting the two galaxy clusters in A1758. A standalone detection could be claimed only at 144 MHz, where the radio and X-ray emissions are correlated, suggesting that they share similar emitting volumes. Only hints of radio emission are observed at 53 and 383 MHz, making uncertain the determination of its spectral index; deeper observations are required to provide a robust estimate of its value.
Only two giant intra-cluster radio bridges have been detected to date. These are among the largest structures observed in the Universe so far, and their origin is likely related to the turbulence (and shocks) generated in the ICM during the initial stage of the merger, which boost both the radio and X-ray emission between the clusters. These detections demonstrate the existence of non-thermal components at large distances from cluster centers, with important implications for models of magnetic field amplification and particle acceleration in the most diluted regions of the ICM.
ACKNOWLEDGMENTS
ABot, RJvW, and EO acknowledge support from the VIDI research programme with project number 639.042.729, which is financed by the Netherlands Organisation for Scientific Research (NWO). GDG acknowledges support from the ERC Starting Grant ClusterWeb 804208. ABon acknowledges support from the ERC-Stg DRANOEL n. 714245 and from the MIUR FARE grant "SMS". GB, RC, FG, and MR acknowledge support from INAF mainstream project "Galaxy Clusters Science with LOFAR" 1.05.01.86.05. VC acknowledges support from the Alexander von Humboldt Foundation. LOFAR (van Haarlem et al. 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefited from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d'Orléans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland; The Istituto Nazionale di Astrofisica (INAF), Italy. This research made use of the Dutch national e-infrastructure with support of the SURF Cooperative (einfra 180169) and the LOFAR e-infra group. The Jülich LOFAR Long Term Archive and the German LOFAR network are both coordinated and operated by the Jülich Supercomputing Centre (JSC), and computing resources on the supercomputer JUWELS at JSC were provided by the Gauss Centre for Supercomputing e.V. (grant CHTB00) through the John von Neumann Institute for Computing (NIC).
This research made use of the University of Hertfordshire high-performance computing facility and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1], and of the Italian LOFAR IT computing infrastructure supported and operated by INAF, and by the Physics Department of Turin University (under an agreement with Consorzio Interuniversitario per la Fisica Spaziale) at the C3S Supercomputing Centre, Italy. We thank the staff of the GMRT for support. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
Population Transformer: Learning Population-level Representations of Intracranial Activity
We present a self-supervised framework that learns population-level codes for intracranial neural recordings at scale, unlocking the benefits of representation learning for a key neuroscience recording modality. The Population Transformer (PopT) lowers the amount of data required for decoding experiments, while increasing accuracy, even on never-before-seen subjects and tasks. We address two key challenges in developing PopT: sparse electrode distribution and varying electrode location across patients. PopT stacks on top of pretrained representations and enhances downstream tasks by enabling learned aggregation of multiple spatially-sparse data channels. Beyond decoding, we interpret the pretrained PopT and fine-tuned models to show how it can be used to provide neuroscience insights learned from massive amounts of data. We release a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability, and code is available at https://github.com/czlwang/PopulationTransformer.
Introduction
Building effective representations of neural recordings is an important tool in enabling neuroscience research. We are particularly interested in modeling intracranial recordings, which rely on probes placed within the brain to provide high temporal resolution recordings of local neural activity [1,2]. Because of their dispersed placement within the brain volume, intracranial recordings suffer from data sparsity. Moreover, there is often significant variability in probe placement across subjects [1,2], leading to high variability in input channel meaning. Historically, constructing decoders from intracranial data has relied on supervised learning [3,2,4-6], but this requires experimenters to collect annotated data, which is scarce due to patient availability and labor-intensive labeling.
To improve decoding data-efficiency, self-supervised pretraining on unannotated data can be employed to first learn generic representations of the recordings. This means that the model does not have to use valuable annotated samples to learn how to do feature extraction before it can do classification, improving the reach of neuroscientific research.
In this paper, we are interested in developing generic representations of multi-channel intracranial recordings that enable efficient adaptation to a wide range of downstream decoding tasks. Prior work has shown how to pretrain subject-specific [7] or channel-specific [8] models of intracranial data, but such techniques ignore inter-channel relationships or commonalities that might exist across subjects.
Related Work
Self-supervised learning on neural data Channel-independent pretrained models are a popular approach for neural spiking data [11], intracranial brain data [8,12], and general time-series [13]. Additionally, in fixed-channel neural datasets, approaches exist for EEG [14-16], fMRI [17-19], and calcium imaging [20] datasets. However, none of this work learns population-level interactions across datasets with different recording layouts, due either to a single-channel focus or to the assumption of fixed-channel setups. Several works pretrain spatial and temporal dimensions across datasets with variable inputs [21-25], but most learn the temporal embeddings simultaneously with the spatial modeling, which makes them challenging to interpret and computationally expensive to train. As far as we know, we are the first to study the problem of building pretrained channel aggregation models on top of pre-existing temporal embeddings trained across datasets with variable sampling of input channels, allowing for modeling of high-quality (>2 kHz sampling rate) intracranial data.
Modeling across variable input channels Modeling spatial representations on top of temporal embeddings has been found to be beneficial for decoding [3,7,26], but prior works use supervised labels and so do not leverage large amounts of unannotated data. The brain-computer-interface field has been studying how to align latent spaces [27-31], which either still requires creating an alignment matrix to learn across datasets or only provides post-training alignment mechanisms rather than learning across datasets. Other approaches impute missing channels or learn latent spaces robust to missing channels [32-34], but these are more suited to the occasional missing channel than to largely varying sensor layouts. We directly learn spatial-level representations using self-supervised learning across datasets to leverage massive amounts of unannotated intracranial data.
Figure 1: Schematic of our approach. The inputs to our model (a) are the combined neural activities from a collection of intracranial electrodes in a given time interval. These are passed to a frozen temporal embedding model, which produces a set of time-contextual embedding vectors (yellow). The 3D position of each electrode (red) is added to these vectors to produce the model inputs (orange). The PopT produces space-contextual embeddings for each electrode and a [CLS] token (blue), which can be fine-tuned for downstream tasks. During pretraining, (b) the PopT is trained on two objectives simultaneously. In the first, the PopT determines whether two different sets of electrodes (orange vs brown) represent consecutive or non-consecutive times. In the second objective, the PopT must determine whether an input channel has been replaced with activity at a random other time that is inconsistent with the majority of inputs.
Population Transformer Approach
In order to learn a subject-generic model of intracranial activity that can handle arbitrary configurations of electrodes, we design a self-supervised training scheme that requires the model to learn representations of individual electrodes as well as groups of electrodes. One component of our self-supervised loss requires the model to identify which channels have been swapped with activity from the same channel, but at a different time point. To do this task, the model must build a representation of the channel's activity that is sensitive to the context of all the surrounding channels. The other component requires the model to discriminate between randomly selected subsets of electrodes to determine if their activity has occurred consecutively in time or not. This requires the same sensitivity to context, but at the ensemble level. One can think of this swap-and-discriminate objective as exposing the model to many in-silico ablations of the brain, and asking the model to learn the connections between regions in the presence of these ablations.
A key aspect of our method is the fact that our objective is discriminative, rather than reconstructive, as is often the case in self-supervision [35,8]. We found this to be necessary because, in practice, the temporal embeddings often have low effective dimension (see [8]), and reconstruction rewards the model for overfitting to "filler" dimensions in the feature vector (see Section 5).
We take additional steps to make our model subject- and configuration-generic. We provide the absolute position of every electrode to the model, which allows the model to learn a common position embedding space across subjects. We also vary the size of the subsets we select in our sampling procedure to ensure that the model can handle ensembles of differing number, which is important for neuroscience applications, in which experiments have varying numbers of electrodes, and analysis may be done at the electrode, wire, region, or brain level. Finally, we require that the subsets be disjoint, to ensure that the model does not learn to solve the task by trivial copying.
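The sampling procedure described above can be sketched as follows (a hypothetical minimal implementation; the released code may differ in details such as subset-size ranges and the swap fraction, and all names are ours):

```python
import random

def sample_disjoint_subsets(channels, min_size=2):
    """Pick two disjoint, variable-size channel subsets for the group-level objective."""
    shuffled = random.sample(channels, len(channels))
    split = random.randint(min_size, len(channels) - min_size)
    size_a = random.randint(min_size, split)
    size_b = random.randint(min_size, len(channels) - split)
    return shuffled[:size_a], shuffled[split:split + size_b]

def swap_channels(activity_by_time, t, frac=0.1):
    """Replace a fraction of channels at time t with the same channel's activity
    at a random other time. activity_by_time: dict time -> {channel: features}."""
    state = dict(activity_by_time[t])
    channels = list(state)
    swapped = random.sample(channels, max(1, int(frac * len(channels))))
    other_times = [s for s in activity_by_time if s != t]
    for ch in swapped:
        state[ch] = activity_by_time[random.choice(other_times)][ch]
    return state, set(swapped)  # swapped-set provides labels for the channel-wise objective

# Example: 20 channels recorded at 5 time steps
acts = {t: {f"ch{i}": t for i in range(20)} for t in range(5)}
a, b = sample_disjoint_subsets(list(acts[0]))
corrupted, labels = swap_channels(acts, 2)
```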
Architecture A schematic of our Population Transformer (PopT) approach is shown in Figure 1. The input channel activities are first passed through a temporal embedding model T, in our case BrainBERT, to obtain a representation of each channel's temporal context.

Table 1 (caption): Pretraining PopT is critical to downstream decoding performance. We test on a variety of audio-linguistic decoding tasks (see Section 4) with either a single channel (row 1) or 90 channels (rows 2-5) as input. Shown are the ROC-AUC mean and standard error across subjects. All aggregation approaches (rows 2-5) outperform single-channel decoding with BrainBERT [8] (row 1). Pretraining PopT and then fine-tuning it for downstream decoding results in significantly better performance (bold) compared to non-pretrained aggregation approaches (rows 2-4). This gain cannot be explained by simply providing more temporal embeddings, as evidenced by the performance of Linear and Deep NN (rows 2 and 3) that take the concatenated raw temporal embeddings as input. Neither can the gain be attributed to simply using a Transformer architecture, as is shown by a comparison with a non-pretrained PopT (row 4).

Figure 2 (caption): Pretrained PopT downstream performance scales better with ensemble size. Increasing channel ensemble size from 1 to 30 (x-axis), pretrained PopT (green) decoding performance (y-axis) not only beats non-pretrained approaches (orange, purple, pink), but also continually improves more with increasing channel count. Shaded bands show the standard error across subjects.
Before being input to the PopT, each channel's 3D position is added to this embedding, so the final input is X_B = {T(x) + pos(i) + N(0, σ) | x ∈ X}. Here, we add Gaussian fuzzing to prevent overfitting to a particular set of coordinates. Spatial location is given by the electrode's Left, Posterior, and Inferior coordinates [36]; see [8] for details on how these were obtained. Each coordinate is encoded using sinusoidal position encoding [9], and the three encodings are concatenated together to form the position embedding pos(i) = [e_left; e_post; e_inf].
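The per-axis sinusoidal encoding and concatenation pos(i) = [e_left; e_post; e_inf] can be sketched as follows (the per-axis dimensionality and base frequency are illustrative assumptions, not the paper's exact hyperparameters):

```python
import math

def sin_encode(x, dim, base=10000.0):
    """Standard sinusoidal encoding of a scalar coordinate into `dim` values."""
    enc = []
    for k in range(dim // 2):
        freq = base ** (-2 * k / dim)  # geometrically decreasing frequencies
        enc += [math.sin(x * freq), math.cos(x * freq)]
    return enc

def position_embedding(left, posterior, inferior, dim_per_axis=32):
    """pos(i) = [e_left; e_post; e_inf]: concatenated per-axis encodings."""
    return (sin_encode(left, dim_per_axis)
            + sin_encode(posterior, dim_per_axis)
            + sin_encode(inferior, dim_per_axis))

emb = position_embedding(12.5, -40.0, 8.0)
print(len(emb))  # 96
```

In the full model, this vector is added to (or projected to match) the temporal embedding of the corresponding channel.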
The core of the PopT consists of a transformer encoder stack (see Appendix A: Architectures). The outputs of the PopT are spatial-contextual embeddings of the channels Y = {y_i} as well as an embedding of the [CLS] token, y_cls. During pretraining, the PopT is additionally equipped with a linear-layer head for the [CLS] token output and separate linear-layer heads for all other individual token outputs. These produce the scalars ỹ_cls and ỹ_i, respectively, which are used in the objective (see Figure 1a).
Pretraining Our pretraining objective has two components: channel-wise discrimination and next-brain-state discrimination, which is a group-level objective (see Figure 1b). First, we describe the next-brain-state discrimination task. Two different subsets of channels S_A, S_B ⊂ C are chosen with the condition that they be disjoint, S_A ∩ S_B = ∅. During pretraining, the model receives the activities X_A and X_B from these channels at two separate times. The objective of the task is then to determine whether the states X_A and X_B have occurred consecutively in time or are separated by some further, randomly selected interval. Given the output ỹ_cls of the classification head, the objective is the binary cross entropy L_group = −[y log σ(ỹ_cls) + (1 − y) log(1 − σ(ỹ_cls))], where y indicates whether the two states are consecutive.

Next, we describe our channel-wise discriminative learning. The token-level objective is to determine whether a channel's activity has been swapped with activity from a random time. Precisely, activity from each channel i is drawn from a time t_i. All channels are drawn from the same time t_i = T, and then 10% of the channels are randomly selected to have their activity replaced with activity from the same channel, but taken from a random point in time t_i ≠ T. Then, given the token outputs ỹ_i of the Population Transformer, the objective function is the per-channel binary cross entropy, averaged over channels: L_chan = −(1/|C|) Σ_i [y_i log σ(ỹ_i) + (1 − y_i) log(1 − σ(ỹ_i))], where y_i indicates whether channel i was swapped.

Fine-tuning During fine-tuning, the [CLS] intermediate representation of the pretrained PopT is passed through a single-layer linear neural network to produce a scalar ŷ_cls. This scalar is the input to the binary cross entropy loss for our decoding tasks (see Section 4).
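A minimal numeric sketch of the two binary cross-entropy terms (the equal weighting of the group-level and channel-wise terms, and all helper names, are our assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(logit, label):
    """Binary cross entropy between sigmoid(logit) and a 0/1 label."""
    p = sigmoid(logit)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def pretraining_loss(cls_logit, consecutive, channel_logits, swapped_labels):
    """Group-level ([CLS]) term plus mean channel-wise discrimination term."""
    group = bce(cls_logit, consecutive)
    chan = sum(bce(l, y) for l, y in zip(channel_logits, swapped_labels)) / len(channel_logits)
    return group + chan

# A confident, correct model drives both terms toward zero:
print(round(pretraining_loss(5.0, 1, [-5.0, 5.0], [0, 1]), 4))  # 0.0134
```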
Experiment Setup
Data We use the publicly available subject data from [8]. Data was collected from 10 subjects (total 1,688 electrodes, with a mean of 167 electrodes per subject) who watched 26 movies while intracranial probes recorded their brain activity. The movie transcripts were aligned to the brain activity so that features such as volume, pitch, etc. could be associated with the corresponding sEEG readings. 19 of the sessions are used for pretraining. 7 of the sessions are held out for evaluation.
Decoding We evaluate the effectiveness of our pretrained PopT model by fine-tuning it on the four downstream decoding tasks used in the evaluation of [8]. Two of the tasks are audio focused: determining whether a word is spoken with a high or low pitch, and determining whether a word is spoken loudly or softly. The other two tasks have a more linguistic focus: determining whether the beginning of a sentence is occurring, and determining whether any speech at all is occurring.
Our approach enables decoding on an ensemble of any size. We verify that our model is able to leverage additional channels for decoding performance that improves as the number of inputs scales. To test this, we first order the electrodes by their individual linear decodability per task, and we increase the number of channels available to the model at fine-tuning time.
Baselines We want to determine whether the information about spatial relationships learned during pretraining is useful at fine-tuning time. For comparison, we concatenate the raw BrainBERT embeddings and train a linear and a deep NN on the decoding task. This sets a baseline for how much improvement is achievable from simply having more channels available at once. To determine whether our performance can be attributed to using a more powerful architecture, we also fine-tune a PopT without pretraining, i.e. with randomly initialized weights.
Results
Decoding performance Compared to decoding from the bare BrainBERT embeddings or from a non-pretrained PopT, the pretrained PopT both achieves better decoding performance (see Table 1) and does so with steeper scaling per added channel (Figure 2).
To verify that the weights of the pretrained PopT capture neural processing well even without fine-tuning, we also train a linear encoder on top of the frozen PopT [CLS] token and find the same trends (Figure 10: Frozen scaling). This point in particular is important in building confidence in the results of our interpretability studies (see Section 6), in which we use the frozen pretrained weights to analyze connectivity.
Sample and compute efficiency Our PopT learns spatial relationships between channels in a way that makes downstream supervised learning more data and compute efficient (see Figure 3 and Figure 4). Compared to the non-pretrained baseline models, fine-tuning the pretrained PopT achieves better decoding performance from fewer samples. At only 200 examples, the pretrained PopT has already surpassed the performance achieved by the non-pretrained model on the full dataset for the volume, sentence onset, and speech vs. non-speech tasks (Figure 3). The number of steps required for each model to converge is also greatly reduced by starting with the pretrained PopT. We see that fine-tuning the pretrained PopT consistently requires 500 or fewer steps to reach its converged performance (Figure 4), whereas the non-pretrained baselines may require 2k or more steps.
Generalizability To test whether our pretrained weights will be useful for subjects not seen during training, we conduct a hold-one-out analysis. We pretrain a model using all subjects except one, and then fine-tune and evaluate the model on the held-out subject. We find that excluding a subject from pretraining does not significantly affect the downstream results (see Figure 5). This raises our confidence that the pretrained weights will be useful for unseen subjects and for researchers using new data.
Scaling with number of pretraining subjects
To investigate the effect of scaling pretraining data on our model, we pretrain additional versions of PopT using only 1, 2, or 3 subjects. We find a consistent improvement in downstream decoding when we increase the number of pretraining subjects, across all our downstream decoding tasks (Figure 6). A significant improvement is already found with just 1 pretraining subject, potentially due to adaptation to the temporal embeddings used. The decoding performance using all our pretraining data is significantly higher in most decoding tasks than with just 1 or 2 subjects in the pretraining data, suggesting the potential for our framework to continue scaling with more subjects.
Ablation of loss components and position information An ablation study confirms that both the network-wise and channel-wise components of the pretraining objective contribute to the downstream performance (Table 2). We also find that including the 3D position information for each channel is critical for decoding. These findings also hold when the PopT is kept frozen during fine-tuning (see Appendix G: Frozen ablation, Table 4). Additionally, we find that the discriminative nature of our loss is necessary for decoding. Attempting to add an L1 reconstruction term to our pretraining objective results in poorer performance, perhaps because the model learns to overfit on low-entropy features in the embedding. Our discriminative loss requires the model to understand the embeddings in terms of how they can be distinguished from one another, which leads the model to extract more informative representations.

Figure 5: Gains in decoding performance are available to new subjects. To test whether our pretrained PopT weights will be able to yield decoding benefits for unseen subjects, we run a hold-one-out analysis in which we exclude one subject from pretraining and then evaluate on that subject during fine-tuning (Held-out). We compare this with our full PopT model that has seen all subjects during pretraining (All). A minimal decrease in downstream decoding performance is found if the subject is held out from pretraining (Held-out vs. All). This is in stark contrast to the achievable downstream performance with a non-pretrained PopT (Non-pretrained PopT).
Figure 6: Pretraining with more subjects leads to better downstream performance. We pretrain PopT with different numbers of subjects (colors) and test on our decoding tasks (x-axis). Bars indicate mean and standard error of performance across channel ensembles of 5-30 on test subject 3. Model descriptions: 0 subjects (non-pretrained), 1 subject (pretrain w/ subject 4), 2 subjects (pretrain w/ subjects 4, 8), 3 subjects (pretrain w/ subjects 4, 8, 10), All subjects (pretrain w/ all 10 subjects). Pretraining with one subject gives a considerable benefit compared to no pretraining (red to yellow), but the addition of more subjects to pretraining consistently improves performance (yellow → green).

Table 2: PopT ablation study. We individually ablate our losses and positional encodings during pretraining, then decode on the resulting models. Shown are ROC-AUC mean and standard error across subjects. The best-performing model across all decoding tasks uses all three of our proposed components, showing that they are all necessary. Removing our positional encoding during pretraining and fine-tuning drops the performance the most, indicating that position encoding is highly important for achieving good decoding. Additionally, we attempt adding a reconstruction component to the loss as a regularizing term, but find that this leads to poorer performance (see Section 5).
Interpreting Learned Weights
Our final analyses are two interpretability studies of the Population Transformer's learned weights. In the first, we use the PopT weights to uncover connectivity maps of the channels, and in the second, we use the attention weights of the fine-tuned PopT to identify candidate functional brain regions per decoding task.

Figure 7: Probing the pretrained model for inter-channel connectivity. Traditionally, connectivity analysis between regions is done by computing the coherence [37], i.e. cross-correlation, between electrode activity (left). We propose an alternative analysis based on how channels matter to each other in the context of our pretraining objective. Iteratively, we select an electrode, mask out its activity, and then plot the degradation in the channel-wise objective function of the pretrained PopT for the unmasked electrodes. Plotting the values of this delta (right) recovers the main points of connectivity, purely based off of the relationships learned during pretraining. Shown here is a plot for a single subject; plots for all test subjects can be seen in Appendix E: Connectivity.
Connectivity For identifying connectivity per region, traditional neuroscience analyses typically use cross-correlation as a measure of channel connectivity [37]. Our PopT allows for an alternative method of determining connectivity, based on the degree to which channels are sensitive to each other's context. We go through our channels, masking one channel at a time and then evaluating the model's performance on the pretraining channel-wise objective for the remaining unmasked channels. We take the degradation in performance as a measure of connectivity. We can construct plots, as in Figure 7, that recapitulate the strongest connectivity of the cross-correlation maps. Note that while some approaches for modelling brain activity explicitly build this into their architecture [25], we recover these connections purely as a result of our self-supervised learning.
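The masking procedure can be sketched as follows (a toy illustration; `channel_loss_fn` is an assumed stand-in for evaluating the pretrained PopT's per-channel objective on an interval of activity):

```python
import numpy as np

def connectivity_delta(channel_loss_fn, activity, mask_idx):
    """Degradation in the channel-wise loss on the unmasked channels when
    one channel is zero-masked; a larger delta indicates a stronger learned
    dependence on the masked channel.

    channel_loss_fn(activity) -> per-channel loss vector (assumed interface).
    """
    base = channel_loss_fn(activity)
    masked = activity.copy()
    masked[mask_idx] = 0.0                      # mask out the probed channel
    perturbed = channel_loss_fn(masked)
    keep = np.arange(len(base)) != mask_idx     # score the remaining channels only
    return float(np.mean(perturbed[keep] - base[keep]))
```

Averaging these deltas within anatomical regions then yields the region-level connectivity plot described in Appendix D.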
Candidate functional brain regions from attention weights Next, we discuss the possibility of uncovering functional brain regions from the attention weights. After fine-tuning our weights on a decoding task, we can examine the attention weights at the [CLS] output for candidate functional brain regions. We obtain a normalized Scaled Attention Weight metric across all subjects to be able to analyze candidate functional brain regions across sparsely sampled subject datasets (Figure 8). The Scaled Attention Weight is computed from the raw attention weights at the [CLS] token passed through the attention rollout algorithm [38]. The resulting weights for each channel are then grouped by brain region according to the Destrieux layout [39]. Additional details are available in Appendix D.
The resulting weights reveal expected functional brain regions related to the decoded tasks (Figure 8). For our low-level perceptual auditory tasks (Volume and Pitch), we see that our model learns to attend to the primary auditory cortex. For our higher-level language distinction tasks (Speech vs. Non-speech and Sentence onset), we see that higher attention is placed on language areas like Wernicke's area. Given the massive pretraining PopT undergoes, these scaled attention weights provide a valuable new tool for discovering candidate functional brain regions.
Discussion
We presented a self-supervised learning scheme for learning effective representations of intracranial activity from temporal embeddings. We find that pretraining the PopT results in better channel efficiency at fine-tuning time. This can reduce the number of electrodes needed in future experiments, which is critical for an invasive recording modality such as sEEG. We showed that self-supervised pretraining imbues our model with knowledge of spatial relationships between these embeddings and improves downstream decoding in a way that scales with the number of available channels. As an aside, we note that the tasks we evaluate necessitate wide coverage of the brain. This is evidenced by the fact that performance scales with the number of input channels. With future collection of high-quality intracranial data, we can continue scaling PopT and uncover exciting new data-driven findings for neuroscience.

Figure 8: Attention weights from a fine-tuned PopT identify candidate functional brain regions. Candidate functional maps can be read from the attention weights of a PopT fine-tuned on our decoding tasks. For the Volume and Pitch tasks, note the weight placed on the primary auditory cortex (black arrows), but not on Wernicke's area. For the Speech vs. Non-speech and Sentence onset tasks, note the weight placed on regions near Wernicke's area (black arrows). The center brain figure highlights regions related to auditory-linguistic processing, such as the language production area (Broca's area), the language understanding area (Wernicke's area), and the primary auditory cortex (adapted from [40]).
By decoupling temporal and spatial feature extraction, we are able to leverage existing temporal embeddings to learn spatiotemporal representations efficiently and with a smaller number of parameters. Our approach also leaves open the possibility of independent improvement in temporal modeling. If future approaches introduce better time-series representations, our approach will be able to incorporate these advantages directly. Finally, we note that our method can serve more generally as a representation learning approach for any ensemble of sparsely distributed time-series data channels.
Limitations and Future Work
As far as we know, no large public sEEG datasets of the same level of quality as ours (2048 Hz sampling rate, aligned electrode coordinates, multimodal stimulus) are available, so direct comparison with existing approaches is difficult. Additionally, existing sEEG test datasets that have been used by existing deep learning models [21] focus on artifact and seizure detection tasks [41], which are less interesting at a network level due to their dependence on human labeling of the time-series sEEG data [42].
Given the high sampling rate of our sEEG data (10x that of prior work [21, 25]), training an end-to-end spatio-temporal model on our data would not have been computationally feasible, which speaks to the benefits of learning spatial representations on top of learned temporal embeddings. With the development and acquisition of more compute resources, comparing our approach with end-to-end approaches would be valuable future work.
Conclusion
We introduced a pretraining method for learning representations of arbitrary ensembles of intracranial electrodes. We showed that our pretraining produced considerable improvements in downstream decoding that would not have been possible without the knowledge of spatial relationships learned during the self-supervised pretraining stage. We showed that this scheme produces interpretable weights from which connectivity maps and candidate functional brain regions can be read. Finally, we release the pretrained weights for our PopT with BrainBERT inputs, as well as our code for plug-and-play pretraining with any temporal embedding (see attached supplemental materials).
A Architectures and training
Pretrained PopT The core Population Transformer consists of a transformer encoder stack with 6 layers and 8 heads. All layers (N = 6) in the encoder stack are set with the following parameters: d_h = 512, H = 8, and p_dropout = 0.1. We pretrain the PopT model with the LAMB optimizer [43] (lr = 1e-4), with a batch size of n_batch = 256 and a train/val/test split of 0.98/0.01/0.01 of the data. We pretrain for 500,000 steps and record the validation performance every 1,000 steps. Downstream evaluation takes place on the weights with the best validation performance. We use the intermediate representation at the [CLS] token (d_h = 512) and add a linear layer that outputs to d_out = 1 for fine-tuning on downstream tasks. These pretraining parameters were the same for any PopT that needed to be pretrained (hold-one-out subject, subject subsets, ablation studies).
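For reference, the reported hyperparameters can be collected in a small config object (field names are illustrative, not from the released code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PopTPretrainConfig:
    """Pretraining hyperparameters as reported above."""
    n_layers: int = 6          # N, encoder stack depth
    d_h: int = 512             # hidden dimension
    n_heads: int = 8           # H
    dropout: float = 0.1       # p_dropout
    lr: float = 1e-4           # LAMB learning rate
    batch_size: int = 256      # n_batch
    split: tuple = (0.98, 0.01, 0.01)  # train/val/test
    steps: int = 500_000
    val_every: int = 1_000

def head_dim(cfg: PopTPretrainConfig) -> int:
    """Per-head dimension implied by d_h and the number of heads."""
    assert cfg.d_h % cfg.n_heads == 0
    return cfg.d_h // cfg.n_heads
```

With d_h = 512 and 8 heads, each attention head operates on a 64-dimensional subspace.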
Non-pretrained PopT
The architecture for the non-pretrained PopT is the same as for the pretrained PopT (above). However, no pretraining is done, and the weights are randomly initialized with the default initializations.
Linear The linear baseline consists of a single linear layer that outputs to d_out = 1. The inputs are the flattened and concatenated BrainBERT embeddings (d_emb = 756) from a subset of channels S ⊂ C. Thus, the full input dimension is |S| × d_emb.

Deep NN The inputs are the same as above, but the decoding network now consists of 5 stacked linear layers, each with d_h = 512 and a GeLU activation.
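A minimal sketch of the linear baseline's input handling (assuming d_emb = 756 as stated above; helper names are illustrative):

```python
import numpy as np

def flattened_input_dim(n_channels: int, d_emb: int = 756) -> int:
    """Input size of the baselines: |S| concatenated per-channel embeddings."""
    return n_channels * d_emb

def linear_baseline_logit(embeddings, w, b=0.0):
    """Single linear layer applied to flattened, concatenated embeddings.

    embeddings: (n_channels, d_emb) array of per-channel BrainBERT embeddings.
    """
    x = np.asarray(embeddings, dtype=float).reshape(-1)  # concatenate channels
    return float(x @ np.asarray(w, dtype=float) + b)
```

Because the input dimension grows with |S|, these baselines must be retrained for each ensemble size, unlike the PopT, which accepts arbitrary ensembles.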
Downstream Training
For both PopT models, we train with these parameters: AdamW optimizer [44], lr = 5e-4 with the transformer weights scaled down by a factor of 10 (lr_t = 5e-5), n_batch = 256, and a Ramp Up scheduler [45] with warmup 0.025 and Step LR gamma 0.99, reducing 100 times within the 2,000 total steps that we train for. For the Linear and Deep NN models, we train with these parameters: AdamW optimizer [44], lr = 5e-4, n_batch = 256, and a Ramp Up scheduler [45] with warmup 0.025 and Step LR gamma 0.95, reducing 25 times within the 17,000 total steps we train for. For all downstream decoding, we use a fixed train/val/test split of 0.8/0.1/0.1 of the data.
Compute Resources
To run all our experiments (data processing, pretraining, evaluations, interpretability), one only needs 1 NVIDIA Titan RTX (24 GB GPU RAM) and up to 80 CPU cores (700 GB memory) if running sequentially. Pretraining PopT takes 4 days on 1 GPU. Each of our downstream evaluations takes a few minutes to run. For the purposes of data processing and gathering all the results in the paper, we parallelized the experiments on roughly 8 GPUs and 80 CPU cores.
B Decoding tasks
We follow the same task specification as in Wang et al. [8], with the modification that the pitch and volume examples are determined by percentile (see below) rather than standard deviation in order to obtain balanced classes [8]. The number of uncorrupted electrodes that can be Laplacian re-referenced is shown in the second column of Table 3. The average amount of recording data per subject is 4.3 hrs.
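The percentile-based labeling can be sketched as follows (a minimal illustration; `q=75` corresponds to the top-quartile positives described in the task definitions below):

```python
import numpy as np

def percentile_split(values, q=75):
    """Label examples by percentile within a session: True (positive) for
    values at or above the q-th percentile, False (negative) otherwise."""
    values = np.asarray(values, dtype=float)
    cut = np.percentile(values, q)
    return values >= cut
```

Applied to per-word mean pitch or mean RMS volume, this yields the positive/negative word sets for the two audio tasks.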
D Interpretation Methods
Connectivity analysis We start with a pretrained PopT. To test a particular channel's contribution to connectivity, we mask it with all zeros. Then, we consider the remaining unmasked channels and ask: how much of an increase do we see in the pretraining channel-wise loss? Recall that this objective is to determine whether or not a channel has had its inputs swapped with random activity. If the increase in loss is large, then we infer that the masked channel provided important context for this task. Using this delta as a measure of connectivity, we can then average across regions, as provided by the Desikan-Killiany atlas [46], and produce a plot using mne-connectivity [47].
Scaled Attention Weight First, we obtain an attention weight matrix across all trials which includes weights between all tokens. Then, we perform attention rollout [38] across layers to obtain the contributions of each input channel at the last layer. We take the resulting last layer of rollout weights for all channels, where the target is the [CLS] token, normalize within subject, and scale by ROC-AUC to obtain the Scaled Attention Weight per channel. Finally, we plot the 75th-percentile weight per region, as mapped by the Destrieux atlas [39], using Nilearn [48].
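The rollout computation can be sketched in NumPy (a simplified version of Abnar & Zuidema's attention rollout [38]; the within-subject normalization and ROC-AUC scaling described above are omitted here):

```python
import numpy as np

def attention_rollout(layer_attns):
    """Attention rollout: average over heads, add the identity for the
    residual connection, renormalize rows, and multiply layer by layer.

    layer_attns: list of (n_heads, n_tokens, n_tokens) matrices, in
    input-to-output layer order.
    """
    rollout = None
    for attn in layer_attns:
        a = attn.mean(axis=0)                     # average over heads
        a = a + np.eye(a.shape[0])                # account for residual path
        a = a / a.sum(axis=-1, keepdims=True)     # renormalize rows
        rollout = a if rollout is None else a @ rollout
    return rollout

def cls_channel_weights(layer_attns, cls_index=0):
    """Contribution of each input channel token to the [CLS] output."""
    w = attention_rollout(layer_attns)[cls_index]
    w = np.delete(w, cls_index)                   # drop the CLS self-weight
    return w / w.sum()
```

The resulting per-channel weights are then normalized within subject and scaled by the task's ROC-AUC before being grouped by region.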
E Connectivity
Figure 9: Full connectivity for our 7 test subjects. We compare between traditional connectivity analysis performed via coherence (top row in each section) and the analysis based on our PopT pretrained weights (bottom row in each section). We note that our analysis usually recovers the strongest points of connectivity from the traditional analysis. Coherence was computed using SciPy's [49] signal.coherence.

Table 4: An ablation study of the components of our approach for the frozen PopT. During pretraining, we alternate using either only the CLS or only the token contrastive component of the loss. We fine-tune these weights on all subjects. We find that both components contribute to the full model's performance.
Figure 3: Pretrained PopT is more sample efficient when fine-tuning. Varying the number of samples available to each model at train time (x-axis), we see how the pretrained PopT is highly sample efficient, requiring only a fraction of the samples to reach the full performance level of non-pretrained aggregation approaches (dashed lines). Bands show standard error across test subjects. Stars indicate performance of the model trained on the full fine-tuning dataset.
Figure 4: Pretrained PopT is consistently more compute efficient when fine-tuning. Number of steps required for each model to reach final performance during fine-tuning (dashed lines). We find that the pretrained PopT consistently requires fewer than 750 steps (each step is training on a batch of size 256) to converge, in contrast to the 2k steps required for the non-pretrained PopT. Linear aggregation can be similarly compute efficient, but occasionally benefits from more training steps depending on dataset size (Speech vs. Non-speech). Bands show standard error across test subjects. Stars indicate fully trained performance.
Figure 10: Pretraining is critical to frozen PopT performance that scales with the number of channels. As in Figure 2, we see that pretraining results in better downstream decoding and better scaling with the number of added channels. However, unlike in Figure 2, the PopT weights are frozen during fine-tuning, and only the linear classification head is updated. Bands show standard error across subjects. Results are shown for a frozen PopT with BrainBERT inputs.
Consider a given subject with N_c channels indexed by C = {1, ..., N_c}. Activity from channel i at time t can be denoted by x_i^t. The PopT takes as input an interval of brain activity X = {x_i^t | i ∈ C} from a given time t and a special [CLS] token. Per channel, each interval of brain activity is passed
Table 3:
Pitch The PopT receives an interval of activity and must determine whether it corresponds with a high- or low-pitch word being spoken. For the duration of a given word, pitch was extracted using Librosa's piptrack function over a Mel-spectrogram (sampling rate 48,000 Hz, FFT window length of 2048, hop length of 512, and 128 mel filters). For this task, for a given session, positive examples consist of words in the top quartile of mean pitch and negative examples are the words in the bottom quartiles.

Volume The volume of a given word was computed as the average root-mean-square (RMS) intensity (rms function, frame and hop lengths 2048 and 512, respectively). As before, positive examples are the words in the top quartile of volume and negative examples are those in the bottom quartiles.

Speech vs. non-speech Positive examples are intervals of brain activity that correspond with dialogue being spoken in the stimulus movie. Negative examples are intervals of activity from 1 s periods during which no speech is occurring in the movie.

Sentence onset Negative examples are as before. Positive examples are intervals of brain activity that correspond with hearing the first word of a sentence.

Subject statistics Subjects used in PopT training and held-out downstream evaluation. Table taken from [8].
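The volume computation can be approximated without Librosa as follows (a plain-NumPy stand-in for `librosa.feature.rms` with the stated frame and hop lengths; it omits Librosa's signal padding, so frame counts differ slightly at the edges):

```python
import numpy as np

def framewise_rms(y, frame_length=2048, hop_length=512):
    """Framewise root-mean-square intensity over frames fully inside the signal."""
    n = 1 + (len(y) - frame_length) // hop_length
    frames = np.stack([y[i * hop_length : i * hop_length + frame_length]
                       for i in range(n)])
    return np.sqrt((frames ** 2).mean(axis=1))

def word_volume(y, frame_length=2048, hop_length=512):
    """A word's volume: the mean RMS intensity over its duration."""
    return float(framewise_rms(y, frame_length, hop_length).mean())
```

Per-word volumes computed this way are then split by quartile, as described above, to form the positive and negative classes.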
"year": 2024,
"sha1": "d10a77035efb0d3cb140f93e815bdb4475c05ffa",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "ArXiv",
"pdf_hash": "d10a77035efb0d3cb140f93e815bdb4475c05ffa",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Biology"
]
} |
Impact of perioperative blood transfusion on long-term survival in patients with different stages of perihilar cholangiocarcinoma treated with curative resection: A multicentre propensity score matching study
Background & aim The association of perioperative blood transfusion (PBT) with long-term survival in perihilar cholangiocarcinoma (pCCA) patients after surgical resection with curative intent is controversial and may differ among different stages of the disease. This study aimed to investigate the impact of PBT on long-term survival of patients with different stages of pCCA. Methods Consecutive pCCA patients from three hospitals treated with curative resection from 2012 to 2019 were enrolled and divided into the PBT and non-PBT groups. Propensity score matching (PSM) was used to balance differences in baseline characteristics between the PBT and non-PBT groups. Kaplan-Meier curves and the log-rank test were used to compare overall survival (OS) and recurrence-free survival (RFS) between patients with all tumor stages, early stage (8th AJCC stage I), and non-early stage (8th AJCC stage II-IV) pCCA in the PBT and non-PBT groups. Cox regression analysis was used to determine the impact of PBT on OS and RFS of these patients. Results 302 pCCA patients treated with curative resection were enrolled into this study. Before PSM, 68 patients (22 patients in the PBT group) were in the early stage and 234 patients (108 patients in the PBT group) were in the non-early stage. Patients with early stage pCCA in the PBT group had significantly lower OS and RFS rates than those in the non-PBT group. However, there were no significant differences between the 2 groups for patients with all tumor stages or non-early stage pCCA. After PSM, there were 18 matched pairs of patients with early stage and 72 matched pairs of patients with non-early stage disease. Similar results were obtained in the pre- and post-PSM cohorts: patients with early stage pCCA in the PBT group showed significantly lower OS and RFS rates than those in the non-PBT group, but there were no significant differences between the 2 groups for patients with all tumor stages or non-early stage pCCA.
Cox regression analysis demonstrated that PBT was independently associated with worse OS and RFS for patients with early stage pCCA. Conclusions PBT had a negative impact on long-term survival in patients with early stage pCCA after curative resection, but not in patients with non-early stage pCCA.
Introduction
Cholangiocarcinoma accounts for 3% of all gastrointestinal tumors and represents 10~25% of all primary hepatic malignancies globally (1, 2). Perihilar cholangiocarcinoma (pCCA) is the most common type of cholangiocarcinoma, accounting for approximately 60% of these cases (3). The only treatment that can result in long-term survival for patients with pCCA is curative resection (4, 5). However, the complicated nature of the surgical procedure, which includes bile duct resection and reconstruction, hepatectomy, perihilar dissection, and vascular resection and reconstruction if necessary, as well as coagulopathy due to preoperative jaundice, makes intraoperative bleeding and the need for perioperative blood transfusion extremely likely (6).
Perioperative blood transfusion (PBT) plays an essential role in the perioperative safety of pCCA patients. However, the impact of PBT on long-term survival in pCCA patients treated with curative resection has been controversial. Müller et al. indicated that allogeneic blood transfusion did not affect long-term survival after curative resection for advanced cholangiocarcinoma (7). However, Kimura et al. indicated that PBT was a poor prognostic factor for hilar cholangiocarcinoma treated with curative resection (8). Although both studies focused on long-term survival in cholangiocarcinoma patients following curative resection, they reached completely different conclusions. In fact, allogeneic blood transfusion has been demonstrated to have immunosuppressive effects, which are associated with a higher chance of tumor recurrence and a poor long-term prognosis in patients with malignancies (9, 10). There are two possible explanations for the different results of the two studies mentioned above. First, both were single-centre studies with small sample sizes, and their results therefore provide a low level of medical evidence. Second, conclusions drawn from a total cohort may not apply to an individual patient, as the patients had tumors of different stages. Previous studies on hepatocellular carcinoma showed PBT to have different impacts on long-term survival at different tumor stages (11, 12). However, the impact of PBT on long-term survival has not been studied in patients with different stages of pCCA.
Ethical reasons do not allow clinical researchers to conduct a randomized controlled trial on PBT. To improve the level of medical evidence, 302 patients from 3 institutions were identified from a multicentre database for inclusion in this first study using propensity score matching (PSM) analysis to investigate the impact of PBT on long-term survival in patients with different stages of pCCA treated with curative resection.
Methods

Patients
From February 2012 to February 2019, consecutive pCCA patients treated with curative resection at three hospitals (Southwest Hospital, Sichuan Provincial People's Hospital, Jiulongpo District Second People's Hospital) were enrolled in this study. Tumors originating from the common hepatic duct, the junction of the common hepatic duct, and the left/right first-order hepatic ducts were all grouped as pCCA. All diagnoses were confirmed by postoperative histopathology. The exclusion criteria were patients with: (1) recurrent pCCA; (2) loss to follow-up; (3) lack of data for essential variables; and (4) death within 30 days after curative resection. This study complied with the Declaration of Helsinki and was approved by the Ethics Committees of the 3 participating hospitals. Due to its retrospective nature and because all data were deidentified, informed consent was exempted.
Surgical procedure
Curative resection was defined as resection resulting in microscopically clear margins. Curative resection included bile duct resection, biliary reconstruction, hepatectomy, lymph node dissection, and vascular reconstruction for vascular invasion as previously reported (13)(14)(15). Curative resection was performed by experienced surgeons in hepatobiliary surgery in the 3 institutions.
Patients were divided into two groups according to the upper or lower limits of normal for each preoperative laboratory variable. Specifically, the following thresholds were employed: ALT and AST: 40 U/L, INR: 1.15, ALB: 35 g/L, HGB: 120 g/L, and CA 19-9: 37 U/mL (13, 14, 17). All postoperative histopathological variables were confirmed by postoperative histopathological examination of tumor or nontumor tissues. Preoperative jaundice was defined as a preoperative total bilirubin higher than 37 μmol/L. Extent of hepatectomy was divided into major hepatectomy (three or more resected Couinaud liver segments) and minor hepatectomy (two or fewer resected Couinaud liver segments). In previous studies, pCCA patients with a tumor size > 3 cm showed poor long-term survival (13, 14). As a consequence, 3 cm was used to divide patients into 2 groups. Both portal vein invasion and hepatic artery invasion were considered macrovascular invasion.
Perioperative blood transfusion
PBT was defined as transfusion of whole blood and/or packed red blood cells (PRBCs) either during surgery or within 7 days of surgery as determined from the surgical and postoperative medical records. PBT excluded autologous blood, allogeneic platelets, fresh frozen plasma, and cryoprecipitate. The need for intraoperative blood transfusions was determined by excessive intraoperative blood loss and/or hemodynamic instability. Postoperative blood transfusions were administered if the patient's hemoglobin level was below 70 g/L or the patient was hemodynamically unstable. Two units were the standard for transfusion (one unit of PRBCs refers to the red blood cells isolated from 200 ml of whole blood).
Survival outcomes and follow-up
The main outcomes were overall survival (OS) and recurrence-free survival (RFS). OS was defined as the interval from curative resection to death or the last follow-up. RFS was defined, for patients with recurrence, as the interval from curative resection to recurrence, and for patients without recurrence, as the interval from curative resection to death or last follow-up. This study was censored on February 28, 2022. After discharge from hospital, patients were followed up once every 1-2 months for the first 2 years after curative resection, once every 3-4 months during years 3-5, and then once every 6 months thereafter.
Contrast-enhanced ultrasonography, contrast-enhanced computed tomography, and/or magnetic resonance cholangiopancreatography were performed at each follow-up. Conservative therapy, systemic chemotherapy, or repeat surgical resection was performed if patients were confirmed to have relapsed.
Statistical analysis
Continuous variables with normal distributions were presented as means and standard deviations (SDs) and were compared using Student's t test, whereas continuous variables with non-normal distributions were presented as medians with interquartile ranges (IQRs) and were compared using the Mann-Whitney U test. Categorical variables were presented as frequencies and percentages and were compared using Pearson's chi-square test. All patients were divided into two groups according to whether PBT was given. All the baseline characteristics of the two groups were compared. To overcome the influence of selection bias, PSM was used to balance the differences in baseline characteristics between the PBT and non-PBT groups. The propensity score integrates all observed variable information in order to balance variables and reduce bias. Potential variables which might affect PBT were included in the propensity model, including preoperative jaundice, ASA grade, INR, ALB, HGB, tumor size, cirrhosis, and extent of hepatectomy. Propensity scores for pCCA patients who received PBT or not were created using logistic regression estimation. A one-to-one match between the two groups was then performed using the nearest-neighbor matching method with a caliper width equal to 0.2 of the standard deviation of the logit of the propensity score. Kaplan-Meier curves were used to calculate the OS and RFS rates of patients, and the log-rank test was used for comparisons. Variables with a significance level of P < 0.1 in univariate analysis were included in multivariate analysis using the Cox regression model to determine independent predictors of OS and RFS. In addition, using the 8th AJCC staging system, all patients were divided into the early stage (AJCC stage I) group and the non-early stage (AJCC stage II-IV) group. Subgroup analysis was used to investigate the impact of PBT on OS and RFS for patients with different tumor stages.
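The matching step can be sketched as follows (a simplified, greedy illustration of 1:1 nearest-neighbor matching with the 0.2-SD logit caliper; in the study the propensity scores come from logistic regression, which is omitted here):

```python
import numpy as np

def caliper_match(ps_treated, ps_control, caliper_sd=0.2):
    """Greedy 1:1 nearest-neighbor matching on the logit of the propensity
    score, with a caliper of caliper_sd x SD of the pooled logit scores.

    ps_treated, ps_control: propensity scores in (0, 1) for the two groups.
    Returns a list of (treated_index, control_index) matched pairs.
    """
    logit = lambda p: np.log(p / (1 - p))
    t = logit(np.asarray(ps_treated, dtype=float))
    c = logit(np.asarray(ps_control, dtype=float))
    caliper = caliper_sd * np.concatenate([t, c]).std()
    pairs, used = [], set()
    for i, ti in enumerate(t):
        dists = np.abs(c - ti)
        for j in np.argsort(dists):               # nearest unused control first
            if int(j) not in used and dists[j] <= caliper:
                pairs.append((i, int(j)))
                used.add(int(j))
                break
    return pairs
```

Treated patients with no control within the caliper remain unmatched, which is how PSM reduced the cohort to 90, 18, and 72 matched pairs in the respective analyses.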
SPSS ® version 26.0 (IBM, Armonk, New York, United States) was used for all statistical analyses. A P value (two-sided) < 0.05 was considered statistically significant.
Characteristics of all pCCA patients
Of 364 pCCA patients treated with curative resection during the study period, 62 patients were excluded according to the exclusion criteria, resulting in 302 pCCA patients being included in this study (Supplement Figure 1). There were 198 (65.6%) males, and 125 (41.4%) patients were more than 60 years old. The median follow-up time was 22.5 months. The PBT group had 130 patients (43.0%), and the non-PBT group had 172 patients (57.0%). Before PSM, baseline characteristics showed the PBT group to have significantly more patients with preoperative jaundice, ASA grade > II, INR > 1.15, ALB < 35 g/L, HGB < 120 g/L, tumor size > 3 cm, 8th AJCC stage II-IV disease, major hepatectomy, blood loss > 500 mL and operation time > 360 min than the non-PBT group. After PSM, with 90 matched pairs of patients analyzed, baseline characteristics of the PBT group still showed significantly more patients with 8th AJCC stage II-IV disease than the non-PBT group (Table 1).
Long-term survival of all pCCA patients
On follow-up, before PSM, the 5-year OS rates for all pCCA patients treated with curative resection were 18.9% in the PBT group and 29.4% in the non-PBT group, respectively, while the 5-year RFS rates were 10.6% in the PBT group and 19.5% in the non-PBT group, respectively. After PSM, the 5-year OS rates for all pCCA patients treated with curative resection were 22.7% in the PBT group and 27.8% in the non-PBT group, respectively, while the 5-year RFS rates were 11.4% in the PBT group and 18.0% in the non-PBT group, respectively (Supplement Table 1). Both before and after PSM, Kaplan-Meier curves revealed that there were no significant differences between the PBT and non-PBT groups in OS and RFS (Figure 1).
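The 5-year rates above come from Kaplan-Meier product-limit estimates, which handle the censored follow-up times in this cohort. As a reminder of how those numbers are produced, here is a minimal sketch of the estimator (illustrative only, not the SPSS implementation the authors used):

```python
def kaplan_meier(times, events):
    """Product-limit estimator. events[i] = 1 if subject i died/recurred
    at times[i], 0 if subject i was censored at times[i]."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve, i = len(times), 1.0, [], 0
    while i < len(order):
        t = times[order[i]]
        n, deaths = at_risk, 0
        # Subjects censored at t are still counted as at risk at t.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n
            curve.append((t, s))
    return curve  # step function: survival drops only at event times

def survival_at(curve, t):
    """Estimated survival probability at time t (e.g. t = 60 months)."""
    s = 1.0
    for time, surv in curve:
        if time <= t:
            s = surv
    return s

# Toy example: 4 subjects, one censored at month 2.
curve = kaplan_meier([1, 2, 2, 3], [1, 0, 1, 1])
print(curve)
```

The log-rank test used for the between-group comparisons then asks whether two such curves could plausibly come from the same underlying survival distribution.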
Characteristics of patients with early stage pCCA
A total of 68 patients with early stage (AJCC stage I) pCCA were treated with curative resection. Among these patients, 22 (32.4%) were in the PBT group, and 46 (67.6%) were in the non-PBT group. Before PSM, baseline characteristics showed the PBT group to have significantly more patients with ALB < 35 g/L, HGB < 120 g/L and blood loss > 500 mL than the non-PBT group. After PSM, with 18 matched pairs of patients analyzed, there were no significant differences in baseline characteristics between the PBT group and the non-PBT group (Table 2).
Long-term survival of patients with early stage pCCA
On follow-up, before PSM, the 5-year OS rates of patients with early stage pCCA treated with curative resection were 32.6% in the PBT group and 62.2% in the non-PBT group, respectively, while the 5-year RFS rates were 13.2% in the PBT group and 47.9% in the non-PBT group, respectively. After PSM, the 5-year OS rates of patients with early stage pCCA treated with curative resection were 20.6% in the PBT group and 72.6% in the non-PBT group, respectively, while the 5-year RFS rates were 23.0% in the PBT group and 60.7% in the non-PBT group, respectively (Supplement Table 2). Both before and after PSM, Kaplan-Meier curves revealed that in patients with early stage pCCA, the OS and RFS rates in the PBT group were significantly lower than those in the non-PBT group (Figure 2). After PSM, multivariable analyses revealed that for patients with early stage pCCA, PBT and tumor size > 3 cm were independently associated with worse OS (Table 3) and RFS (Table 4).
Characteristics of patients with non-early stage pCCA

A total of 234 patients with non-early stage (AJCC stage II-IV) pCCA were treated with curative resection. Of these, 108 patients had AJCC stage II pCCA, 103 had AJCC stage III pCCA, and 23 had AJCC stage IV pCCA. The PBT group had 108 patients (46.2%), and the non-PBT group had 126 patients (53.8%). Before PSM, baseline characteristics showed the PBT group to have significantly more patients with preoperative jaundice, chronic hepatitis, ASA grade > II, INR > 1.15, ALB < 35 g/L, blood loss > 500 mL, and operation time > 360 min than the non-PBT group. After PSM, with 72 matched pairs of patients analyzed, there were no significant differences in baseline characteristics between the PBT group and the non-PBT group (Table 5).
Long-term survival for patients with non-early stage pCCA
On follow-up, before PSM, the 5-year OS rates for patients with non-early stage pCCA treated with curative resection were 15.6% in the PBT group and 17.0% in the non-PBT group, respectively, while the 5-year RFS rates were 10.6% in the PBT group and 8.5% in the non-PBT group, respectively. After PSM, the 5-year OS rates for patients with non-early stage pCCA treated with curative resection were 17.7% in the PBT group and 13.3% in the non-PBT group, respectively, while the 5-year RFS rates were 12.1% in the PBT group and 3.0% in the non-PBT group, respectively (Supplement Table 3). Both before and after PSM, Kaplan-Meier curves revealed that in patients with non-early stage pCCA, there were no significant differences between the PBT and non-PBT groups in OS and RFS (Figure 3).
Discussion
PBT has been shown to be associated with perioperative safety of patients with hepatobiliary diseases. However, in some hepatobiliary diseases, such as hepatocellular carcinoma and colorectal liver metastases, immunomodulation brought on by PBT has been shown to be associated with cancer recurrence (18-23). There have been very few studies reported on the association of PBT with long-term survival in pCCA patients. Radical resection of pCCA requires bile duct resection and reconstruction, hepatectomy, perihilar dissection, and vascular resection and reconstruction if necessary, and it is a more complex, demanding, and high-risk operation than resection for hepatocellular carcinoma or colorectal liver metastases. As a consequence, radical resection of pCCA has a greater need for PBT.
The association between PBT and long-term survival following resection for pCCA, to our knowledge, has only been studied in three previously published studies (8,24,25). Liu et al. observed a significant association between PBT and poor survival in 40 patients who underwent surgical resection for pCCA; however, blood transfusion could not be identified as an independent predictor in the multivariate analysis of that study (24). In contrast, Young et al. demonstrated through multivariate analysis that PBT was a significant independent predictor of poor survival following surgery in a study of 83 patients with pCCA (25). Similarly, Kimura et al. retrospectively analyzed the clinical data of 66 patients with pCCA who underwent surgical resection and found PBT to be an independent risk factor for poor OS and disease-free survival (8). The conflicting results of the above three studies may well be due to differences in patient baseline characteristics, timings of the studies, tumor stagings and surgery types. However, in our opinion, the differences may be associated more with the small sample sizes and selection biases of the studies. First, all three studies had sample sizes of fewer than 100 patients coming from a single institution. The validity of the results of these studies could be improved by expanding the sample size and enrolling patients from multiple centers. Second, as conducting a randomized controlled trial for PBT is not feasible due to ethical issues, PSM analysis can be used to minimize selection bias when randomized controlled studies cannot be carried out (26), in the same way as studies investigating the association between PBT and long-term survival of hepatocellular carcinoma patients using PSM analyses (11,12).
To our knowledge, our study is the first study using PSM analysis and a multicenter database to investigate the impact of PBT on OS and RFS in patients with different stages of pCCA treated with curative resection. Of the 302 pCCA patients from three institutions included in this study, univariable analysis indicated that PBT did not adversely affect long-term survival of pCCA patients treated with curative resection. Two commonly used tumor staging systems or classifications, the 8th AJCC staging system and the Bismuth classification, were evaluated at the outset of this study to divide these patients into an early stage group and a non-early stage group to study the long-term survival of patients with different stagings of pCCA. The Bismuth classification was more relevant for choosing surgical procedures than for classifying the degrees of tumor invasion. To better reflect the extent and location of tumor invasion, this study chose the 8th AJCC staging to group these patients. After grouping, the PBT rate of patients in the early stage group was significantly lower than that in the non-early stage group (32.4% vs. 46.2%). On long-term survival analysis, multivariable Cox regression analysis showed that PBT was independently associated with decreased OS and RFS rates in pCCA patients in the early stage group treated with curative resection. However, in patients with non-early stage pCCA treated with curative resection, univariable analysis suggested that PBT had no significant effect on OS and RFS. These exciting and interesting results can be explained by the conclusions drawn from the following reported studies. Blood transfusion has been well documented to increase immunosuppression in the host and thereby promote cancer recurrence and metastasis.
Blood transfusion in basic and clinical studies has been shown to decrease host immunity by reducing natural killer cell activity and cytotoxic T-cell function, increasing suppressor T-cell activity, and decreasing helper/suppressor (T4/T8) lymphocyte ratios (27,28). In addition, normal physiological ageing and metabolic processes result in leaching of biologically active substances from cells into stored blood products. These leached bioactive substances have immunomodulatory effects that promote cell growth and angiogenesis and may therefore have a direct effect on tumor growth (29). The immunosuppressive impact of blood transfusion may therefore have a significant influence on recurrence of malignant tumors. A recent study by Goeppert et al. indicated that the presence of both intratumoral T and B cells was associated with prolonged survival in patients with cholangiocarcinoma and that prognosis was associated with inflammation (30). These findings provide a strong foundation for understanding the biological significance of inflammatory infiltrates in cholangiocarcinoma, as well as for further functional and clinical investigations on regulation of inflammatory responses in cholangiocarcinoma patients (30). Although immunosuppression may influence recurrence and survival in cholangiocarcinoma patients, the deleterious consequences of blood transfusion on host immunity remain unknown.

Kaplan−Meier curves of overall survival (A) and recurrence-free survival (B) between the PBT and non-PBT group among patients with early stage (8th AJCC stage I) pCCA treated with curative resection before PSM. Kaplan−Meier curves of overall survival (C) and recurrence-free survival (D) between the PBT and non-PBT group among patients with early stage (8th AJCC stage I) pCCA treated with curative resection after PSM. CI, confidence interval; HR, hazard ratio; PBT, perioperative blood transfusion; PSM, propensity score matching.
In our study, all patients were staged using the 8th AJCC staging system, and patients with early stage disease had tumors confined to the bile ducts. On the other hand, for individuals with non-early stage disease, their tumors had exhibited at least one of the following characteristics: invasion into surrounding adipose tissues, invasion into adjacent liver, invasion into one (or more) portal vein branches or the hepatic artery/common hepatic artery, lymph node invasion, and distant metastases. We hypothesize that the difference between the impact of PBT on prognosis of patients with early stage and non-early stage pCCA is the result of the difference in biological behaviors of the tumors in the two groups. PBT had detrimental effects on prognosis of patients with early stage disease, but its impact on prognosis of patients with more advanced disease was obscured by the invasive and/or metastatic behavior of the tumors.
For pCCA patients who received PBT, the effects of postoperative adjuvant therapy remain to be further studied, as such treatment may improve long-term survival. At present, immune checkpoint inhibitors have achieved remarkable results in biliary tract cancer, and some immune checkpoint inhibitors have achieved breakthroughs in clinical studies (clinical trial information: NCT03875235 and NCT03875235) (31). Since PBT could lead to immunosuppression in tumor patients who underwent radical surgery, it is worth studying whether such patients should receive adjuvant immunotherapy after surgery.
This study has several limitations. First, this retrospective study has its inherent defects; PSM analysis was used to minimize selection bias. Second, there was only a small sample size of patients with early stage pCCA. However, as pCCA is a highly malignant tumor with no specific symptoms in the early stages, most patients in this study were already in the non-early stage at diagnosis. Even so, the number of patients enrolled in this study was much larger than in other studies that investigated the association between long-term survival of pCCA patients and PBT. Third, patient selection and surgical procedures were not standardized among the three institutions in this study. For a multicenter study, such a bias cannot be completely avoided. Despite this, all operations were performed by surgeons with rich experience in hepatobiliary surgery.
In conclusion, PBT was demonstrated in this study to be independently associated with worse long-term survival in patients with early stage pCCA treated with curative resection, but not in patients with non-early stage diseases. To improve the long-term survival of pCCA patients treated with curative resection, particularly those with early stage disease, PBT should be avoided if technically possible.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
This study complied with the Declaration of Helsinki and was approved by the Ethics Committees of the 3 participating hospitals. The patients/participants provided their written informed consent to participate in this study.

Kaplan−Meier curves of overall survival (A) and recurrence-free survival (B) for the PBT and non-PBT groups among patients with non-early stage (8th AJCC stage II-IV) pCCA treated with curative resection before PSM. Kaplan−Meier curves of overall survival (C) and recurrence-free survival (D) for the PBT and non-PBT groups among patients with non-early stage (8th AJCC stage II-IV) pCCA treated with curative resection after PSM. CI, confidence interval; HR, hazard ratio; PBT, perioperative blood transfusion; PSM, propensity score matching.
Recycling of Mine Wastes as Ceramic Raw Materials: An Alternative to Avoid Environmental Contamination
Solid waste management has moved to the forefront of the environmental agenda. Nations are considering restrictions on packaging and controls on products in order to reduce solid waste generation rates. Local and regional governments are requiring wastes to be separated for recycling, and some have even established mandatory recycling targets. However, industrial and everyday activities continue discarding vast amounts of material, some of which contains toxic and environmentally harmful substances. Such substances are not always disposed of in a manner with the avoidance of environmental contamination. Despite the existence of environmental standards, and in spite of the ethical implications of such actions, negligence, cost-cutting and accidents cause contamination of the soil, sediments, water and air. In the last few years, numerous industrial sectors have been mentioned as sources of environmental contamination and pollution due to the enormous quantity of wastes they generate. Mineral extraction and processing are good examples of waste production.
Introduction
Solid waste management has moved to the forefront of the environmental agenda. Nations are considering restrictions on packaging and controls on products in order to reduce solid waste generation rates. Local and regional governments are requiring wastes to be separated for recycling, and some have even established mandatory recycling targets. However, industrial and everyday activities continue discarding vast amounts of material, some of which contains toxic and environmentally harmful substances. Such substances are not always disposed of in a manner with the avoidance of environmental contamination. Despite the existence of environmental standards, and in spite of the ethical implications of such actions, negligence, cost-cutting and accidents cause contamination of the soil, sediments, water and air. In the last few years, numerous industrial sectors have been mentioned as sources of environmental contamination and pollution due to the enormous quantity of wastes they generate. Mineral extraction and processing are good examples of waste production.
On the other hand, reuse and recycling of waste materials after their potentialities have been detected is considered an activity that can contribute to diversify products, reduce production costs, provide alternative raw materials for a variety of industrial sectors, conserve non-renewable resources, save energy, and especially, improve public health.
The insertion of waste materials into an alternative productive cycle might represent an alternative recovery option, which is interesting from both an environmental and an economic perspective. Recovery and recycling is the best environmental solution to save raw materials and to reduce the amount of industrial waste materials produced, and consequently the contamination of the environment.
Considerable research work of our research group has recently focused on the recovery and safe, useful application of waste materials originating from the mining and mineral processing industry. Most of these studies report that such waste can be considered important alternative raw materials to the ceramic industry.
Traditional ceramics, such as bricks, roof and floor tiles, other constructions materials, and technical ceramics, such as porcelain and mullite bodies, are usually highly heterogeneous due to the wide compositional range of the natural clays used as raw materials. Therefore, there is a great incentive to use large amounts of suitable waste products as raw materials. Today it is a well-known fact that some waste materials are similar in composition to the natural raw materials used in the ceramic industry and often contain materials that are not only compatible but also beneficial in the fabrication of ceramics. In view of the huge amounts of non-renewable mineral resources that the ceramic industry consumes, this similarity is of even greater significance.
Studies conducted by the authors observed that kaolin processing wastes and granite sawing waste present high potential for use as raw materials for building materials. Kaolin is an important raw material in various industries, such as the ceramic, rubber, plastic, ink, chemicals, cement, and paper industries. However, the kaolin mining and processing industry generates large amounts of waste. The kaolin industry, which processes primary kaolin, produces two types of waste materials. The first type derives from the first processing step (separation of sand from ore), represents about 70% of the total waste produced, and is also known as china clay sand. The second type of waste results from the second processing step, which consists of wet sieving to separate the finer fraction and purify the kaolin. The granite processing industry produces large amounts of waste materials worldwide. Granite-sawing waste contains feldspar, quartz and mica as major constituents and metallic dust and lime (used as abrasive and lubricant, respectively) as residual materials, and various studies have investigated its recycling in the ceramic industry. In this scenario, this chapter will address the use of these mine wastes as raw materials for the production of ceramic building materials, such as bricks, ceramic tiles, roof tiles, and mortars.
Recycling and mining wastes
The pollution of ground and surface waters began as soon as industry began producing manufactured goods and discharging liquid and solid wastes. In the 1930s, industries began to be aware of the eventual danger of their wastes when sent untreated into waterways. It was natural for industry at that time to follow the lead of municipalities in using similar treatments to attempt to resolve their pollution problems. World War II brought accelerated industrial production activity (Nemerow, 2006), and two developments in the post-World War II era led to significant escalation in the problems of managing waste. First, a new phenomenon called "consumerism" emerged. A long period of prosperity, combined with improvements in manufacturing methods, led to rapid growth in the number and variety of consumer goods. In addition, new marketing and production practices were introduced, such as planned obsolescence and "throw-away" products. The growth of advertising, along with the electronic media, played an important role in the evolution to our society's current level of overconsumption. The end result was a dramatic increase in the amount and variety of consumer goods, and, hence, wastes (The waste crisis). The second development was the birth of the "chemical age," which resulted in a dramatic change in the composition of the waste stream. The petrochemical industry has grown explosively since that time, yielding a vast array of new synthetic organic compounds; a kind of pollution that had never existed before entered the environment, exhibiting toxicity as well as non-biodegradability (Tammemagi, 1999; Nemerow, 2006).
www.intechopen.com
Radioactivity, petrochemical, and synthetic organic chemicals were largely developed and surfaced in the environment in the 1940s and 1950s. During this period, major environmental problems surfaced with rapid and serious consequences. Hence was born the advent of what was to become the pollution problems of the twentieth century (Nemerow, 2006).
Historically, waste was simply dumped in depressions, ravines, and other handy locales that were close to the population centers producing the waste. Even though recycling was commonly practiced by all households during pre-industrial ages, large-scale recycling programs did not arise until the twentieth century. The first organized programs were created in the 1930s and 1940s, when a worldwide depression limited people's ability to purchase new goods and the outbreak of World War II dramatically increased demands for certain materials. Throughout the war, goods such as nylon, rubber, and various metals were recycled and reused to produce weapons and other materials needed to support the war effort. However, after the war there was a drastic decrease in recycling efforts (Miller, 2010).
It was not until the environmental movement of the 1960s and 1970s that recycling once again emerged as a popular idea. This movement began in 1962 with the publication of Rachel Carson's book Silent Spring, detailing the toxic effects of the chemical DDT on birds and their habitats. The book raised the consciousness of many people about the dangers to the environment from chemicals and other toxins produced by modern industries (Miller, 2010). Thereafter, the increase in the environmental awareness and consciousness required industry to meet tighter environmental standards on a global basis. In many countries, such requirements generally cannot be met by using conventional disposal of residual solid wastes in landfills (Wang et al., 2010). Accordingly, much more emphasis has to be placed on waste reduction and recycling technologies as a necessary first step to reduce to a minimum the extent of the waste treatments to be provided.
In recent years there has been growing concern about the negative impacts that industry and its products are having on both society and the environment in which we live. The concept of sustainability and the need to behave in a more sustainable manner has therefore received increasing attention. With the world's population growing rapidly the consumption of materials, energy and other resources has been accelerating in a way that cannot be sustained (Hester, R. E. & Harrison, 2009).
In this scenario, solid waste management has moved to the forefront of the environmental agenda, with the amount of related activities and concern by citizens and governments worldwide reaching unprecedented levels. Nations are considering restrictions on packaging and controls on products in order to reduce solid waste generation rates. Local and regional governments are requiring wastes to be separated for recycling, and some have even established mandatory recycling targets. Concerns about emissions from incinerators and waste-to-energy plants have resulted in imposition of state-of-the-art air pollution controls. Landfills are being equipped with liners, impervious caps and leachate collection systems, and gas and groundwater is being routinely monitored. As a result, the costs of solid waste management are increasing rapidly (Goumans et al., 1994).
In this context arises industrial ecology. Industrial ecology is now a branch of systems science for sustainability, or a framework for designing and operating industrial systems as sustainable and interdependent with natural systems. It seeks to balance industrial production and economic performance with an emerging understanding of local and global ecological constraints (handbook of industrial and hazardous). The idea of industrial ecology is that waste materials, rather than being automatically sent for disposal, should be regarded as raw materials: useful sources of materials and energy for other processes and products (Wang et al., 2006).
Waste management strategies that focus on source reduction and resource recovery, reuse and recycling have proven to be more cost effective over the long run, and they are less damaging to the environment simply because they prevent or minimize waste generation at the source. Disposal and treatment technologies require major long-term investments in capital equipment and have ongoing costs. But in addition, the waste and pollution that are treated and disposed of still persist, posing continuous and future threats to the public and environment (Cheremisinoff, 2003).
Recycling of waste materials will conserve decreasing resources and avoid the environmental and ecological damages caused by their disposal in the environment. Recycling saves energy, preserves natural resources, reduces greenhouse-gas emissions, and keeps toxins from leaking out of landfills.
Successful research and development on using wastes as raw materials is a very complex task. It demands a multidisciplinary approach involving knowledge from different areas, such as materials science, marketing development, performance evaluation and environmental sciences. As a rule, the best application for a waste is the one that will use its true characteristics and properties to enhance the performance of the new product and minimize environmental and health risks. Waste applications should not be made on a preconceived basis. This requires creativity and a wide range of both scientific and technical knowledge, and the best results will require the collaborative work of a multidisciplinary team (Woolley et al., 2000).
However, evaluation of the risk of environmental contamination due to leaching of hazardous components is mandatory. The new product must satisfy toxicity leach test criteria, but this alone is not sufficient. Other environmental impacts, such as greenhouse-gas emissions, human toxicity, acidification, energy use, etc., are also important, and good recycling technology frequently allows a significant reduction in these impacts (Woolley et al., 2000; Rao, 2006).
On the other hand, the public has become more accepting of purchasing manufactured goods with recycled content. Manufacturers recognized this acceptance and developed more innovative ways to use waste materials in their products. Manufacturers learned that recycled content yielded economic and marketing benefits, and consumers realized they could buy recycled-content products with confidence (Winkler, 2010).
Mining, alongside agriculture, represents one of man's earliest activities, the two being fundamental to the development and continuation of civilization. In fact, the oldest known mine in the archaeological record is the Lion Cave in Swaziland, which has a radiocarbon age of 43 000 years. There Paleolithic humans mined hematite, which they presumably ground to produce the red pigment ochre. Moreover, the dependence of primitive societies on mined products is illustrated by the terms Stone Age, Bronze Age and Iron Age, a sequence of ages that indicates the increasing complexity of the relationship between mining and society (mining and its impact). With time, the use of minerals has increased in both volume and variety in order to meet a greater range of purposes and demand by society, and the means of locating, working and processing minerals has increased in complexity. Today, society is even more dependent on the minerals industry than in the past (Bell & Donnelly, 2006).
Mining is first and foremost a source of mineral commodities that all countries find essential for maintaining and improving their standards of living. Mined materials are needed to construct roads and hospitals, to build automobiles and houses, to make computers and satellites, to generate electricity, and to provide the many other goods and services that consumers enjoy. In addition, mining is economically important to producing regions and countries. It provides employment, dividends, and taxes that pay for hospitals, schools, and public facilities (Committee on Technologies for the Mining Industries et al., 2002).
A consequence of the mining industry's importance to the world economy is not only the large volume of materials processed but also the large volume of wastes produced. Mine wastes represent the greatest proportion of waste produced by industrial activity. In fact, the quantity of solid mine waste and the quantity of Earth's materials moved by fundamental global geological processes are of the same order of magnitude -approximately several thousand million tonnes per year (Lottermoser, 2007).
Mining, and the associated mineral processing and beneficiation, does impact the environment. Unfortunately, this frequently has led to serious consequences. The degree of impact can vary from more or less imperceptible to highly intrusive and depends on the mineral worked, the method of working, and the location and size of the mine. The environmental impact of the mining industry is strongly felt in two areas. The first is the volume of industrial waste, effluents, tailings and sludge. The second serious environmental concern is the emission of carbon dioxide, a major greenhouse gas, which has been implicated in gradual climate change around the world.
Operations of the mining industry include mining, mineral processing, and metallurgical extraction. Mining is the first operation in the commercial exploitation of a mineral or energy resource. Mineral processing or beneficiation aims to physically separate and concentrate the ore mineral, whereas metallurgical extraction aims to destroy the crystallographic bonds in the ore mineral in order to recover the sought after element or compound. All three principal activities of the mining industry produce wastes. Mine wastes are defined as solid, liquid or gaseous by-products of mining, mineral processing, and metallurgical extraction (Lottermoser, 2007).
Mining wastes include overburden and waste rocks excavated and mined from surface and underground operations. Mining wastes are heterogeneous geological materials and may consist of sedimentary, metamorphic or igneous rocks, soils, and loose sediments. As a consequence, the particle sizes range from clay size particles to boulder size fragments. The physical and chemical characteristics of mining wastes vary according to their mineralogy and geochemistry, type of mining equipment, particle size of the mined material, and moisture content (Lottermoser, 2007).
Mineral processing encompasses unit processes for sizing, separating and processing minerals, including comminution, sizing, separation, dewatering, and some types of chemical processing. Processing wastes are the portions of the crushed, milled, ground, washed or treated resource deemed too poor to be treated further. The physical and chemical characteristics of processing wastes vary according to the mineralogy and geochemistry of the treated resource, the type of processing technology, the particle size of the crushed material, and the type of process chemicals.
Metallurgical wastes are the residues of the leached or smelted resource deemed too poor to be treated further, and are generated by hydrometallurgical extraction and electro-and pyrometallurgical processes.
Mine wastes result from the extraction of metalliferous and non-metalliferous deposits. In the case of metalliferous mining, high volumes of waste are produced because of the low or very low concentrations of metal in the ore. In fact, mine wastes represent the highest proportion of waste produced by industrial activity, with billions of tonnes being produced annually. Such wastes can be inert or contain hazardous constituents but are generally of low toxicity. The chemical characteristics of mine wastes, and of the waters arising from them, depend upon the type of mineral being mined, as well as the chemicals used in the extraction or beneficiation processes. Because of its high volume, mine waste has historically been disposed of at the lowest cost, often without regard for safety and often with considerable environmental impacts (Bell & Donnelly, 2006).
Usually the metalliferous and non-metalliferous wastes (from mining and processing operations) are placed in the postmining topography. In the case of mountain top removal and contour mining methods, waste materials are often used to fill adjacent canyons or hollow areas. When associated with canyon fills, these anthropogenic land forms may be flat or gently sloping on top, but often have steep side slopes and tend to be very erosive. Also, because of the nature of the material (i.e., unconsolidated, non-homogeneous) water penetration can cause instability thus enhancing mass wasting and the formation of seeps containing high levels of various elements that could impact down slope sites (Marcus, 2007).
Another very important concern for the mining industry is particulate matter, which is emitted in relatively large amounts in almost all aspects of mining operations. Particulates can adversely affect human health, as well as damage animals and crops. At high enough levels, particulates can contribute to chronic respiratory illnesses such as emphysema and bronchitis and have been associated with increased mortality rates from some diseases. In addition, particulate matter may cause irritation of the eyes and throat, and it can impair visibility.
As processing technologies move toward finer and finer particle sizes, dust and fine particles produced in the mineral industry are becoming an important consideration. Fine particles and dust can represent a health hazard, an environmental concern, and an economic loss. The amount of waste dust and fine particles is increasing significantly as more rock is mined and processed. Research should be focused on minimizing the generation of unwanted fine particles and dust or on using these materials as viable by-products.
Moreover, large volumes of water slurries containing fine particles are produced by all types of mining facilities. The management of these slurries as they are dewatered and disposed of can present significant environmental issues. Whether slurries are produced as tailings from milling operations, spoils in coal mining, or as clay slimes in the phosphate industry, they are often slow and difficult to dewater and dry because of their colloidal nature (Committee on Technologies for the Mining Industries et al., 2002).
There are other problems faced by the mining industry, as closure and reclamation of dump-leaching and heap-leaching operations and tailings impoundments. Upon the cessation of production, dump-leaching and heap-leaching piles and tailings impoundments must be closed in an environmentally sound manner. Depending on the chemical characteristics of the wastes and reagents used, as well as on atmospheric precipitation rates, piles and tailings may contribute poor-quality seepage or runoff to surface and/or groundwater through the release of residual solution or from infiltration of or contact with atmospheric precipitation. The released solution may be acidic or may contain cyanide or other contaminants, such as selenium, sulfates, radionuclides, or total dissolved solids (Committee on Technologies for the Mining Industries et al., 2002).
However, not all mine wastes are problematic or require monitoring or even treatment. Many mine wastes do not contain or release contaminants, and pose no environmental threat. In fact, some waste rocks, soils or sediments can be used as raw materials for a series of industries, and a few are suitable substrates for vegetation covers and similar rehabilitation measures upon mine closure.
Therefore, the development of innovative, environmentally friendly technologies will be extremely important. Minimizing waste generation and using wastes to produce useful by-products while maintaining economic viability must be a goal for new technologies. For instance, processing metallurgical wastes to recover valuable components, and in some cases converting them into useful compounds, not only helps to reduce pressure on ponds and landfills but also, at least in part, offsets the cost of environmental protection (Rao, 2006). Sustainable development is a concept which attempts to shape the interaction between the environment and society, such that advances in wellbeing are not accompanied by deterioration of the ecological and social systems which will support life into the future. The management of mining and minerals processing wastes is therefore a fundamental sustainable development issue.
The reduction, reuse, recycling and treatment of mining and minerals processing waste are increasingly receiving greater research and development attention for their contribution to improving the sustainability of the minerals industry (Franks et al., 2011). Recent trends outline the new mining culture: management of resources on a local basis (water, building materials, etc.), prevention of dissipative uses by improved recycling, and promotion of efficiency in production processes (better recovery means less pollution).
It is very important to be aware of the fact that almost the same kind of material, depending on its characteristics, could be regarded as a waste (that may have to be treated) or a secondary raw material. This calls for close cooperation between society's players (in both public and private sectors) in developing waste recycling technologies that replace raw materials with mining wastes.
The quantity of industrial minerals recycled or reused in some way is still minor compared to the total global consumption of industrial minerals obtained by mining and quarrying. However, manufacturers have begun to look at recycled waste as a more reliable and cost-effective supply source for raw material, and have altered existing products or developed new ones to better use recycled products (Winkler, 2010). In this sense, mining wastes have been used in resin cast products, glass, ceramics and glazes, as well as building and construction materials.
Thus, mining waste can no longer be considered solely a useless material; its potential usefulness as an alternative raw material must first be analyzed, scientifically and technically, thereby reducing the demand for virgin materials and the environmental impacts associated with the extraction and processing of virgin resources.
Granite sawing waste and kaolin processing waste: Alternative ceramic raw materials
The ornamental granite industry belongs to the natural stone sector, more specifically to the sub-sector of ornamental rocks, comprising the extraction and processing of rocks for ornamental applications. This economic activity represents an important sector in the worldwide economy. However, a less known aspect of the exploitation of ornamental rocks is the great volume of residues produced, namely solids (generated during extraction) and sludges (produced during the transformation process) (Torres et al., 2009).
After mining, the granite blocks are submitted to primary dressing, which consists of sawing (Figure 1 depicts the sawing operations) to obtain semi-finished pieces such as plates and strips. This is followed by secondary dressing, in which the sectioned pieces undergo polishing and surface finishing. During primary dressing, an estimated 20-35% of the blocks is lost in the form of powder. This leftover powder is removed in a mixture with water and other residual materials, such as metallic dust and lime (used as abrasive and lubricant, respectively), producing a sludge.
This material has been deposited as dry particles with a large particle size range or as fine particles in an aqueous environment, generally deposited by sedimentation. It is also common to deposit filter-pressed sludge in surface landfills. Although not considered dangerous, incorrectly planned deposition of these residues can cause accidents and environmental impact, for instance increased turbidity of watercourses. The dried mud is easily dragged by the wind and becomes harmful to humans and animals through inhalation, or to plants when deposited on their leaves. The sludge generated during stone processing has no specific practical applications and has been managed as waste (Figure 2).
Thus, the search for new recycling technologies is of high technological, economic and environmental interest. In this regard, interesting opportunities are found in the traditional ceramics industry, particularly the sector devoted to the fabrication of building products. Natural raw materials used in the fabrication of clay-based ceramic products show a wide range of compositional variations and the resulting products are very heterogeneous. Therefore, such products can tolerate further compositional fluctuations and raw material changes, allowing different types of wastes to be incorporated into ceramic tiles and bricks. Granite sludge is a mixture of debris from cut granite rocks with wear remains of cutting steel blades, abrasive metallic shot, and hard materials from the polishing bricks. In common granite cutting practice, the abrasive metallic shot is dispersed in a Ca(OH)2 aqueous slurry for cooling; this slurry is continuously pumped and wets all around the granite block slits (Figure 1). Various studies (Menezes et al., 2002a, 2005, 2007, 2008a) have demonstrated the viability of using granite sawing waste in the production of building materials.
Studies by our research group demonstrated that it is possible to incorporate high amounts of granite sawing waste in ceramic formulations for the production of ceramic bricks and roof tiles. Figure 3 illustrates the mechanical behavior of ceramic bodies, fired at 1000 °C, with increasing waste content. Small additions of these wastes, up to 20%, improved the mechanical performance of the bodies. Granite wastes have a non-plastic character and, therefore, it was also observed that they can play an important role as plasticity controllers during fabrication. Figure 4 displays ceramic bodies produced incorporating granite sawing waste.
For the production of ceramic bricks, the predominant raw material is mineral clay. Any good brick clay should have low shrinkage and low swelling characteristics, consistent firing color, and a relatively low firing temperature, while at the same time producing a brick with adequate dry and fired strength. The guiding rule in choosing wastes and by-products must rest on their compatibility with the original (host) raw material, so that they do not degrade the final product by turning it simply into a repository for wastes (Insam & Knapp, 2011). Granite sawing waste meets all the requirements to be a versatile raw material for the production of ceramic bricks and roof tiles.
Granite sawing waste can also be used in the production of soil-lime bricks. Figure 5 shows a soil-lime brick containing granite waste and a construction developed using this kind of brick.
In ceramic technology there is a large range of firing temperatures. Above 1100 °C, granite sawing wastes can be classified as fluxes, as they have the potential to act as glassy phase formers during sintering, improving the sinterability of the clay material. The effect of small additions of such rejects in compositions for the production of ceramic tiles has been investigated (Menezes et al., 2005, 2008a), and it was observed that the final properties of the fired products do not change drastically. Depending on the composition to be manufactured, it is possible to incorporate high amounts of waste, up to 50%. Figure 6 depicts the water absorption and mechanical behavior of ceramic tiles (produced with different compositions) fast fired at 1175 °C, containing distinct amounts of different granite wastes. It can be observed that there is no direct correlation between these properties and waste content. This is because each granite waste presents particular characteristics, such as flux content and particle size distribution, and also because the influence of granite wastes is closely associated with the characteristics (and amounts) of the other raw materials used in the formulation of the ceramic masses. Thus, in one case it was possible to incorporate 38% of waste and reach high mechanical performance, while in another, using 21% of waste, the modulus of rupture achieved was just above the required strength limit.
The current optimization procedure for developing ceramic compositions using waste materials is experimental rather than comprehensive. In general, the approach involves selecting and testing a first trial batch, evaluating the results, and then adjusting the mixture's proportions and testing further mixtures until the required properties are achieved. This conventional method of optimization is time consuming and does not allow the global optimum to be detected, particularly due to interactions among the variables. In contrast, statistical design methods are rigorous techniques both to achieve desired properties and to establish an optimized mixture for a given constraint, while minimizing the number of trials. Applied to the recycling of wastes in ceramic technology, this methodology has led to greater efficiency and confidence in the results obtained, while simultaneously optimizing the content of waste materials with a minimum of experiments. In the development and manufacture of ceramics using waste materials, the properties of fired bodies are basically determined by the combination of raw materials and process parameters. When the processing conditions are kept constant, a number of properties of dried and fired bodies are basically determined by the combination (or mixture) of raw materials. This is the basic assumption in the statistical design of mixture experiments, which aims to obtain a response surface using mathematical and statistical techniques. To this end, it is necessary first to select the appropriate mixtures from which the response surface can be calculated. Then, from the calculated response surface, the property value of any mixture can be predicted based on the changes in the proportions of its components. The authors have applied this mathematical tool in several studies, developing ceramic formulations containing high amounts of granite wastes with great efficiency and a minimum of experiments.
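The mixture-experiment workflow described above can be sketched in a few lines: fit a Scheffé-type polynomial (here the special cubic form) to responses measured at a simplex-centroid design, then predict the property of any untested blend from the fitted surface. The component names and response values below are invented for illustration only; they are not data from the cited studies.

```python
import numpy as np

# Simplex-centroid design over three components (clay, granite waste, kaolin
# waste); fractions in each row sum to 1.  The responses y (e.g. modulus of
# rupture, MPa) are hypothetical, for illustration only.
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([6.0, 2.5, 3.0, 5.5, 5.0, 3.2, 4.8])

def special_cubic_terms(X):
    """Design matrix for the Scheffe special cubic mixture model:
    y = sum b_i*x_i + sum b_ij*x_i*x_j + b_123*x1*x2*x3 (no intercept)."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])

# Least-squares fit; for a simplex-centroid design this model is saturated,
# so the fitted surface passes exactly through the measured points.
coef, *_ = np.linalg.lstsq(special_cubic_terms(X), y, rcond=None)

def predict(mix):
    """Predict the response for one or more mixtures (rows summing to 1)."""
    return special_cubic_terms(np.atleast_2d(mix)) @ coef

# The fitted surface lets any blend be screened before firing a test body:
print(predict([0.6, 0.25, 0.15]))
```

With only seven trial bodies, the fitted surface can then rank every candidate composition on the triangle, which is exactly the economy of experiments the paragraph above describes.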
Kaolin is an important raw material in various industries, such as the ceramic, rubber, plastic, ink, chemicals, cement, and paper industries. However, the kaolin mining and processing industry generates large amounts of waste. The kaolin industry, which processes primary kaolin, produces two types of wastes. The first type derives from the first processing step (separation of sand from the ore, generally by wet sieving). The other type of waste results from the second processing step, which consists of a wet sieving to purify the kaolin. Figure 8 displays the second step of the kaolin processing, the sedimentation tank.
Traditionally, these wastes have been disposed of in landfills and often dumped directly into ecosystems without adequate treatment. This can seriously damage the environment through soil and water contamination, and is potentially harmful to flora, fauna and human health. Figure 9 depicts the disposal of these wastes directly in the open air. Nowadays, because of more stringent environmental laws and the market's increasing demand for environmentally friendly products, manufacturers are concerned with developing studies aimed at reducing the environmental impact of these wastes. Thus, possible reuse or recycling alternatives should be investigated and implemented.
Physical and chemical characterizations of kaolin processing wastes can be found elsewhere (Menezes et al., 2007, 2008b). According to those reports, kaolin wastes are basically composed of kaolinite (Al2Si2O5(OH)4), mica (KAl2(Si3Al)O10(OH)2) and quartz (SiO2) and have a very large particle size distribution. The composition and particle size distribution of the two types of waste are very different. The first contains a high amount of quartz and a coarse particle size (with particles reaching 5-10 mm), while the second presents a high amount of kaolinite and a large particle size distribution with a high amount of fine particles (because of this, the second waste is known as fine waste and the first as coarse waste). Mineral fillers are used in a wide range of commodities such as paper, paint, plastics, membranes, ceramics, plasterboard, geo-textiles, rubber, pet food, chicken feed, electrical cables and several construction materials. Such fillers are marketed at a relatively high cost. However, many applications, such as ceramics and some construction materials, do not require fillers of such a high grade. In this sense, the fine kaolin processing waste was studied by our research group for use as filler in mortars for the construction industry. The waste replaced the lime and the cement in several mortar formulations, with very interesting results. Figure 10 shows the compression strength of mortars in which the lime was replaced by kaolin processing waste (fine waste). Addition of waste improved the performance of the mortar due to the filler effect of the material. After firing, kaolin processing waste displays pozzolanic activity (the capacity to react chemically with lime and form hydrated calcium silicates, phases similar to those produced by cement hydration). Thus, calcination of this waste can improve its applicability and incorporation in construction materials.
The compression strength of mortars in which part of the cement was replaced by fired kaolin waste (50% coarse and 50% fine) is depicted in Figure 11. This result illustrates the potential of this waste as an agglomerate material after firing. The increase in compression strength when using the waste in its natural state is due to the filler effect: because of their high fineness, the waste particles can fill the voids between the cement particles, increasing density and strength.
Kaolin processing waste (fine waste) can also be used in other ceramic industries. Studies (Menezes et al., 2008b, 2009a, 2009b) have pointed out its application in the production of porous ceramics, mullite bodies and porcelains. The waste acts as an alternative raw material, replacing part of the kaolin and of the quartz used in the formulation. The bodies obtained presented high strength and excellent performance, similar to those of bodies produced using conventional raw materials. The outstanding performance of the produced bodies was closely associated with the development of a high amount of mullite, as a consequence of the presence of fluxes and kaolinite in the waste (Figure 12).
The potential use of granite waste in combination with kaolin processing waste to produce ceramic bodies was also investigated by our research group. Regression models used to optimize the waste content in ceramic compositions showed that ceramic bricks containing up to 40% of wastes (kaolin waste + granite waste) can be manufactured without trouble, and that the final bricks present physical and mechanical properties similar to those of conventional bricks. Figure 13 illustrates this potential using the surface plot projection onto the composition triangle. The area highlighted on the triangle indicates the possible compositions (according to the limitations imposed) for ceramic brick production. Use of kaolin waste in association with granite waste was also very efficient for the production of ceramic tiles containing high amounts of wastes, around 60%. Figure 14 displays the synergism of the wastes: while the granite waste improves the mechanical behavior of the body, the kaolin waste acts in a manner similar to the clay. The combination of both wastes makes it possible to improve the performance of the body and save clay material, using high amounts of kaolin waste and granite waste. While the recycling of low added-value residual materials constitutes a present-day challenge in many engineering branches, developing countries also face grave housing deficits; aiming at lowering costs, scientific attention has therefore been given to low-cost, non-conventional building materials with constructive features similar to those of materials traditionally employed in civil engineering.
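The idea of delimiting a feasible region on the composition triangle, as in Figure 13, can be sketched with a simple grid search over the simplex. The blending coefficients, specification limits, minimum clay content and plasticity constraint below are all assumed values for illustration; they are not the regression models from the cited work.

```python
import numpy as np

# Hypothetical linear blending models for (clay, kaolin waste, granite waste)
# fractions -- all coefficients, limits and the clay floor are illustrative
# assumptions, not values from the cited studies.
MOR_COEF    = np.array([4.0, 3.5, 6.0])   # modulus of rupture, MPa (granite waste as flux)
SHRINK_COEF = np.array([8.0, 7.0, 2.0])   # linear firing shrinkage, %
MOR_MIN, SHRINK_MAX = 4.5, 6.0            # assumed product specification limits
CLAY_MIN, PLASTIC_MIN = 0.3, 0.5          # assumed processing (plasticity) constraints

def feasible_mixes(step=0.05):
    """Scan the composition triangle, keeping mixes that meet every limit."""
    out = []
    for clay in np.arange(CLAY_MIN, 1.0 + 1e-9, step):
        for kao in np.arange(0.0, 1.0 - clay + 1e-9, step):
            gran = 1.0 - clay - kao
            x = np.array([clay, kao, gran])
            if (x @ MOR_COEF >= MOR_MIN and x @ SHRINK_COEF <= SHRINK_MAX
                    and clay + kao >= PLASTIC_MIN):
                out.append((round(clay, 2), round(kao, 2), round(gran, 2)))
    return out

mixes = feasible_mixes()
# Composition maximizing total waste content (kaolin + granite) in the region:
best = max(mixes, key=lambda m: m[1] + m[2])
print(len(mixes), best)
```

The printed set plays the role of the highlighted area on the triangle: any mix in it satisfies all imposed limits, and `best` is the blend with the highest total waste loading under the assumed constraints.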
The quest for such surrogate materials is doubly interesting, as (i) it may help to reduce dwelling deficits (particularly in developing countries), inasmuch as cheaper houses become economically feasible, and (ii) it can be environmentally friendly, as low-value wastes can be recycled or exploited (Ashworth & Azevedo, 2009).
On the other hand, these results on the recycling of granite sawing and kaolin processing wastes highlight that recycling these wastes diversifies products, reduces production costs, provides alternative raw materials for a variety of industrial sectors, conserves nonrenewable resources, saves energy and improves public health. Thus, upgrading these wastes to alternative ceramic raw materials has become interesting not only technically, but also environmentally and socially.
Conclusion
Studies by our research group showed that kaolin processing and granite sawing wastes can serve as alternative raw materials for the production of ceramic materials, including not only construction materials (such as ceramic bricks and tiles, roof tiles and mortars) but also ceramics such as porcelains, mullite bodies and membranes. The correct use of the wastes lowers the firing temperature or improves the performance of the final bodies. Correct physical and microstructural characterization, together with the application of mathematical tools, permits the incorporation of high amounts of waste in ceramic formulations, exceeding 50% of the raw material used in the composition. Our results highlighted that the use of mine wastes in the production of building materials can be carried out successfully, allowing the reduction of both the consumption of natural resources and the cost of waste disposal while protecting the environment.
Animal Models in Cardiovascular Research: Hypertension and Atherosclerosis
Hypertension and atherosclerosis are among the most common causes of mortality in both developed and developing countries. Experimental animal models of hypertension and atherosclerosis have become a valuable tool for providing information on etiology, pathophysiology, and complications of the disease and on the efficacy and mechanism of action of various drugs and compounds used in treatment. Animal models have been developed to study hypertension and atherosclerosis for several reasons. Compared to human studies, an animal model is easily manageable, as confounding effects of dietary and environmental factors can be controlled. Blood vessels and cardiac tissue samples can be taken for detailed experimental and biomolecular examination. The choice of animal model is often determined by the research aim, as well as financial and technical factors. A thorough understanding of the animal models used, and careful validation of the analysis, are required so that the data can be extrapolated to humans. In conclusion, animal models of hypertension and atherosclerosis are invaluable in improving our understanding of cardiovascular disease and in developing new pharmacological therapies.
Introduction
Research animals are valuable tools for understanding the pathophysiology and in developing therapeutic interventions for a disease. These animals are used in basic medical and veterinary research. Various animals have been reported as useful models in studying diseases afflicting humans and animals. Research animals include mice, rats, rabbits, guinea pigs, sheep, goats, cattle, pigs, primates, dogs, cats, birds, fish, and frogs [1]. Concerns have been raised concurrently with the rise of the use of animals over the years. This increase is mainly attributed to the use of genetically altered animals [1]. The similarities and differences between models must be taken into consideration for every project. Careful consideration should be given in choosing the most appropriate animal model to answer the specific research question of the study. With increasing awareness of animal welfare and research ethics, it is important to obtain accurate results using suitable models while reducing wastage of animals used for testing.
Animals are used in biomedical research for the following reasons.
(i) Feasibility. Animal models are relatively easy to manage, as confounding effects of dietary intake and environmental factors, including temperature and lighting, can be controlled. Therefore, there is relatively less environmental variation compared to human studies. Blood vessels and cardiac tissues can be isolated for detailed experimental and biomolecular investigations. Animals typically have a shorter life span than humans. Hence, they make good models, as they can be studied over their whole life cycle or even across several generations [2,3].
(ii) Similarities to Humans. Moreover, many animals are suitable due to the similarity of their anatomy and physiological functions to those of humans. For example, chimpanzees and mice share about 99% and 98% of their DNA with humans, respectively [4,5]. As a result, animals tend to be affected by many of the health problems afflicting humans. Therefore, animals are good models for the study of human diseases.
(iii) Drug Safety. Preclinical toxicity testing and the pharmacodynamic and pharmacokinetic profiles of drugs may be investigated in animals before the compounds or drugs are used in humans. This is vital, as the effectiveness of a drug as a potential treatment needs to be established in animals prior to testing on humans [6]. Interventions for diseases must be identified to eventually develop new medicines beneficial to humans and/or other animals. Drug safety profiles need to be determined in order to protect animals, humans, and the environment. Harmful and detrimental effects of a drug need to be tested on a whole organism [6]. This helps ensure that the dose employed in subsequent clinical trials does not cause fatality. The tested chemicals must also be safe for administration and must not contaminate water, soil, or air. It is unethical to directly test drugs or chemicals on humans, thus warranting the use of animals in research, although this has been an issue debated by animal rights and welfare groups.
Before conducting research on animals, researchers must ensure that animals are essential for their experiments, with no viable alternatives. The 3Rs principle of animal research has been practiced since it was first introduced by Russell and Burch in 1959 [7]. The 3Rs refer to replacement, reduction, and refinement. Replacement means conducting experiments using non-animal models, such as in vitro methods with cell culture or computer simulation (in silico), whenever possible. Nevertheless, the information obtained in vitro is typically limited when compared to in vivo studies. Reduction refers to the need to reduce the number of animals used, either by drawing on previous studies or by calculating the sample size within a good experimental design. Refinement refers to efforts to minimize pain and suffering of test animals, taking into consideration animal handling and surgical procedures, housing environment and living conditions, and improvements in animal husbandry. These 3Rs are aimed at providing humane and scientifically improved research involving, or avoiding, the use of animal models [8]. Guidelines for reporting animal studies are available to ensure the justification of using animals, such as the Animals in Research: Reporting In Vivo Experiments (ARRIVE) guidelines [9] and the Gold Standard Publication Checklist (GSPC) [10].
Even though animal studies have contributed much to our understanding of mechanisms of diseases, their value in predicting the effectiveness of treatment strategies in clinical trials has remained controversial [11][12][13]. Clinical trials are essential, as animal studies do not predict with sufficient certainty what will happen in humans. Hence, the findings from animal studies may not be deemed suitable for extrapolation to humans. A report by Williams et al. [11] suggested that a recurrent failure of interventions to translate the results obtained in animal studies to the clinical settings may be due to the ability to control genetic background in animal studies. Controlling the genetic background produces more consistent results. Additionally, it is possible that some of the genetic effects of the candidate loci are context-dependent. For example, the specific loci may play a significant role in sex (male versus female) or in age (young versus old) or in people of a specific body mass index or race [12,13]. Since these characteristics are not usually investigated or analyzed in many of the studies, there is a possibility that the failure to replicate is due to interactions between genes and environmental factors as well as to gene-gene interactions. If there are interactions between environmental risk factors and genotypes, the validity of extrapolation may become complicated [14].
Moreover, this failure may be explained in part by methodological flaws in animal studies, which eventually lead to systematic bias and may generate incorrect conclusions about the efficacy of a drug or compound [11]. According to Bracken [15], reasons why animal experiments fail to translate into human trials include poor experimental design, execution, and analysis [16]. Systematic reviews provide information on whether animal studies are being properly carried out and published. However, systematic reviews are not able to resolve all queries regarding the applicability and relevance of animal studies to humans [13]. Selection biases affect how literature is selected and subsequently included in a systematic review, due to the criteria set by different authors. The objectives of animal experiments are typically to discover new knowledge or advance understanding of diseases, rather than to predict the outcomes of human trials. Therefore, data obtained from animal studies may be unsuitable or too diverse for meaningful comparison with, and prediction of, the results of human trials. Nevertheless, systematic reviews ensure that all animal studies are published regardless of outcome, in order to avoid unnecessary duplication of expensive animal experiments [17]. Furthermore, systematic reviews may improve the quality and translational value of animal research for human trials [18].
Animal Models for Hypertension
Hypertension is one of the major risk factors for cardiovascular diseases. It has become a major public health issue in most developed and developing countries [19][20][21]. According to the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation and Treatment of High Blood Pressure, high blood pressure (BP) is defined as systolic blood pressure (SBP) greater than 140 mmHg and/or diastolic blood pressure (DBP) greater than 90 mmHg [22]. Patients with SBP ranging between 120 mmHg and 139 mmHg, or DBP of 80 mmHg to 89 mmHg, are categorized as prehypertensive. They have a higher risk of developing hypertension and therefore require medical intervention [22].
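The JNC7 cut-offs quoted above can be expressed as a small classification rule. A minimal Python sketch follows; note that it uses >= at the boundaries, as in the JNC7 report itself, so that a reading of exactly 140/90 mmHg is not left unclassified (the text's ">140 and/or >90" wording would leave a gap between 139 and 141).

```python
def jnc7_category(sbp, dbp):
    """Assign a blood-pressure category (mmHg) from the JNC7 cut-offs.
    Simplified sketch: the stage 1 / stage 2 hypertension split is
    ignored, and >= is used at the boundaries (as in JNC7 itself)."""
    if sbp >= 140 or dbp >= 90:
        return "hypertension"
    if sbp >= 120 or dbp >= 80:   # prehypertensive range: 120-139 / 80-89
        return "prehypertension"
    return "normal"

for reading in [(118, 76), (128, 82), (150, 95)]:
    print(reading, jnc7_category(*reading))
```

A reading is assigned to the higher of its two component categories, matching the "and/or" phrasing in the definition.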
Human essential hypertension is a complex multifactorial disease influenced by genetic and environmental factors. Various models of experimental hypertension have been developed primarily to mimic the hypertensive responses observed in humans [23]. These models are beneficial in the pharmacological screening of potential antihypertensive drugs, in addition to allowing researchers a better understanding of the etiology, development, and progression of hypertension [24]. Since animal models of hypertension mimic human hypertension, many of these models have been developed using the etiological factors hypothesized to contribute to human hypertension, such as excessive salt intake, hyperactivity of the renin-angiotensin-aldosterone system (RAAS), and genetic predisposition [24]. One animal model is insufficient for explaining the antihypertensive effects of a particular drug, because many pathways are involved in the development of BP dysregulation. In other words, several animal models are required to examine particular cardiovascular changes in an effective study [25]. Therefore, it is advisable that each of the studied models explains a unique pathway in the development of hypertension.
Several criteria need to be considered in order to develop an ideal animal model for hypertension. These factors include the feasibility and size of the animals, the reproducibility of the model, the ability to predict the potential antihypertensive properties of a drug, the similarity to human disease (mode of the disease: slow on-set versus acute), and economical, technical, and animal welfare considerations [23,24]. In the past, dogs were mostly employed as a model to study hypertension. Currently, the preferred animal model is the rat. Along with rats, occasionally mice, monkeys, and pigs are also used as a model for experimental hypertension [26,27]. These species have not been studied extensively for both practical and financial reasons. In 1963, Okamoto and Aoki introduced an experimental hypertension model without the involvement of physiological, pharmacological, or surgical intervention [28]. This model is known as the spontaneously hypertensive rat (SHR), which is the genetic strain of hypertensive rat. SHR has become the animal of choice for the screening of antihypertensive agents and the cornerstone of medical research in experimental hypertension [29].
Several forms of murine genetic models, including SHR, have become the focus of hypertension research. The short life span, small size, and relatively low cost of these animals enable researchers to study the natural history, genetic factors, and pathophysiological changes of hypertension [29]. Other strains have been developed, including the New Zealand strain [30], Milan strain [31], Dahl salt-sensitive strain [32], Sabra strain [33], and Lyon strain [34]. Essential hypertension, also known as primary hypertension, is the most frequently encountered type of human hypertension, accounting for 95% of cases, and is associated with genetic influences. Among the many rat strains, SHR is generally used, even though it represents only a particular type of hypertension [35].
In addition to genetic animal models, renovascular hypertension is a commonly employed model of hypertension. RAAS plays a pivotal role in this form of hypertension [36,37]. In 1934, Goldblatt et al. developed a hypertension model through partial constriction of the renal artery in dogs [38]. This has led to other renal-induced hypertension models using rats, rabbits, sheep, and cats [39]. When the renal artery is ligated or constricted, RAAS and the sympathetic nervous system are activated [40]. Renin is secreted by the kidneys when sympathetic activity is enhanced. Angiotensinogen is converted to angiotensin-I (Ang I) in the presence of renin.
Angiotensin-converting enzyme (ACE) plays a vital role in the regulation of BP via hydrolysis of the inactive form of Ang I to the active form, angiotensin II (Ang II). ACE is mainly located on the surface of the endothelium and epithelium involved in the constriction of blood vessels, subsequently leading to elevation of BP. Ang II is a potent vasoconstrictor and affects cardiovascular homeostasis. Apart from the role in vasoconstriction, Ang II also stimulates the release of aldosterone, further increasing blood volume and BP due to water and salt retention [41].
Nitric oxide (NO) has been demonstrated to be a potent vasodilator, and its release from the endothelium may be triggered by vasoactive substances such as acetylcholine (ACh) [42]. The endothelium preserves its integrity through the endothelium-derived relaxing factor, best characterized as NO [43]. Therefore, NO plays an important role in the regulation of BP [44]. The production of NO is catalyzed by nitric oxide synthase (NOS). Deficiency of NOS leads to a reduction in NO synthesis [45,46]. Impaired NO bioavailability results in reduced endothelium-dependent vasorelaxation, eventually leading to hypertension. This NO-deficient model can be induced by oral administration of Nω-nitro-L-arginine methyl ester (L-NAME) for up to eight weeks, resulting in a significant rise in both SBP and DBP, renal and hepatic markers, and inflammatory parameters in male Wistar rats [47]. Often, BP is elevated after four weeks of L-NAME treatment. Long-term administration of NOS inhibitors such as L-NAME provides a form of hypertension with target organ damage. Studies have reported that L-NAME-induced hypertension is associated with attenuated endothelium-dependent relaxations, cardiac and aortic tissue damage, and renal vascular and glomerular fibrosis [48][49][50]. Since the etiology of hypertension differs among the various animal models, it is imperative to make a rational choice of a specific model (Table 1); the choice will significantly affect the outcome of the study.
Soriguer et al. [51] conducted a study on cooking oils, reporting that repeatedly oxidized frying oil is an independent risk factor for hypertension. Hence, hypertension is related to the degradation of the dietary frying oil. Previously, adult male Sprague-Dawley rats aged 3 months were administered 15% weight/weight (w/w) of repeatedly heated vegetable oils for 16 weeks [52] or 24 weeks [53][54][55][56]. Chronic consumption of heated oil diets causes an increase in BP. The BP-raising effect of the heated vegetable oils may be attributable to diminished endothelium-dependent relaxation responses. A heated oil diet promotes oxidative stress, resulting in NO sequestration and inactivation. Furthermore, heated oil causes a significant increase in ACE activity and a reduction in heme oxygenase content. The thermal oxidation of vegetable oils promotes the generation of free radicals and may contribute to the pathogenesis of hypertension in rats. This heated oil-induced hypertension model employed male instead of female rats, as female hormones have been shown to have cardioprotective properties [57,58]. BP was measured using the conventional heating tail-cuff method. Even though invasive methods such as carotid arterial cannulation may provide more accurate readings, these may cause injury in the animals and further complicate the experiment. In addition, these studies were performed to compare and monitor the effects of heated oil diets among the experimental groups for up to 24 weeks using a large number of rats. Thus, the noninvasive tail-cuff method is more suitable for measuring BP in long-term studies [59].

Table 1: Animal models of hypertension.

Genetic hypertension
(i) SHR is developed by inbreeding Wistar rats (brother-to-sister) with the highest BP [28]. BP increases at week 4 to week 6, reaching a systolic BP of 180-200 mmHg [28]. SHR may develop cardiac hypertrophy, cardiac failure, renal dysfunction, and impaired endothelium-dependent relaxations [60][61][62].
(ii) Dahl salt-sensitive rats are derived from Sprague-Dawley rats on the basis of administering a high-NaCl diet. Salt-sensitive rats become hypertensive when given normal salt diets; however, these rats develop severe and fatal hypertension with a high-salt diet (8% NaCl) [32]. These rats may develop cardiac hypertrophy, severe cardiac failure, hypertensive nephropathy, and impaired endothelium-dependent relaxations [63][64][65].

Endocrine hypertension
(i) Administration of DOCA in combination with a high-salt diet and unilateral nephrectomy [69].
(ii) Activation of the sympathetic nervous system and RAAS may contribute to the initiation of stress-induced hypertension [75,76].

Pharmacological hypertension
(i) Nitric oxide-deficient model produced by administering NOS inhibitors such as L-NAME [77].
(ii) An increase in BP was reported during long-term oral treatment with NOS inhibitors [78,79].
(iii) Endothelial dysfunction develops gradually with the increase in BP [80].

Renal hypertension
(i) This includes two-kidney one-clip hypertension (2K1C; constriction of one renal artery while the contralateral kidney is left intact), one-kidney one-clip hypertension (1K1C; one renal artery is constricted and the contralateral kidney is removed), and two-kidney two-clip hypertension (2K2C; constriction of the aorta or both renal arteries) [81,82].
(ii) In the two-kidney model, circulating renin and aldosterone levels are increased [83], most notably in the early phase of hypertension [84].
Animal Models for Atherosclerosis
Atherosclerosis, or "hardening of the arteries," is a chronic inflammatory disease characterized by endothelial dysfunction and disorganization of intimal architecture owing to the accumulation of lipid deposits, inflammatory cells, and cell debris in the intima of elastic and medium-to-large muscular arteries. It underlies many of the common causes of cardiovascular death, including stroke and heart attack [85]. Several nonmodifiable (including advanced age, gender, and heredity) and modifiable risk factors (including dyslipidemia, hypertension, sedentary lifestyle, tobacco smoking, and diabetes mellitus) have been identified for the development of atherosclerosis [86]. Many clinical and experimental attempts have been made to understand the pathophysiology of the disease. Among them, animals have been used for more than a century to study atherosclerosis. The first evidence that experimental atherosclerosis could be induced in animals appeared as early as 1908, when Ignatowski demonstrated atherogenesis in the aortic wall of rabbits fed a diet enriched in animal proteins including meat, eggs, and milk [87]. Since then, numerous animal models have been used to understand the mechanisms involved in both induction and regression of atherosclerotic lesions [88,89]. Rats, rabbits, dogs, pigs, and monkeys are well-established animal models for atherosclerosis and thrombosis. Nonhuman primates, hamsters, mice, cats, and guinea pigs have also been used, but to a lesser extent [90]. Several studies documented a significant relationship between elevated levels of serum cholesterol and the development of atherosclerotic plaques in experimental animals. High-fat diets, such as 1% or 2% cholesterol diets, have been found to elevate serum low-density lipoprotein (LDL), inducing atherogenesis in certain animals such as hamsters [91] and guinea pigs [92].
Therefore, the use of high-fat diets in promoting atherosclerosis in animal models has been a valuable tool for studying pathogenesis, as well as for testing potential therapies in reversing the atherosclerotic process.
Overall, an ideal animal model should be representative of human atherosclerosis and should be feasible and affordable. Although animal models have played a significant role in our understanding of the induction of atherosclerotic lesions, they have some limitations (Table 2). Not all experimental animals, such as rats and mice, respond similarly to a given high-fat diet, due to inherent genetic differences. Rats and mice are not good models for atherosclerosis, because they are typically resistant to atherogenesis, and even diets as high as 10% w/w cholesterol are not usually sufficient to produce vascular lesions [121]. The lipid metabolism of normal rats and mice is primarily based on high-density lipoprotein (HDL) rather than on LDL as in humans, which might account for this resistance to atherogenesis [122]. The use of other interventions, such as vitamin D3, to establish atherosclerotic calcification or aortic medial calcification is often required [123]. Furthermore, a high-fat diet may represent a toxic proinflammatory stimulus rather than a low-grade, chronic inflammatory state in animals [124]. Moreover, from a nutritional perspective, diluting a chow diet with lipids increases the caloric density of the diet and reduces the ratio of essential nutrients to dietary energy, potentially leading to an imbalance in nutrient intake in animals consuming the atherogenic diet [125].

Table 2: Animal models for atherosclerosis: advantages and limitations.

Rabbits
(i) [95]; (ii) deficiency in hepatic lipase leads to hepatotoxicity following prolonged cholesterol feeding [96].

Pigs
Advantages: (i) an anatomically and physiologically similar cardiovascular system to humans [97]; (ii) susceptible to spontaneous atherosclerosis [98]; (iii) comparable patterns of plaque distribution [99]; (iv) high availability (for miniature pigs).
Limitations: (i) large size with resultant management difficulties; (ii) high maintenance cost.

Dogs
Advantages: (i) easy to work with; (ii) ideal size; (iii) high availability.
Limitations: (i) highly resistant to atherogenesis; (ii) status and anthropomorphic attitudes toward dogs; (iii) important aspects of their cardiovascular system differ from humans [100].

Hamsters
Advantages: (i) low cost; (ii) high availability; (iii) easy to handle and maintain; (iv) carry a significant portion of plasma cholesterol in LDL particles and are therefore close to humans [101]; (v) sensitive to high-fat diets [102].
Limitations: (i) inconsistency of lesion development and absence of advanced lesions [103]; (ii) require highly abnormal diets and/or treatment with a cytotoxic chemical agent, such as streptozotocin [104].

Guinea pigs
Advantages: (i) develop diet-induced atherosclerosis; (ii) most cholesterol is transported in LDL particles [105]; (iii) ovariectomized guinea pigs show a plasma lipid profile similar to that of postmenopausal women [106].
Limitations: (i) require constant supplementation with vitamin C, which potentially acts as an antioxidant to interfere with atherogenesis [107].

Nonhuman primates
Advantages: (i) genetic resemblance to humans; (ii) similar omnivorous diet; (iii) similar metabolism; (iv) develop metabolic syndrome as they age [108].
Limitations: (i) expensive; (ii) low availability; (iii) long-lived (thus requiring lengthy experimental periods); (iv) potential carriers of dangerous viral zoonoses [104]; (v) significant ethical issues.

Pigeons
Advantages: (i) low cost; (ii) easy handling; (iii) susceptible to atherosclerosis; (iv) sufficient size.
Limitations: (i) nonmammalian; (ii) different lipoprotein compositions and metabolism [109]; (iii) differences in arterial histology [110].

Chickens
Advantages: (i) low cost; (ii) high availability; (iii) develop atherosclerosis naturally in the aorta and coronary arteries, with cholesterol feeding accelerating the pathogenesis [111].
Limitations: (i) nonmammalian; (ii) viral infection is associated with atherosclerosis [112,113].

CETP: cholesterol ester transfer protein; HDL: high-density lipoprotein; LDL: low-density lipoprotein.

Table 3: Genetically modified animal models for atherosclerosis.
Apolipoprotein E knockout (ApoE−/−) mice: Apolipoprotein E (apoE), a constituent of lipoprotein responsible for packaging cholesterol and other fats and carrying them through the bloodstream, is inactivated by gene targeting. These mice exhibit a total plasma cholesterol concentration of 11 mM, compared to 2 mM in their parent C57BL/6 strain [114].

LDL receptor knockout (LDLR−/−) mice: The LDL receptor (LDLR) is a cell-surface receptor on liver cells that mediates the endocytosis of apoE-containing lipoproteins to clear cholesterol-abundant LDL particles from the circulation. Total plasma cholesterol levels increase twofold compared to wild-type, owing to a seven- to ninefold increase in intermediate-density lipoproteins (IDL) and LDL without a significant change in HDL [115].

Scavenger receptor class B member 1 knockout (SR-BI KO) mice: SR-BI facilitates the uptake of cholesterol from HDL in the liver and plays a key role in determining plasma cholesterol levels (primarily HDL). Heterozygous and homozygous mutants show 31% and 125% higher plasma cholesterol concentrations, respectively, than wild-types [116].

db/db mice: OB-R is a high-affinity receptor for leptin, an important circulating signal for the regulation of feeding, appetite, and body weight. Fatty acid oxidation rates are progressively higher in db/db mice, in parallel with the earlier onset and greater duration of hyperglycemia [117].

ob/ob mice: A mutation results in a structurally defective leptin that does not bind to the OB-R. ob/ob mice have no leptin action and exhibit obesity and endothelial dysfunction [118].

Fatty Zucker rats: A spontaneous mutant gene (fa, or fatty) affects the action of leptin. These rats have high levels of lipids and cholesterol in their bloodstream, become noticeably obese by 3 to 5 weeks of age, and exceed 40% body lipid by 14 weeks of age [119].

Cholesterol ester transfer protein (CETP) transgenic rats: CETP inhibits HDL-mediated reverse cholesterol transport by transferring cholesterol from HDL to very low-density lipoprotein (VLDL) and LDL, promoting atherogenesis. These animals exhibit an 82% increase in non-HDL cholesterol and an 80% reduction in HDL cholesterol compared to wild-type rats [120].
A small, genetically reproducible murine model of atherosclerosis has long been desired, given its relatively easy handling and breeding procedures as well as its low cost. Researchers have used genetic technology to produce a number of genetically modified murine models to overcome the many deficiencies of larger animals, particularly to allow studies of potential therapies that require large numbers of subjects. An exciting scientific breakthrough occurred in 1992, when Zhang et al. found that ApoE-deficient mice generated by gene targeting had five times higher plasma cholesterol levels and developed foam cell-rich depositions in their proximal aortas by the age of 3 months [114]. This model was the very first line of genetically modified murine models for atherosclerosis studies introduced to the research community. Since then, further research has led to other genetically modified models that mimic important aspects of atherosclerosis, such as fatty streaks, deposition of foam cells, vulnerable and stable plaques, and related complications such as arterial calcification, ulceration, hemorrhage, plaque rupture, thrombosis, and stenosis. Fatty Zucker rats, cholesterol ester transfer protein (CETP) transgenic rats, LDL receptor-knockout (KO) mice, and db/db mice are a few of the genetically modified models developed over recent years (Table 3). The development of techniques for direct genetic modification, previously restricted to murine species, promises to produce other new strains.
According to the oxidation hypothesis of atherosclerosis [126], oxidized LDL (oxLDL) plays a key role in the initiation of the atherosclerotic lesion as well as in almost every step of the atherogenic process, from the formation of cholesterol-laden foam cells in plaques to functioning as a chemoattractant for macrophages and vascular smooth muscle cells [126,127]. Since the etiology of atherosclerosis is multifactorial, the potential lipid-raising effect and lipid oxidation might together contribute to atherogenesis. The potential atherogenic effect of heated oils has been studied in experimental animals. Staprans et al. [128] reported an increase of the β-very low-density lipoprotein (β-VLDL) fraction and the formation of fatty streak lesions in the aortas of male New Zealand White rabbits fed a low-cholesterol (0.25%) diet containing 5% thermally oxidized corn oil. Atherosclerotic lesions have also been observed in genetically modified murine models, that is, LDLR−/− and apoE−/− mice, after chronic consumption of an oxidized cholesterol diet [129].
However, there are still limitations in the experimental animals used in the aforementioned studies. For instance, cholesterol diets in rabbits may lead to hepatic toxicity [96]. Furthermore, genetically modified mice are rather costly and may impose a substantial financial constraint on a research project as well as limit the number of samples. Therefore, a more feasible and affordable alternative has been developed to induce atherosclerosis in rats. Adult female Sprague-Dawley rats were ovariectomized prior to 16-week administration of a 2% cholesterol diet fortified with 15% w/w of heated vegetable oil [130][131][132]. Ovariectomy was performed to simulate a postmenopausal condition characterized by the absence of cardioprotective estrogen [133].
Although there was a trend of increasing total cholesterol (TC) in all oil-fed groups, only heated oil-treated rats showed a significant increase in serum TC compared to the control [130,131]. There were pronounced focal disruptions in the aortic intimal layer of the rats fed heated oil. Moreover, mononuclear cells were also observed in the intimal layer [132]. Based on these findings, it is possible to overcome rats' natural resistance to atherosclerosis by removing the ovaries. A further attempt was made to induce atherosclerosis in male Sprague-Dawley rats, as the use of the previous ovariectomized model is confined to menopause-induced atherosclerosis. Rats were fed standard rat chow fortified with 15% w/w of heated oil for 16 to 24 weeks. Histological study of the heart revealed cardiac toxicity with the presence of necrosis in cardiac tissue [52]. The intimal layer was observed to be noticeably thickened due to massive lipid accumulation in the subendothelial space [134]. Lectin-like oxidized low-density lipoprotein receptor-1 (LOX-1), an endothelial receptor for endocytosis of oxLDL, was significantly increased in heated oil-fed rats compared to the control [135]. There were significant positive correlations between LOX-1 and the expression of vascular cell adhesion molecule-1 (VCAM-1) and intercellular adhesion molecule-1 (ICAM-1) in heated oil-fed rats [135]. We suggest that a heated oil diet can be used to induce atherosclerosis in rat models. However, the atherosclerosis-inducing effect seems to be more prominent in ovariectomized rats than in male rats, as a longer period of intervention is required in male animals. Though the male rat model requires a longer duration of diet treatment to develop atherosclerosis, it is free from any surgical intervention, in contrast to female rats undergoing ovariectomy.
The use of other interventions such as vitamin D3 [123] may be helpful in the escalation of atherosclerotic plaque formation.
Conclusion
Progress in cardiovascular disease control requires understanding of the pathogenesis of the disease and testing of potential therapies, both experimentally and clinically. Experimental animal models, particularly murine species, have been a useful tool in this regard. The ideal animal model of cardiovascular disease should be representative of human conditions metabolically and pathophysiologically. The development of genetically modified animal models has enabled researchers to manipulate a specific target (either a gene or a protein), whose role in pathogenesis may be subsequently established. This has led to the discovery of a vast spectrum of potential targets for ameliorative intervention. While the use of animal models has undeniably offered novel insights into different important aspects of a disease, there is still no species absolutely suitable for all studies, given the multifactorial nature of cardiovascular disease. Therefore, it is of utmost importance to choose an appropriate model to study different parts of cardiovascular disease; otherwise, many exciting research findings may fail to translate into human studies. An agreement on appropriate experimental models for the study of different facets of cardiovascular disease would be a viable and effective strategy to further advancement in this field.
"year": 2015,
"sha1": "afcfc41c20ed52c763cf6a9d4ee860520a96e436",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2015/528757.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5db216e9bb011715586116d0426e1c33dcee02a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Deep-sequencing identification of differentially expressed miRNAs in decidua and villus of recurrent miscarriage patients
Purpose: MicroRNAs (miRNAs) are small non-coding RNA molecules that play critical roles in post-transcriptional regulation of gene expression. The aim of this study was to identify differentially expressed miRNAs in the decidua and villus of recurrent miscarriage (RM) patients.

Methods: Participants were recruited at the outpatient Department of Gynecology and Obstetrics, The Second Hospital of Tianjin Medical University, China. Decidua and villus tissues were collected by curettage from recruited RM patients and normal pregnant women with their informed consent. MiRNA expression profiles in decidua and villus were determined by deep-sequencing analysis. The predicted target genes of the differentially expressed miRNAs were analyzed by miRWalk. The differential expression of four miRNAs in decidua and four miRNAs in villus between six pairs of RM patients and normal pregnant women was confirmed by qRT-PCR analysis. The expression patterns of two predicted target genes, Bcl-2 and Pten, in the same six pairs of decidua or villus tissues were detected by Western blotting.

Results: In total, 18 RM patients and 15 normal pregnant women were recruited. Thirty-two miRNAs in decidua and four miRNAs in villus of RM patients were significantly up-regulated compared to normal pregnant women, and five miRNAs in villus of RM patients were remarkably down-regulated (P value < 0.05 and fold change > 2). These differentially expressed miRNAs were predicted to target a large number of genes involved in cell apoptosis, the p53 signaling pathway, the cell cycle, and other cellular bio-functions. Differential expression of hsa-miR-516a-5p, -517a-3p, -519a-3p and -519d in decidua, as well as hsa-miR-1, -372, -100-5p and -146a-5p in villus, was validated by qRT-PCR analysis. In the decidua of RM patients, expression of hsa-miR-516a-5p, -517a-3p, -519a-3p and -519d was significantly up-regulated compared to normal pregnancy. In the villi of RM patients, expression of hsa-miR-100 and -146a-5p was significantly higher, while hsa-miR-1 and -372 were significantly lower, compared to normal pregnancy. Furthermore, the expression of Bcl-2 and Pten, predicted target genes of hsa-miR-1 and hsa-miR-372 respectively, was significantly up-regulated in the villi of RM patients.

Conclusions: These data suggest that the pathogenic process of RM may be associated with altered miRNA expression profiles in decidua and villus. In particular, the aberrant placental expression of hsa-miR-1 and -372 might be involved in the progression of RM, but this needs to be further investigated in larger studies.
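The screening thresholds above (P value < 0.05 and fold change > 2, in either direction) amount to a simple filter over per-miRNA statistics. The Python sketch below illustrates this; the miRNA names echo the text, but the expression values and p-values are invented toy numbers, not the study's data.

```python
import math

# Toy records: (miRNA, mean expression in RM, mean in control, p_value).
# Values are illustrative only; the study's actual screen applied the
# same cut-offs to deep-sequencing read counts.
results = [
    ("hsa-miR-517a-3p", 820.0, 190.0, 0.003),   # up-regulated example
    ("hsa-miR-1",        35.0, 140.0, 0.010),   # down-regulated example
    ("hsa-miR-xyz",     300.0, 250.0, 0.400),   # fails both cut-offs
]

def screen(records, p_cut=0.05, fc_cut=2.0):
    """Keep miRNAs meeting the thresholds and label their direction
    along with the log2 fold change (RM over control)."""
    hits = []
    for name, rm, ctrl, p in records:
        fc = rm / ctrl
        if p < p_cut and (fc > fc_cut or fc < 1 / fc_cut):
            hits.append((name, "up" if fc > 1 else "down",
                         round(math.log2(fc), 2)))
    return hits

print(screen(results))
```

Checking both fc > 2 and fc < 1/2 captures up- and down-regulation symmetrically, which is why log2 fold change (symmetric around zero) is the conventional reporting scale.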
Background
Recurrent miscarriage (RM) is defined as two or more consecutive pregnancy losses prior to the 20th week of gestation in humans, and occurs in 1-2 % of pregnant women of reproductive age [1,2]; the etiology of 68 % of RM cases is unexplained [3]. Although the female causes of RM have been attributed to uterine structural defects, abnormal development of the embryo, defective immunological regulation at the maternal-fetal interface, and free radical metabolism imbalance [4], the exact pathogenic mechanisms underlying RM remain unclear.
MicroRNAs (miRNAs), a class of small non-coding RNA molecules of 21-24 nucleotides, are widely expressed in mammals and participate in the post-transcriptional regulation of gene expression [5]. Mature miRNAs are incorporated into the RNA-induced silencing complex and target specific messenger RNAs via imperfect base pairing for translational repression or mRNA cleavage [6]. It has been estimated that miRNAs account for ~1 % of predicted genes in higher eukaryotic genomes, and that 30 % of functional genes are potential targets of miRNAs [7]. Therefore, miRNAs are believed to play pivotal roles in many biological processes, including cell proliferation, differentiation, and apoptosis [8]. Most miRNAs are conserved between species, and approximately 30 % of miRNA sequences are highly conserved between vertebrate and invertebrate genomes [9]. In particular, as they are stable and detectable in peripheral blood, miRNAs are emerging as biomarkers for clinical screening or diagnosis of human diseases [10].
In recent years, accumulating evidence indicates that abnormal expression of miRNAs is associated with multiple human reproductive disorders, including endometriosis, preeclampsia, ectopic pregnancy, and RM [11][12][13]. However, as more than 50,000 miRNAs are predicted to be present in a mammalian cell [14,15], and one miRNA can target multiple genes while one gene can be targeted by several miRNAs, the network of miRNA regulation of gene expression is clearly complex and sophisticated. Therefore, it is still necessary to screen and identify the specific miRNAs involved in the pathogenic mechanisms of RM.
Recently, deep-sequencing analysis, a high-throughput transcriptomic approach, has been developed and successfully applied to screening differentially expressed miRNAs [16,17]. Thus, the present study was undertaken to screen differentially expressed miRNAs in placental decidua or villi of RM patients by deep-sequencing analysis, with a view to providing new cues for future studies on the pathogenic mechanisms of RM and to searching for biomarker candidates that could potentially be used to predict adverse pregnancy outcomes at an early stage.
Patients and samples
All participants in this study were recruited from June 2013 to August 2013 at the outpatient department of Gynecology and Obstetrics, The Second Hospital of Tianjin Medical University, China. To limit the influence of confounding factors on subsequent analyses, all participants were recruited according to the same inclusion and exclusion criteria. Eighteen RM patients [age 29.61 ± 4.41 years and gestational age at sampling 8.33 ± 1.80 weeks (mean ± SD)], who had experienced at least two consecutive embryonic losses before the 12th gestational week and whose current pregnancy loss was objectively confirmed by transvaginal ultrasound examination, were recruited into the RM group. Complete clinical summaries of their personal history of thromboembolic disease and of successful pregnancies or previous pregnancy losses were obtained. Classical risk factors such as abnormal parental karyotypes, uterine anatomical abnormalities, infectious diseases, luteal phase defects, diabetes mellitus, thyroid dysfunction and hyperprolactinemia were excluded by medical examinations. Meanwhile, 15 clinically normal pregnant women [age 29.33 ± 6.94 years and gestational age at sampling 7.33 ± 0.82 weeks (mean ± SD)], whose pregnancies were terminated for non-medical reasons by legal abortion around the 6-12th gestational week, were recruited into the normal pregnancy group as the control. They were also checked for classical risk factors for pregnancy loss. After informed consent was obtained, decidua and villus tissues were collected by curettage from these 33 participants. All collected tissues were immediately minced into small fragments and stored in RNAlater® tissue collection solution (Invitrogen, Carlsbad, CA) at -80 °C until further analyses in May 2014; that is, the tissues had been stored for 9-11 months. This study was approved by the Medical Ethics Committee of Shanghai Institute of Planned Parenthood Research (Ref # 2013-7, 2013).
Written informed consents were obtained from all patients who provided tissue samples, and we have also obtained consents to publish research data derived from these collected samples.
Small RNA deep-sequencing and data analyses
Total RNA used for deep sequencing was extracted from decidua and villus tissues using TRIzol reagent according to the manufacturer's protocol (Invitrogen). The concentration of the total RNA product was measured by NanoDrop (Thermo Scientific, Wilmington, DE, USA), and RNA integrity was checked on a Bioanalyzer 2100 (Agilent, Santa Clara, CA). Two hundred nanograms (200 ng) of total RNA from each case was used to prepare small RNA libraries according to the manufacturer's instructions (Illumina, San Diego, CA), and RNase inhibitor (Invitrogen) was included in the reverse transcription system. The cDNA libraries were sequenced on an Illumina HiSeq 2000 instrument with 50-base-pair single reads. Raw sequencing data were mapped to the human miRNA database (miRBase v21) using Bowtie2. Then, a Mann-Whitney test was performed to discover differentially expressed miRNAs between the RM group and the normal pregnancy group. miRNAs with P value < 0.05 and fold change > 2 were judged to be significantly differentially expressed in the decidua or villus of RM patients compared to normal pregnant women. The target mRNAs of these differentially expressed miRNAs were predicted by miRWalk (http://www.umm.uni-heidelberg.de/apps/zmf/mirwalk/mirnatargetpub.html).
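As a sketch, the screening rule described above (Mann-Whitney test with P < 0.05 and fold change > 2) can be expressed as follows. The read counts are invented for illustration, and the normal-approximation p-value is a simplifying assumption; this is not the study's actual Bowtie2/miRBase pipeline.

```python
import math

def mann_whitney_p(x, y):
    """Two-sided Mann-Whitney U test p-value via the normal
    approximation (assumes no tied values; illustrative only)."""
    nx, ny = len(x), len(y)
    ranked = sorted(list(x) + list(y))
    rank_sum_x = sum(ranked.index(v) + 1 for v in x)  # ranks 1..n
    u = rank_sum_x - nx * (nx + 1) / 2
    mu = nx * ny / 2
    sigma = math.sqrt(nx * ny * (nx + ny + 1) / 12)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p

def is_differential(rm_counts, ctrl_counts, alpha=0.05, fc_cutoff=2.0):
    """Apply the paper's two criteria to one miRNA (positive counts assumed)."""
    p = mann_whitney_p(rm_counts, ctrl_counts)
    mean_rm = sum(rm_counts) / len(rm_counts)
    mean_ctrl = sum(ctrl_counts) / len(ctrl_counts)
    fc = max(mean_rm, mean_ctrl) / min(mean_rm, mean_ctrl)
    return p < alpha and fc > fc_cutoff, p, fc
```

For hypothetical counts [120, 150, 140, 160] vs. [40, 35, 50, 45], both criteria are met (fold change about 3.4), so the miRNA would be flagged as differentially expressed.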
Quantitative RT-PCR
Total RNA samples of decidua and villus tissues were extracted using TRIzol according to the manufacturer's instructions (Invitrogen) and quantified using a NanoDrop ND-1000 (Thermo Scientific). Single-stranded cDNA was synthesized using a reverse transcription kit (Applied Biosystems, Foster City, CA, USA). Real-time PCR was carried out using FastStart Universal SYBR Green Master (Roche Diagnostics, Welwyn Garden City, UK) and analysed on an ABI 7900 HT (Applied Biosystems). All miRNA assay primers used in this study were purchased commercially (RiboBio, Guangzhou, Guangdong Province, China). Primer efficiencies were determined from standard curves. Relative miRNA expression was calculated by the efficiency-corrected ΔΔCt method, normalized to the endogenous control U6 snRNA. Each sample in the RM group and the normal pregnancy group was measured in triplicate, and the experiment was repeated at least three times.
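A minimal sketch of an efficiency-corrected relative quantification, here in the common Pfaffl formulation with U6 as the reference; the efficiencies and Ct differences below are invented for illustration and are not measured values from this study.

```python
def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
    """Efficiency-corrected relative expression (Pfaffl formulation):
    ratio = E_target**dCt_target / E_ref**dCt_ref,
    where dCt = Ct(control) - Ct(sample) for each assay and E is the
    amplification efficiency per cycle (2.0 = perfect doubling)."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Hypothetical example: the target miRNA amplifies 2 cycles earlier in the
# sample than in the control while U6 is unchanged -> 4-fold up-regulation.
ratio = pfaffl_ratio(2.0, 2.0, 2.0, 0.0)  # 4.0
```

With both efficiencies at exactly 2.0, the formula reduces to the classical 2^(-ΔΔCt) result.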
Western blotting
The collected decidua and villus tissues were quickly frozen in liquid nitrogen and granulated into fine powder. The tissue powder was homogenized in lysis buffer (Beyotime, China). The tissue lysate was centrifuged, and the supernatant was transferred into a new tube. Protein concentration was determined by a standard Bradford assay (Bio-Rad, Hercules, CA). Fifty micrograms (50 μg) of total protein was separated on a 12 % acrylamide gel and then transferred electrophoretically onto nitrocellulose membranes (Millipore, Darmstadt, Germany). Membranes were incubated overnight at 4 °C with specific primary antibodies against Bcl-2, Pten and β-actin (Santa Cruz, Santa Cruz, CA), followed by incubation with the appropriate secondary antibody. The blot was developed using the PhosphaGLO AP Substrate kit (KPL, Gaithersburg, MD) according to the manufacturer's protocol. Western blotting was repeated in triplicate. To ensure comparability between experiments, band intensities were quantified by densitometry using ImageJ software (US National Institutes of Health, Maryland, USA) and normalized to the internal control.
Statistical analysis
All values are presented as mean ± SEM. Statistical comparisons among groups were analyzed by one-way ANOVA followed by Student's t test using the SPSS software package (version 19, SPSS Inc., Chicago, IL). A value of P < 0.05 was considered significant.
Differentially expressed miRNAs in decidua and villus of RM patients
Human decidua and villus tissues were obtained from 18 RM patients and 15 normal pregnant women. No significant differences were observed in age and gestational period between the two groups (Table 1). Total RNA was extracted from each tissue sample. After checking RNA integrity, all 18 RM decidua RNA samples and 15 normal pregnancy decidua RNA samples were qualified for subsequent analysis; however, only three RM villus RNA samples (4, 10 and 12 V) and four normal pregnancy villus RNA samples (C2 V, C4 V, C7 V and C10 V) were qualified to construct cDNA libraries for deep sequencing. A total of 32 miRNAs were screened to be significantly up-regulated in decidua of RM patients, whereas a total of nine miRNAs were differentially expressed in villus of RM patients, including four up-regulated and five down-regulated miRNAs (Fig. 1; Table 2). To validate the reliability of the deep-sequencing data, we selected four miRNAs from decidua (hsa-miR-516a-5p, 517a-3p, 519a-3p and 519d) and four miRNAs from villus (hsa-miR-1, 372, 100-5p and 146a-5p) and confirmed their differential expression in decidua or villus between the RM and normal pregnancy groups by qRT-PCR analysis. As the villous RNA sample size used for deep sequencing was small, another three RM villous RNA samples (1, 2 and 3 V) and two normal pregnancy villous RNA samples (C1 V and C8 V), whose integrities were qualified for qRT-PCR analysis (RIN ≥ 6 and 28S/18S ≥ 0.7), were also used in the qRT-PCR analysis. The qRT-PCR results showed that the expression patterns of all eight selected miRNAs were in concordance with the deep-sequencing data, indicating that the deep-sequencing data were reliable (Fig. 2).
Functional analysis of differentially expressed miRNAs in RM patients
A total of 252 putative target mRNAs of the 32 differentially expressed miRNAs in decidua, as well as 1281 putative target mRNAs of the nine differentially expressed miRNAs in villus, were predicted by miRWalk. The functions of the predicted target genes and the molecular pathways they potentially constitute were assessed using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses. In decidua of RM patients, the predicted targets were significantly enriched for cell death, apoptosis, cell proliferation and response to hormone stimulus (Fig. 3a), processes known to participate in decidual development during embryo implantation. The pathway analysis showed that the predicted target genes were involved in cancer, the ErbB signaling pathway, focal adhesion and the p53 signaling pathway, etc. (Fig. 3b). In the villus of RM patients, the predicted targets were significantly enriched for several biological processes known to be involved in embryonic or fetal development, such as cell proliferation, anti-apoptosis and blood vessel development (Fig. 3c).
The pathway analysis showed that the predicted target genes in villus participate in apoptosis, the p53-signaling pathway, the cell cycle, etc. (Fig. 3d).
Differential expressions of Bcl-2 and Pten proteins in villus of RM patients
Given that Bcl-2 mRNA was predicted as a target of miR-1 and Pten mRNA was a predicted target of miR-372, the expression levels of Bcl-2 and Pten proteins in villus of RM patients and normal pregnant women were measured by Western blot analysis. The results showed that the expression levels of Bcl-2 and Pten in villus of RM patients were significantly increased compared to normal pregnant women (P < 0.05) (Fig. 4), consistent with the down-regulated expression of miR-1 and -372 in villus of RM patients.
Discussion
In this study, we determined the differential miRNA expression profiles in villus and decidua of RM patients compared to those of normal pregnant women using deep-sequencing analysis. A total of 32 miRNAs were screened out as significantly up-regulated in decidua of RM patients, while nine miRNAs were identified as differentially expressed in placental villi of RM patients, including four up-regulated (hsa-miR-191-5p, -24-3p, -100-5p and -146a-5p) and five down-regulated (hsa-miR-1, -372, -371a-5p, -376c-3p and -486-5p) miRNAs, compared to normal pregnant women. We further confirmed by qRT-PCR the up-regulation of hsa-miR-516a-5p, -517a-3p, -519a-3p and -519d in decidua of RM patients and of hsa-miR-100-5p and -146a-5p in villus of RM patients, as well as the down-regulation of hsa-miR-1 and -372 in villus of RM patients. Furthermore, increased expression of Bcl-2, a predicted target of miR-1, and Pten, a predicted target of miR-372, was observed in villus of RM patients. Recurrent miscarriage is presently difficult to prevent and treat because of the lack of knowledge of the molecular mechanisms of this disease. Thus, in order to identify novel potential targets for the clinical diagnosis or treatment of RM, we tried to establish the miRNA expression profiles in the decidua and villus of RM patients. Although we successfully recruited 18 RM patients and 15 normal pregnant women, only a fraction of the villous RNA samples were of sufficient integrity for deep sequencing. We thought that this might result from the relatively long storage time of the villous tissue samples (more than 9 months), and it seems that, under the same storage conditions, human decidua tissue samples are much more stable than villus tissue samples. The number of qualified villus samples was so small that we wondered whether the results of the deep-sequencing analysis were reliable.
Six pairs of decidua and villus from RM patients and normal pregnancy controls were used for validation of eight selected miRNAs by qRT-PCR, and the results of qPCR were extremely consistent with deep sequencing data, indicating that the deep sequencing data were reliable.
Since each miRNA has been predicted to have a broad range of target mRNAs based on the degree of sequence homology, these 32 miRNAs in decidua and nine miRNAs in villus could undoubtedly be involved in different cellular functions, and we wondered whether these cellular functions would be important for the establishment and maintenance of pregnancy. The GO analysis provides a comprehensive source for functional genomics and is an effective bioinformatics research tool to unify the representation of genes and gene products, creating evidence-supported annotations to describe the biological roles of individual genomic products (e.g. genes, proteins, ncRNAs, complexes) [18]. Thus, we carried out GO analysis for the 1533 predicted target genes (252 in decidua and 1281 in villus) of these differentially expressed miRNAs. It was revealed that, in decidua, these predicted target genes mainly participate in cell death, apoptosis, cell proliferation and hormone stimulus, and the major KEGG pathways were cancer, ErbB signaling, focal adhesion, p53-signaling and apoptosis. Meanwhile, in villus, these target genes are mostly involved in the regulation of cell proliferation, apoptosis, blood vessel development and angiogenesis, and the major KEGG pathways were apoptosis, p53-signaling and the cell cycle. The pathologies that lead to RM must ultimately, either directly or indirectly, affect the interaction between the maternal endometrial (decidual) and trophoblastic tissue [19]. According to the GO analysis results, aberrant expression of these target genes in villus might affect trophoblast invasion and placentation, while aberrant expression of these target genes in decidua could adversely impact trophoblast invasion [20].
More interestingly, given that the p53-signaling pathway plays critical roles in apoptosis, and that appropriate apoptosis of decidual cells, trophoblast cells and decidual immune cells is thought to be essential for the establishment and maintenance of pregnancy [21,22], we speculated that the differentially expressed miRNAs (such as hsa-miR-519a, hsa-miR-517a, hsa-miR-205, hsa-miR-1, hsa-miR-372 and hsa-miR-486) which target apoptosis-related genes might be involved in the regulation of p53-signaling and apoptosis, and might thereby participate in the pathogenesis of RM.
Bcl-2, a key regulator of cell apoptosis, was identified as a predicted target of miR-1, a down-regulated miRNA in villus of RM patients. MiR-1 is a member of the muscle-specific miR-1 family (myomiRs), which currently consists of six members [23]. It has previously been reported that miR-1 inhibits cell proliferation but promotes cell differentiation, and is involved in tumorigenesis as a tumor suppressor; its expression is abnormally down-regulated in several types of cancers, including lung, prostate and colorectal cancers and rhabdomyosarcoma [24]. Abnormalities of Bcl-2 function have been implicated in many diseases including cancer, neurodegenerative disorders and autoimmune diseases [25]. We observed in this study that the expression level of Bcl-2 protein was significantly increased in RM villus, suggesting an inhibitory effect of miR-1 on Bcl-2 and that miR-1/Bcl-2 signaling might be involved in the progression of RM. We also noticed that two out of six Bcl-2 bands showed higher expression in normal pregnancy than in RM, which might result from individual differences and/or gestational week variation. As trophoblast cell proliferation and death are active during early placental development [26], heterogeneity of Bcl-2 expression might be observed in different human villus tissues, calling for validation with a larger sample size. Meanwhile, it is puzzling that the expression level of Bcl-2 in trophoblast cells increases during placental development [27], whereas a reduced expression of Bcl-2 has been associated with pregnancy loss [28]. This may be due to the multiple and complex effects of differentially expressed miRNAs on the microenvironment of the decidua and villus of RM patients.
In order to improve the clinical management of RM patients, great efforts have been made to search for biomarkers that could potentially be used to predict adverse pregnancy outcomes at an early stage, and miRNAs present promising biomarker candidates for RM because their serum concentrations can be measured in maternal peripheral blood samples [1,29]. MiR-1 has been identified as a potential diagnostic biomarker for colorectal cancer [30]. Thus, the serum concentration of miR-1 in RM patients should be measured and compared with that of normal pregnant women to explore its possible use as a biomarker candidate for RM. Pten, another important factor involved in the p53-signaling pathway and apoptosis, was predicted as a target gene of miR-372, which was down-regulated in villus of RM patients. MiR-372 belongs to the miR-371-372 gene cluster, located on chromosome 19q13.42 [31]. Although the role of miR-372 itself in reproductive regulation is not yet clear, it has been reported that the expression level of miR-371 increases from first-trimester trophoblast cells to term trophoblast cells [32]. Further evidence showed that the miR-371 cluster was up-regulated in first-trimester placentas compared to third-trimester placentas, indicating that miR-371 might play critical roles in placental development [33]. Meanwhile, it has been demonstrated that miR-372 regulates the cell cycle, apoptosis, invasion and proliferation in several types of human cancers [34]; thus, it is reasonable to speculate that miR-372 might also be involved in placental development. To study the correlation between miR-372 and Pten, we detected the expression of Pten in villus of RM patients and found that its expression level was increased in RM villus, suggesting an inhibitory effect of miR-372 on Pten and that miR-372/Pten signaling might be involved in the progression of RM.
Consistently, it has been reported that the villous expression of Pten decreases as pregnancy advances, and up-regulated expression of Pten has been observed in early pregnancy loss [35,36]. It has also been shown that circulating serum miR-372 could serve as a testicular germ cell cancer biomarker [37]. Thus, miR-372 presents another biomarker candidate for RM that needs to be validated in larger studies in the future.
In this study, we have screened out a number of intriguing miRNA expression differences between RM and normal pregnancy. To further authenticate the association between these differentially expressed miRNAs and the pathologic process of RM, a case-control cohort of RM of considerable size (at least 100 pairs) should be established to collect tissue and peripheral blood samples. The aberrant expression of miRNAs might be linked to abnormal cellular processes in RM patients, but hypotheses about the role of each specific miRNA in the progression of RM need to be further investigated. Also, pregnancy complications are notoriously hard to study in the laboratory because of the absence of appropriate models of human pregnancy. Furthermore, we found many changes in miRNA expression potentially affecting many different processes, and it should be noted that, as the miscarried embryos were dead, necrosis or inflammation might have occurred in the dead embryonic tissues, whereas the embryonic tissue of induced abortion was fresh; therefore, the differentially expressed miRNAs identified here might be the result of miscarriage rather than its cause.
Conclusions
Collectively, a number of miRNAs were identified to be differentially expressed in decidua or villus tissues of RM patients, and these miRNAs might be involved in many bio-functions including p53-signaling and cell apoptosis. The aberrant placental expression of hsa-miR-1 and -372 might be involved in the progression of RM by targeting Bcl-2 or Pten respectively. Future studies will investigate the pathogenic roles of miR-1/Bcl-2 and miR-372/Pten pathways in RM, as well as the association of serum concentrations of miR-1, -372 and other differentially expressed miRNAs with the pathogenic process of RM.
Feasibility of In-line monitoring of critical coating quality attributes via OCT: Thickness, variability, film homogeneity and roughness
The feasibility of Optical Coherence Tomography (OCT) for in-line monitoring of pharmaceutical film coating processes has recently been demonstrated. OCT enables real-time acquisition of high-resolution cross-sectional images of coating layers and computation of coating thickness. In addition, coating quality attributes can be computed based on in-line data. This study assesses the in-line applicability of OCT to various coating functionalities and formulations. Several types of commercial film-coated tablets containing the most common ingredients were investigated. To that end, the tablets were placed into a miniaturized perforated drum. An in-line OCT system was used to monitor the tablet bed. This set-up resembles the final stage of an industrial pan coating process. All investigated coatings were measured, and the coating thickness, homogeneity and roughness were computed. The rotation rate was varied in a range comparable to large-scale coating operations, and no influence on the outcome was observed. The results indicate that OCT can be used to determine end-point and establish in-process control for a wide range of coating formulations. The real-time computation of coating homogeneity and roughness can support process optimization and formulation development.
Introduction
Motivated by a shift towards continuous manufacturing in the pharmaceutical industry and the demand for more automation, in-line process monitoring and real-time process data acquisition are becoming increasingly important. Process analytical methods have been demonstrated and implemented for a range of unit operations in solid dosage manufacturing (Fonteyne et al., 2015; Laske et al., 2017). As far as coating processes are concerned, in-process controls still mainly rely on manual measurement of tablet diameter or weight gain to estimate coating thickness. This procedure is tedious and time-consuming. Moreover, it is affected by the operator's influence, involves a low number of samples and cannot account for tablet core variability.
Thus, more sophisticated methods have been reported, comprising X-ray computed tomography (XμCT) (Ariyasu et al., 2017; Radtke et al., 2019), broadband acoustic resonance dissolution spectroscopy (BARDS) (Fitzpatrick et al., 2012; Alfarsi et al., 2018), near infrared spectroscopy (NIRS) (Gendre et al., 2011; Hattori et al., 2018), Raman spectroscopy (Barimani and Kleinebudde, 2017; Kim and Woo, 2018), terahertz pulsed imaging (TPI) (May et al., 2011; Haaser et al., 2013; Lin et al., 2015a) and optical coherence tomography (OCT) (Markl et al., 2014, 2015, 2018; Sacher et al., 2019). Although XμCT offers high resolution, it is time-consuming and therefore mostly used as a reference method. BARDS can evaluate the coating thickness and integrity of samples drawn from the process using a correlation model. The remaining methods are capable of measuring coating properties in-line. While calibration models are required for NIR and Raman spectroscopy, TPI and OCT can measure the coating thickness directly. In addition to offering high resolution, TPI and OCT can measure coating properties of single dosage units (Lin et al., 2017a). OCT in particular, which is based on low-coherence interferometry, is suitable for monitoring coating quality in real time due to its high acquisition rate (Sacher et al., 2019). Coating homogeneity and coating roughness can be computed in real time. The drawback of OCT is a lower penetration depth compared to TPI (Lin et al., 2015b). Scattering particles can limit the penetration and hinder coating layer detection. However, a wide range of functional coating formulations can be analyzed by means of OCT. Lin et al. (2017b) investigated the applicability of OCT to common coating materials in the off-line mode. Wolfgang et al. (2019) demonstrated the validity of OCT results for at-line and in-line applications using a polymer reference target as well as coated tablets and complementary methods.
However, all studies on the applicability of OCT for the analysis of different coating formulations have so far focused on off-line OCT technology. Therefore, in this study we investigate the feasibility of an in-line OCT system for the analysis of common coating formulations. As a model setup, the coating process was experimentally simulated using a perforated rotating drum, which enables the same measurement conditions as an industrial-scale pan coater. This drum is commercially used as an at-line monitoring system, as it mimics a modern drum coater in terms of tablet bed movement and monitoring characteristics. Several types of commercially available coated tablets were investigated. In addition to coating thickness, for the first time homogeneity and roughness were computed for commercial tablet formulations based on data acquired in real time.
Tablet core and coating material
Commercial tablets with different coating formulations were purchased in a standard pharmacy. In addition, three types of tablets were supplied by a pharmaceutical company (hereinafter referred to as Pharm 1, Pharm 2 and Pharm 3 rather than by brand name). The coatings of the investigated tablets contain mostly common polymers and represent most types of coating functionality, i.e., cosmetic, delayed release (enteric), extended release and osmotic-controlled release. The cosmetic coatings contained hydroxypropyl methylcellulose (HPMC) as a film-forming polymer, and the delayed release coatings were Eudragit-based. Both Eudragit L and Eudragit L30D are co-polymers of methacrylic acid with methyl acrylate and ethyl acrylate, respectively. The osmotic release coatings in this study contained HPMC and cellulose acetate (CA) as polymers, and the extended release coating additionally contained hydroxypropyl cellulose. The coating of two types of tablets (Pantoloc and Pantoprazol) consisted of two separate layers, which may pose a challenge to the monitoring technique. Only tablets without or with a low amount of scattering pigments were selected, as high pigment content is a limitation for OCT (Lin et al., 2017b). The tablets had round biconvex and oval biconvex shapes and sizes from 7 mm to 19 mm in diameter or length.
Optical coherence tomography measurements
The tablets were filled into an at-line tablet sampling device (Phyllon, Austria) and presented to the OCT probe. This system contains a perforated rotating drum with a variable rotation rate and holes with a diameter of 2.8 mm; it mimics an industrial-scale pan coater and has the advantage that a few hundred tablets can be analyzed at the same time. An in-line OCT sensor (OSeeT Pharma 1D, Phyllon, Austria) was placed below the drum and measured the passing tablets through the holes in the drum, in the same way as in the industrial application described in Markl et al. (2015). A front cover of Plexiglas made it possible to observe the moving tablets. Fig. 1 shows an image of the at-line sampling device in operation, filled with a bed of tablets. For better visibility of the tablets in the image, the Plexiglas cover of the device was removed.
The tablets were placed into the perforated drum. Either 200 ml or a minimum of 100 ml of tablets was filled, depending on tablet size, in order to achieve a uniform tablet bed. For the pan diameter of 0.2 m, a rotation speed of 30 rpm was set to achieve a circumferential speed of 0.31 m/s. This resembles the operating conditions in an industrial-scale pan coater with pan diameters from 0.6 m to 0.8 m, which typically operates at 8-10 rpm (Wang et al., 2012).
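The scale-down argument above reduces to matching the circumferential speed, v = π·D·n/60. The short sketch below reproduces the numbers quoted in the text; the 8.5 rpm production setting is simply an illustrative midpoint of the stated 8-10 rpm range.

```python
import math

def circumferential_speed(pan_diameter_m, rpm):
    """Surface speed of the drum wall in m/s: v = pi * D * n / 60."""
    return math.pi * pan_diameter_m * rpm / 60.0

v_lab = circumferential_speed(0.2, 30)    # 0.2 m drum at 30 rpm -> ~0.31 m/s
v_prod = circumferential_speed(0.7, 8.5)  # 0.7 m coater at ~8.5 rpm -> ~0.31 m/s
```

Both configurations yield essentially the same tablet-bed surface speed, which is why the lab drum is a reasonable stand-in for the production coater.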
For the OCT measurements, the spectral-domain (SD) system described in Sacher et al. (2019) was used. It consists of a base unit (which houses the light source, the spectrometer and the electronics) and the sensor head, which divides the light beam into the measurement and reference beams. A superluminescent diode (SLD, BLMS mini, Superlum Diodes Ltd., Ireland) with a central wavelength of 832 nm and a spectral full-width at half-maximum (FWHM) bandwidth of 75 nm was used as the light source. The resulting axial resolution in air is 4 μm. The system acquires up to 100,000 single depth scans (A-scans) per second. The sensor head was placed into the at-line sampling device and connected to the base unit via an optical glass fibre. For image evaluation, the algorithm presented in Sacher et al. (2019) was applied. First, each A-scan is classified into the categories air, tablet and drum. Next, the interfaces between the air and the coating and between the coating and the core are detected via ellipse fitting. The coating thickness is computed as the shortest distance between the ellipse fits of the air/coating and coating/core interfaces, perpendicular to the top interface. To exactly identify an interface, three adjacent layers must be clearly visible in the OCT image. Taking the axial resolution of 4 μm into account, this results in a detection limit of 12 μm. The coating homogeneity is calculated as the ratio between the number of pixels with brightness above a defined value and the total number of pixels in the detected coating layer. Therefore, a more uniform coating layer with fewer dark pixels yields a higher homogeneity value. The coating roughness is defined as the root mean squared error (RMSE) between the real top-interface peaks and the ideal top-interface ellipse. All coating properties are computed from the acquired depth profiles in real time. Fig. 2 shows how the thickness (distance between the ellipse fits in red), the coating homogeneity (calculated using all pixel data within the ellipse fits and the vertical lines) and the coating roughness (deviation between the upper ellipse fit and the true coating surface in the cross section) are defined.
Fig. 1. At-line sampling device with a rotating drum and an in-line OCT sensor. The insert shows the moving tablet bed in the drum.
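The homogeneity and roughness definitions above translate directly into code. In this sketch the pixel brightness values and interface heights are synthetic, and the ellipse fit is assumed to be available from the preceding detection step.

```python
import math

def coating_homogeneity(coating_pixels, brightness_threshold):
    """Fraction of coating-layer pixels brighter than the threshold;
    a uniform layer with few dark pixels scores close to 1."""
    bright = sum(1 for p in coating_pixels if p > brightness_threshold)
    return bright / len(coating_pixels)

def coating_roughness(surface_heights, ellipse_heights):
    """RMSE between the detected top-interface heights and the ideal
    (ellipse-fit) interface at the same lateral positions."""
    n = len(surface_heights)
    return math.sqrt(sum((s - e) ** 2
                         for s, e in zip(surface_heights, ellipse_heights)) / n)
```

For example, a layer in which half the pixels fall below the brightness threshold scores a homogeneity of 0.5, and a surface deviating by ±1 μm from the fitted ellipse yields a roughness of 1 μm.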
Reference analytics
The tablet coating layers were investigated off-line via light microscopic (LM) imaging. From each tablet type, 10 tablets that had also been measured in the at-line sampling device were selected randomly for LM analysis. Tablets of circular or oblong shape were cut with a sharp knife on one convex side, between the center and the outer edge. Capsule-shaped tablets were cut in the center on one side in the circumferential direction. After cutting off half of the tablet height, all tablets were broken manually. This procedure allowed optimal preservation of the coating layer. The broken sides of the tablets were analyzed by means of a light microscope (Leica DM 4000 M, Leica, Germany) at 50-fold magnification in reflectance mode. The coating thickness was measured at 10 positions for each tablet using the software Leica Application Suite v4.9. An error of 10 % was estimated based on the range of subjective identification of the layer boundaries. Images at 200-fold magnification were taken to obtain an optical impression of the coating homogeneity.
Results and discussion
The OCT images were acquired by means of the in-line OCT system and the coating attributes were computed in real time. A refractive index of 1.5 was assumed for all investigated coatings. Since coating polymers typically have refractive indices between 1.4 and 1.6 in the wavelength range of the applied OCT system (https://refractiveindex.info, 2020), the potential maximum deviation of the coating thickness is 7%, compared to the value obtained via the actual refractive index.
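The correction behind these numbers is straightforward: OCT measures optical path length, which must be divided by the refractive index to obtain geometric thickness. A minimal sketch of this relationship and of the worst-case error quoted above (function names are ours, introduced for illustration):

```python
ASSUMED_N = 1.5  # refractive index assumed for all coatings in this study

def geometric_thickness_um(optical_path_um, n=ASSUMED_N):
    """OCT measures optical path length; dividing by the refractive index
    of the coating recovers the geometric thickness."""
    return optical_path_um / n

def max_relative_error(n_assumed=ASSUMED_N, n_bounds=(1.4, 1.6)):
    """Worst-case relative thickness error when the true index lies within
    n_bounds but n_assumed is used instead."""
    return max(abs(n_assumed / n_true - 1.0) for n_true in n_bounds)
```

Evaluating `max_relative_error()` over the 1.4–1.6 polymer range gives about 0.071, i.e. the ~7% maximum deviation stated in the text; the worst case occurs at the low end of the index range.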
If OCT is employed in industrial applications, the actual refractive index should be used to obtain the most accurate coating thickness. Although the residual moisture content in the coating layer can influence the refractive index, it can be assumed to be constant for tablets at the outer region of the coater close to the drum, where an in-line OCT measurement is performed from outside the coater through the holes of the drum. Moreover, for end-point determination, in-process control and closed-loop process control, it is sufficient to establish the relative increase in the coating thickness and the value associated with optimal drug dissolution performance. In Fig. 3, OCT images of the investigated commercial tablet coatings, acquired via the in-line OCT system, are shown. The orientation of the coating layers in the images depends on the orientation of the tablet in front of the OCT sensor. The green areas represent the sections of the layer that are classified as coatings by the algorithm. Based on these image data, the coating attributes are computed. The scale bar shows the dimension in the direction of the light beam along the z-axis. The dimension along the x-axis depends on the speed of the tablets passing in front of the light beam and on the pan speed. The coating layers of all investigated tablets were successfully detected and classified, from very thin (15 μm for RatioDolor) to very thick layers (158 μm for Pantoprazol).
Interestingly, even coating layers with lower amounts of talcum or iron oxides could be detected (see Table 1), while titanium dioxide prevented proper layer identification. Only the top layers of Pantoloc and Pantoprazol could be detected. In contrast, two layers are visible in the LM images of these tablets. Fig. 4 shows light microscopic images of all investigated tablet coatings at 200-fold magnification. For Pantoloc, both layers are in the same thickness range, while for Pantoprazol the inner layer is very thin. A potential reason could be that the amount of titanium dioxide in the inner layer of Pantoloc hinders proper detection of the second layer via OCT. Another cause could be overlap or penetration of the core and the isolation layer, making the detection of a distinct interface impossible. Table 2 summarizes the coating attributes and their relative standard deviations (RSD) based on 500 OCT detections. In addition, the mean coating thicknesses and RSDs from analysis of the LM images are provided, based on approximately 100 measurements at different positions on the tablets, including band and bend. The values for RatioDolor and Glucophage are missing since their coating layers were too thin for a reliable evaluation based on the LM images. Fig. 5 shows an example of the thickness measurements based on the LM images. The mean coating thicknesses from OCT and LM are in very good agreement for most of the tablets. Potential reasons for the relative thickness deviation between OCT and LM are the difference between the estimated and the actual refractive indices and the challenge of manually selecting the correct interfaces in the LM images. The deviation is below 5% for all tablets except Pharm 2 and Pharm 3, which show deviations in the range of 10%. Assessing the measurements in more detail reveals that the majority of thicker LM readings stem from positions far away from the tablet center. In contrast, OCT acquires its data more often from the top of the tablets due to their orientation in the drum. We thus conclude that for these tablets (Pharm 2 and 3) the coating thickness is not evenly distributed. For Pantoloc and Pantoprazol, only the top-layer values are shown in Table 2.
The RSD of the coating thickness is below 15% for most tablet samples. It has been shown in the literature that the coating thickness can vary greatly between tablets (inter-tablet variability) (Sacher et al., 2019; Wahl et al., 2019; Lin et al., 2017a) due to the randomly distributed amount of spray deposited on each tablet and, to a certain extent, to the uneven surface of the tablet cores. For two tablets (Glucophage and Thrombo ASS), the RSD is higher. Since the formulations of these two tablets are different, yet similar to those of other tablets with lower RSD, their broad RSD seems to stem from the process rather than the formulation. Due to the relatively small number of measurements, the RSD based on LM can only be used to estimate tendencies in the thickness variation. These are, however, comparable to the OCT results.
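For reference, the RSD used throughout is simply the sample standard deviation expressed as a percentage of the mean. A minimal sketch (not code from the study; the input values below are hypothetical thickness readings):

```python
def rsd_percent(values):
    """Relative standard deviation: sample standard deviation as a
    percentage of the mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * variance ** 0.5 / mean
```

For example, three hypothetical thickness readings of 90, 100 and 110 μm yield an RSD of 10%.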
The coating homogeneity, which represents the number of dark pixels related to the overall number of pixels in the coating layer, varies greatly among the investigated tablets and seems to be related to the type of coating and ingredients. Products that contain HPMC have high homogeneity values, while the Eudragit-based coatings have low homogeneity (as reflected by the darker images in Fig. 3). The coatings of Ratiodolor and Glucophage are very thin. For a thin coating layer, the air-coating and coating-core interfaces, which also consist of dark pixels, can influence the computation of homogeneity. Therefore, these two tablets were excluded from the comparison. The homogeneity acquired via OCT is influenced by a change in the refractive index within the coating layer, which can be caused by any kind of inclusions and a different amount of residual solvent during drying and curing that are affected by both the formulation and the process. OCT has the potential to analyze and detect deviations in the coating homogeneity from a predefined set-point, and to support formulation development and process control. The dark pixels in the images of the osmotic release products Pharm 1 and Pharm 2 can be due to porosity in these coating formulations. Therefore, the homogeneity could also be used as an indicator of the layer porosity.
The LM images in Fig. 4 provide further information about the internal coating structure, although they show a very limited area. Pantoloc, with the highest homogeneity computed via OCT, has a very smooth and uniform coating layer, while Zinkorotat POS and Thrombo ASS, with the lowest OCT homogeneity, have the most irregular coating appearance. Thus, there is a clear correlation between visual appearance and the measured OCT homogeneity. In the LM images of Pantoprazol, some grains or inclusions can be found, which result in a lower homogeneity compared to Pantoloc. This is in agreement with OCT. However, the LM images of Pharm 1, Pharm 2 and Pharm 3 leave room for interpretation. While the OCT homogeneities of Pharm 1 and Pharm 3 are close to that of Pantoloc, the LM images indicate a layered coating structure similar to Zinkorotat. This may reflect layer-wise application and drying during the coating process. To the naked eye, the coating structure of Pharm 2 looks similar, although its homogeneity is much lower. A potential cause of grain-like or irregular coating areas that is not clearly related to the formulation or the process is the preparation procedure, which can induce delamination of the coating. Compared to coating structure analysis via LM, OCT provides a more objective representation based on the number of pixels darker than a defined threshold. Nevertheless, the homogeneity value computed by OCT can be influenced by various sources (real changes in homogeneity due to process or formulation, but also image artefacts due to broader dark areas at layer interfaces) and therefore more research on this topic is needed.
The coating roughness is 3-4 μm for most of the investigated samples. Only the thin coating layers of Ratiodolor and Glucophage show less roughness. This is in good agreement with the observation that the coating roughness increases with increasing process time (Sacher et al., 2019;Seitavuopio et al., 2006). The roughness of Pantoprazol's very thick coating is still about 4 μm. This correlates with Seitavuopio et al. (2006), who found that the coating roughness increases in the beginning of the coating process and remains stable after some period of process time leading to no further increase for thick coatings. The absolute roughness values are within the same range as those obtained via contact profilometry (Markl et al., 2018) and laser scanning microscope (Dohi et al., 2016).
The OCT measurements were repeated at various rotation rates to investigate the influence of tablet speed on the quality of the results. As a lower limit, a rotation rate of 20 rpm was selected. At this speed, there is still enough movement in the tablet bed to induce sufficient exchange of tablets in front of the OCT sensor. The upper limit of 40 rpm still provides a bed of tablets, while a further increase in the drum speed leads to cascading and tumbling of tablets in the drum. Depending on the size and shape of tablets, slightly lower and higher drum speeds are possible. The investigated range also represents typical operation states of conventional coaters. Generally, faster movement of the tablets in front of the OCT sensor leads to distortions in the acquired images (Markl et al., 2014). This effect can be overcome by decreasing the time between two A-scan acquisitions and obtaining more information for each tablet. In this study, exactly the same settings of the OCT system were used for all rotation rates. The results in Table 2 indicate that a change in the rotation rate within the described limits does not influence the OCT readings. The measured coating thickness remained within a range of ±1 μm. Coating homogeneity and roughness are not affected significantly by an altered drum rotation either, meaning that the operator can run OCT at a wide range of coater speeds using the same settings.
Conclusion
When applying OCT for in-line monitoring of industrial coating processes, it is essential to establish which type of coatings can be measured. In this study, several types of tablets containing common coating polymers and plasticisers were investigated using an in-line OCT system. All tested materials could be measured and coating attributes computed in real time, from very thin coating layers of about 15 μm to very thick coating layers of over 150 μm. Although coatings with high pigment scattering were excluded from the study, it was possible to detect the coating layer interfaces for drug products with fewer pigments. In addition to the coating thickness, homogeneity and roughness could be computed based on the OCT images. Homogeneity is linked to the coating formulation, while roughness seems to be influenced mainly by the process. OCT has high potential to support formulation development and process optimization via coating quality monitoring. Compared to LM imaging, OCT can provide statistically more representative information about the coating structure much faster. However, deeper investigation of both of these coating attributes is required to understand the link between coating quality and effects such as drying and curing.
The results of this study are valid for coating processes on the industrial scale since the conditions in the miniaturized drum set-up mimicked large-scale drum coating. Therefore, OCT is applicable for end-point determination and in-process monitoring of a wide range of pharmaceutical coatings. To maximize the use of OCT capabilities, future work must concentrate on implementation in industrial-scale processes and combination with process control systems. Development of closed-loop control based on a correlation between coating attributes monitored by OCT and critical quality attributes of the drug product will support manufacturing of precise coating applications.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Hypermethylation and down-regulation of DLEU2 in paediatric acute myeloid leukaemia independent of embedded tumour suppressor miR-15a/16-1

Background: Acute Myeloid Leukaemia (AML) is a highly heterogeneous disease. Studies in adult AML have identified epigenetic changes, specifically in DNA methylation, associated with leukaemia subtype, age of onset and patient survival, which highlights this heterogeneity. However, only limited DNA methylation studies have elucidated any associations in paediatric AML.

Methods: We interrogated DNA methylation in a cohort of paediatric AML FAB subtype M5 patients using the Illumina HumanMethylation450 (HM450) BeadChip, identifying a number of target genes with p < 0.01 and Δβ > 0.4 between leukaemic and matched remission samples (n = 20 primary leukaemic, n = 13 matched remission). Among the genes identified, we interrogated DLEU2 methylation using locus-specific SEQUENOM MassARRAY® EpiTYPER® analysis and an increased validation cohort (n = 28 primary leukaemic, n = 14 matched remission, n = 17 additional non-leukaemic and cell lines). Following methylation analysis, expression was assessed in the same patient samples using singleplex TaqMan gene and miRNA assays and relative expression comparisons.

Results: We identified differential DNA methylation at the DLEU2 locus, encompassing the tumour suppressor microRNA miR-15a/16-1 cluster. A number of HM450 probes spanning the DLEU2/Alt1 transcriptional start site showed increased levels of methylation in leukaemia (average over all probes >60%) compared to disease-free haematopoietic cells and patient remission samples (<24%) (p < 0.001). Interestingly, DLEU2 mRNA down-regulation in leukaemic patients (p < 0.05) was independent of the embedded mature miR-15a/16-1 expression. To assess the prognostic significance of DLEU2 DNA methylation, we stratified paediatric AML patients by their methylation status. A subset of patients recorded methylation values for DLEU2 akin to non-leukaemic specimens, specifically patients with sole trisomy 8 and/or chromosome 11 abnormalities. These patients also showed miR-15a/16-1 expression similar to non-leukaemic samples, and a potentially improved disease prognosis.

Conclusions: The DLEU2 locus and the embedded miRNA cluster miR-15a/16-1 are commonly deleted in adult cancers and shown to induce leukaemogenesis; in paediatric AML, however, we found the region to be transcriptionally repressed. In combination, our data highlight the utility of interrogating DNA methylation and microRNA together with underlying genetic status to provide novel insights into AML biology.
Background
Acute myeloid leukaemia (AML) is the third most common form of leukaemia in children, typically characterised by the rapid proliferation of primitive haematopoietic myeloid progenitor cells [1]. Paediatric AML is a highly heterogeneous disease, which presents a major barrier towards the development of accurate disease classification, risk stratification and targeted therapies within the clinic. The French-American-British (FAB) [2] and more recently World Health Organisation (WHO) [3] classifications of leukaemia take into account cell morphology, cytogenetic aberrations and common genetic lesions. However, not all patients fall into these well-defined categories. Additionally, the recurrent chromosomal and genetic lesions frequently found in AML fail to induce leukaemogenesis and do not explain the recognised clinical heterogeneity [4,5].
One of the hallmarks of nearly all human cancers is the disruption of the epigenetic profile, including gross aberrations in DNA methylation. Increasing evidence in adult AML has indicated that epigenetic events play critical roles in the onset, progression, and outcome of AML [6] and may help tailor disease treatment. However, the need for similar elucidations in childhood disease is paramount. Aberrant methylation of cytosine residues at palindromic CpG sites (often clustered in dense CpG 'islands') near gene promoter regions is widely studied in carcinogenesis and haematological malignancies [6,7]. It is now well established that elevated DNA methylation is an important mechanism of gene transcriptional inactivation [8,9] and genes such as ESR1, IGSF4 and CDKN2B/p15 are epigenetically silenced in adult leukaemia [6]. Previous studies have subdivided adult AML into 16 epigenetic sub-groups based on DNA methylation signatures, correlating with patient clinical outcome and distinct from both normal haematopoietic cells and normal stages of myeloid differentiation [4]. Despite such emerging findings in an adult context, the utility of individual DNA methylation disruptions in paediatric AML has yet to be fully evaluated [6].
MicroRNA (miRNA) represent an alternative epigenetic regulator, having been implicated in the regulation of critical gene expression networks in plants and animals. The role of miRNA in haematopoiesis, cancer and disease is also beginning to be appreciated [10,11]. The global influence of individual miRNA on the genome is difficult to dissect, as miRNA can modulate the expression of hundreds of genes, and each gene can harbour binding sites for several miRNA [12]. Human miRNA are initially transcribed (pri-miRNA), and processed by several complexes to form a 70 bp hairpin-loop (pre-miRNA) [13]. After successive enzymatic steps, a miRNA:miRNA* complementary duplex is formed where the 'functional' strand is combined with RISC (RNA Induced Silencing Complex) and Argonaute proteins to guide, and inhibit, specific target messenger RNA (mRNA) through base pair recognition [14,15]. However, the miRNA transcriptome is becoming increasingly complex, emphasised by Next Generation Sequencing (NGS) technologies. NGS has highlighted that alternate miRNA* transcripts, as well as miRNA sequence variants (isomiRs [16]) may play a biological role, similar to their canonical miRNA relatives [17,18].
Links between miRNA deregulation and cancer diagnosis were first identified in adult Chronic Lymphocytic Leukaemia (CLL), where the loss or down-regulation of the tumour-suppressing miRNA cluster miR-15a/16-1 directly caused leukaemic transformation [19,20]. At present, no such association has been identified for childhood leukaemia. Expression profiling of disease-associated miRNA in paediatric patients has to date only distinguished leukaemias of different lineages and differentiated rearranged AMLs within a limited number of cytogenetic subtypes [21,22]. Paediatric MLL can be distinguished from others by differentially expressed miR-126, miR-146a, miR-181a/b/d, miR-100, miR-21, miR-196a/b, miR-29 and miR-125b [21]. However, concordance among studies is often low and the mechanism of deregulation is often unknown [22,23].
Genes encoding miRNA can be regulated epigenetically in a similar manner to protein coding genes [22]. Studies have demonstrated epigenetically regulated miRNA in adult AML, including hypermethylation and down-regulation of miR-124a and associated deregulation of target mRNA EVI1, CEBPA and CDK6 independent of diagnostic cytogenetic subtype (reviewed in [22]). Additionally, miR-193a targeting KIT, and miR-14b targeting CREB have been identified in adult investigations as specifically controlled by DNA methylation (reviewed in [22]). However, the identification of DNA methylation and miRNA expression connections in paediatric leukaemia is lacking.
Paediatric AML has distinct cytogenetic and clinical features relative to its adult counterpart [5,21,24-26]. Therefore, there is a critical need to improve our understanding of the biology of childhood leukaemia as a separate entity, distinct from adult disease. Cognisant of this, we aimed to identify differential DNA methylation within paediatric AML on a genome scale using defined clinical subtypes and age-matched controls. We identified a number of significantly altered DNA methylation loci, with associated gene and miRNA expression changes, between paediatric AML and non-leukaemic counterparts. Specifically, we describe here the epigenetic deregulation of DLEU2, which has associated alterations in downstream miR-15a/16-1 miRNA cluster expression.
Results and discussion
The DLEU2 gene is specifically hypermethylated and repressed in paediatric AML subtype M5

The FAB subtype M5 (monocytic/blastic leukaemia) is a distinct subtype with characteristic chromosomal abnormalities including t(8;16), +8 and various translocations involving 11q23 and the MLL locus such as t(9;11), t(10;11), t(11;19) and others [27]. AML subtype M5 also has a high proportion of cytogenetically normal (CN-AML) patients [27], and those with complex karyotypes [28]. A combination of these factors adds to the overall unfavourable outcome of paediatric M5 diagnosis [29]. Genome-scale methylation profiling of AML M5 bone marrow samples identified 3,352 significantly differentially methylated probes (DMPs) between paediatric AML FAB M5 (n = 20) and matching non-leukaemic (n = 17) samples. Applying more stringent feature selection criteria of an adjusted p-value <0.01 and Δβ of >0.4 reduced the number of DMPs to 137 (Additional file 1).
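The feature-selection step described above amounts to a simple filter over per-probe statistics. The sketch below is illustrative only: the probe records other than cg05394800 are hypothetical stand-ins, and the actual analysis was performed on HM450 array data with adjusted p-values from a differential methylation pipeline:

```python
# Hypothetical probe records: (probe_id, adjusted_p, delta_beta), where
# delta_beta is the leukaemic-minus-non-leukaemic difference in methylation beta.
def select_dmps(probes, p_cutoff=0.01, delta_beta_cutoff=0.4):
    """Stringent feature selection: keep probes with adjusted p-value below
    the cutoff and an absolute beta difference above the cutoff."""
    return [probe_id for probe_id, adj_p, d_beta in probes
            if adj_p < p_cutoff and abs(d_beta) > delta_beta_cutoff]

# Example: only the first (real probe ID, invented statistics) passes both cuts.
candidates = [("cg05394800", 0.001, 0.55),
              ("cg_hyp1", 0.02, 0.60),    # fails the p-value cut
              ("cg_hyp2", 0.001, 0.30)]   # fails the delta-beta cut
```

Applied to the full HM450 result set, this kind of two-criterion filter is what reduced the 3,352 significant probes to the 137 reported DMPs.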
The list of DMPs included several localising to the long non-coding RNA DLEU2 and the embedded miRNA cluster miR-15a/16-1 [19,20], previously implicated in adult leukaemia [19,20,30]. To date, disruption of this region has not been observed in paediatric cancers and as such we chose to focus on DLEU2 in subsequent analysis. A total of three DLEU2 DMPs had an adjusted p-value <0.01 and Δβ of >0.4 (cg05394800, cg20529344, cg12883980). These probes were located within three CpG islands (chr13:50,690,000-50,708,000; UCSC human hg19 assembly) at the DLEU2/Alt1 transcriptional start site (TSS) and 'north shore' (Figure 1), a region up to 2 kb upstream from the DLEU2/Alt1 TSS CpG island under investigation [31]. Henceforth this will be referred to as the DLEU2 promoter.
We observed a significant down-regulation of DLEU2 gene expression in paediatric AML (0.07 Fold Change (FC); p = 0.014), and a significant inverse correlation between promoter DNA methylation and gene expression levels (p = 0.0001, Additional file 7). Recent studies in adult CLL have also identified a negative correlation between DLEU2 promoter methylation (DLEU2/Alt1) and gene expression [32]. Interestingly there was no change in gene expression for any other genes in this region (TRIM13 = 1.75 FC; DLEU1 = 1.05 FC. Figure 2A).
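Relative expression values like these fold changes are conventionally derived from qPCR Ct values via the 2^-ΔΔCt method, the normalization used for the expression comparisons in this study. A minimal sketch with invented Ct values (the function name is ours):

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the comparative Ct (2^-DDCt) method:
    dCt  = Ct(target) - Ct(endogenous reference)
    ddCt = dCt(sample) - dCt(calibrator/control)
    FC   = 2 ** -ddCt."""
    ddct = ((ct_target_sample - ct_ref_sample)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)
```

With invented values, a leukaemic sample whose target amplifies two cycles later than in the control (relative to the reference gene) gives a fold change of 0.25, i.e. four-fold down-regulation, analogous to the DLEU2 repression reported here.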
DLEU2 and embedded miR-15a/16-1 are regulated independently in paediatric AML
The miR-15a/16-1 cluster has been described as having potent tumour suppressor activity, targeting numerous oncogenic and cell cycle regulatory genes [19,33]. The cluster is embedded within intron 4 of DLEU2, and it has been speculated that its expression is driven by the DLEU2 promoter [20,30]. However, we found no correlation between DLEU2 expression and miR-15a/16-1 expression in paediatric AML, nor down-regulation of the miR-15a/16-1 miRNA cluster in relation to increasing DLEU2 promoter DNA methylation (Figure 2B). In contrast to previous reports for adult leukaemia, no significant change in mature miR-16 expression was observed between paediatric AML and control samples, a result we have reported elsewhere [34]. These observations are independent of the homologous miR-16 cluster embedded in SMC4 on chromosome 3q26, which shows no significant expression or DNA methylation changes in association with AML (Additional file 8). Similar results have been reported for adult CLL [32]. Taken together, this data suggests an alternate mode of regulation for the miR-15a/16-1 cluster, outside of the DLEU2 promoter region.
Interestingly, recent research has indicated that the processing mechanisms of miRNA may be affected by cancer, such that mature miRNA expression becomes disassociated from precursor miRNA levels, and also from the levels of the host gene [35]. We found the miR-15a/16-1 primary precursor transcript is in fact expressed up to three-fold higher in AML patients compared to non-leukaemic counterparts (Figure 2B). The increase in primary transcript appears to correspond to an increase in mature miR-16-1* (2.52 FC) and miR-15a* (2.2 FC) expression, and a moderate increase in miR-15a (1.5 FC). However, individual patient samples show differential degrees of over-expression of one, or all, of the mature species despite the common precursor (Additional file 9).

Figure 2. Gene, miRNA and precursor miRNA expression in paediatric AML compared to non-leukaemic samples, including DLEU2 and the embedded miR-15a/16-1 on chromosome 13q14. The leukaemic group refers to diagnostic bone marrow from paediatric patients. The non-leukaemic group consists of CD-sorted cell populations (CD19+, CD33+, CD34+, CD45+) and patient remission specimens, and is represented by the dashed line at Y = 1. Fold change (FC) is plotted using normalized data and the 2^-ΔΔCt method ± SD, showing the fold change calculated from the means of each group. A. Gene expression, including DLEU1, DLEU2 and TRIM13, in paediatric AML (n = 10) compared to non-leukaemic (n = 13) expression. DLEU2 shows a significant down-regulation in AML compared to non-leukaemic expression (0.07 FC, p = 0.014, represented by **), whereas there is no significant change in expression for TRIM13 or DLEU1. B. Mature microRNA expression, including primary precursor transcript (PRI) and alternate miRNA expression (*), from the miR-15a/16-1 cluster embedded within DLEU2, for paediatric AML (n = 28, including the 10 used in Figure 2A) compared to non-leukaemic specimens (n = 30, including the 13 used in Figure 2A). The miR-15a/16-1 PRI transcript is expressed 3.03-fold higher than in non-leukaemic samples. Additionally, miR-16-1* (2.52 FC), miR-15a* (2.24 FC) and miR-15a (1.5 FC) show increased expression in paediatric AML. No significant change in expression was observed for miR-16 (0.94 FC).
Canonical miR-15a/16-1 microRNA species target numerous oncogenic and integral cell cycle regulatory genes such as BCL-2, MCL-1, CCND1, CDK6, BMI-1, RASSF5, IGSF4, c-MYB and WNT3A [33,36-42]. To identify the potential significance of increasing alternate miR-15a/16-1 transcripts over their canonical counterparts in paediatric AML, we investigated miRNA target genes using prediction tools, and undertook gene ontology analysis. We found miR-16-1* is three-fold enriched for targeting components of the RNA processing and splicing machinery such as hnRNP regulators, SRRM1 and NUDT2. Down-regulation of these genes through increased expression of a targeting miRNA may contribute to the miRNA/host gene disassociation in DLEU2/miR-15a/16-1 expression (Additional file 10). One target of miR-15a* was found to be ADORA2A, an important G-protein coupled receptor critical in tissue-specific and systemic inflammatory responses [43,44]. We have identified down-regulation of ADORA2A expression within paediatric AML, not mediated through DNA methylation, but potentially through up-regulation of the miR-15a/16-1 cluster (data not shown). Moreover, miR-15a* and miR-16-1* are 60-90-fold enriched for targeting genes involved in the intrinsic apoptotic pathway such as BCL2L11, PPIF and DNM1L (Additional file 10). Inhibition of these genes by the targeting miRNA can potentially encourage continued cell growth and perpetuation of AML by negating apoptotic signalling. The over-expression of alternate mature miRNA in paediatric AML may thus have consequences for integral cell cycle pathways, but through different mechanisms to their canonical counterparts.
DLEU2 interrogation identifies a novel subclass of paediatric AML
In addition to assessing the association of DLEU2 promoter DNA methylation and miR-15a/16-1 miRNA expression with paediatric AML, we also assessed correlations with distinct clinical and diagnostic variables. We found no significant association between DLEU2 methylation/expression with sex, age of disease onset or relapse status nor common gene abnormalities such as FLT3 or MLL, as has been found in a number of other association studies [6,45]. DLEU2 methylation clustering based on diagnostic FAB subtype revealed a wide range of values (M5a: 16-91%; M5b: 48-90%; 'Other' subtypes: 51-93%. Additional file 11), indicating mean methylation in isolation does not stratify paediatric AML subtype. Additionally, we found there to be no correlation between percentage of leukaemic blasts at patient diagnosis and the DNA methylation status at the DLEU2 promoter region (data not shown).
Cases carrying 11q rearrangements are the most heterogeneous of paediatric AML [27,29,47], linked to 50-104 translocation fusion partners to date [29,46,48]. Trisomy 8 is a frequently reported aberration in adult and paediatric AML [29]. However little is known about the gain of chromosome 8 in isolation and its relationship to disease onset. As such this has been speculated to be a disease modulating secondary event [49]. Lower DNA methylation in the t(11)/+8 subgroup may potentially confer a better prognosis, as it is well documented that hypermethylation of the DLEU2 region is involved in leukaemic transformation in adults [19,20,30]. Analysis of the t(11)/+8 subgroup in isolation also revealed no significant miRNA expression differences relative to non-leukaemic samples ( Figure 4A), and it is well documented that miR-15a/16-1 abnormalities are also associated with leukaemic transformation in adults [19,20]. We additionally identified a trend towards decreased risk of relapse in t(11)/+8 subgroup compared to other subtypes, and a trend towards better survival outcomes (Additional file 12). These analyses combined may elucidate a connection between DLEU2 promoter DNA methylation ( Figure 3B), miRNA expression ( Figure 4A) and prognostic outcomes for this subgroup of paediatric AML (Additional file 12).
The classification of t(11)/+8 cases as an independent subgroup revealed a significant increase in pri-miR-15a expression in traditional FAB subtype M5b from nonleukaemic (20.29-fold, p < 0.001; Figure 4C) and from M5a/M1/M2/M4 ( Figure 4B). The increase translates to a >2-fold increase in miR-15a, 4.87-fold increase in miR-15a*, and 9.86-fold increase in miR-16-1* expression. Based on DLEU2 DNA methylation status, in conjunction with miRNA expression analysis, we speculate that pri-miR-15a expression alone may be a useful biomarker to distinguish M5b FAB subtype from all other AML subtypes.
Conclusions
Previous research within paediatric AML has shown that well-defined cytogenetic subgroupings exhibit a wide range of genomic and epigenomic heterogeneity. Linking specific epigenetic features to clinical parameters has the potential to identify pathological drivers of disease and develop enhanced molecular approaches for diagnosis, prognosis and refinement of treatment. Profiling epigenetic regulators has provided clinically relevant biomarkers for adult cancers; however generally the same cannot be said of childhood cancers. Our analysis has identified hypermethylation induced down-regulation of the DLEU2 gene in paediatric AML. The related expression changes of the embedded miR-15a/16-1 microRNA cluster in paediatric AML has the potential to contribute to leukaemic transformation, with a switch from canonical to alternate miRNA family members, which in turn may modulate the expression of downstream regulatory genes. Treating paediatric AML model systems with epigenetic modifying drugs will allow a more comprehensive analysis towards the clinical applications and potential patient-focussed therapeutic interventions for children. Our results highlight the need for further specific interrogation of paediatric AML subtypes as distinctive biological entities, separate from adult disease. Further studies utilising larger patient cohorts are required to explore the complex interplay between the epigenetic regulation of genes harbouring microRNA, taking into account alternate miRNA transcript expression.
Samples
This study was approved by the Royal Children's Hospital (RCH), Melbourne, Ethics Committee (HREC reference #27138E). Samples used consisted of snap-frozen bone marrow specimens and archived bone marrow biopsies taken at diagnosis, patient remission/follow-up or at relapse from paediatric acute myeloid leukaemia (AML) cases. The diagnosis of AML was established according to the criteria of the French-American-British (FAB) classification by standard morphological and cytological methods. Mutations in the MLL, FLT3, RUNX1 and WT1 genes were assessed in a small number of patients as part of the initial assessment. No patient had any chromosome 13 deletions or abnormalities listed in their clinical diagnoses. The median percentage of leukaemic blasts at patient diagnosis was 88% (95% CI: 67-93).
Patient samples used in our study consisted of archived, air-dried bone marrow smear slides. The utility of these samples for DNA methylation and miRNA expression analysis has been outlined previously [34,50]. Cryogenically frozen patient bone marrows were also used where available. All patients were <18 years of age. We chose to focus specifically on the FAB subtype M5 (a and b) as it is a common subtype. Our cohort included comparable ratios between males and females, and also similar numbers of children that relapsed or not (Additional file 5).

Figure 3 Interrogation of paediatric AML by clinically defined cytogenetic and FAB subtypes alongside DLEU2 promoter methylation. Paediatric AML diagnostic cytogenetic and subtyping analyses were specifically investigated, including those with known gene abnormalities and CN-AML cases. We investigate here the DNA methylation of the DLEU2/Alt1 promoter region, reported to 95% CI. DNA methylation values range from 0.0 (0%, no detected methylation) to 1.0 (100%, fully methylated). The leukaemic group refers to diagnostic bone marrow from paediatric patients. The non-leukaemic group consists of CD sorted cell populations (CD19+, CD33+, CD34+, CD45+) and patient remission specimens. A. DNA methylation for DLEU2 HM450 probes (cg12883980, cg20529344, cg5394800) according to cytogenetic type, showing heterogeneous DNA methylation outcomes. Of note, a subset of patients with observable chromosome 11 and trisomy 8 abnormalities have reduced DNA methylation compared to all other AML abnormalities. B. A subset of DLEU2 DNA methylation results for paediatric AML chromosome 11 and trisomy 8 abnormality patients from Figure 3A.
Control samples from bone marrow of unrelated and unaffected children were analysed in parallel, as well as multiple cell lines, including adult leukaemia (REH [51], CCRF-CEM [52]) and paediatric AML (Kasumi-1 [53], THP-1 [54], MV-4-11 and AML-193 [55]) obtained from American Type Culture Collection (ATCC) and subjected to characterisation using the ATCC Proficiency Standard program. Fluorescent Activated Cell Sorted (FACS) isolated haematopoietic progenitor cell populations from unrelated and unaffected paediatric donors (CD19+, CD33+, CD34+ and CD45+ populations, from herein known as CD sorted cells) were also used in this study. Our 'non-leukaemic' group consisted of all individual CD sorted cell populations as well as non-leukaemic remission and follow-up slides and bone marrow from paediatric patients. We compared the DLEU2/Alt1 DNA methylation for all 'non-leukaemic' specimens, and found no significant differences across these non-leukaemic samples (Additional file 13).

Figure 4 Gene, primary miRNA and miRNA* expression for paediatric AML defined through clinical classification and DLEU2 methylation subtyping. Mature microRNA expression, primary precursor transcript (PRI) and alternate miRNA isoform (*) expression from the miR-15a/16-1 miRNA cluster embedded within DLEU2 for paediatric AML patients (n = 26: 12 M5a, 5 M5b, 4 M1/M2/M4, 5 t(11)/+8 sub-group), all compared to non-leukaemic specimens (n = 30). Leukaemic groups refer to diagnostic bone marrow from paediatric patients. The non-leukaemic group consists of CD sorted cell populations (CD19+, CD33+, CD34+, CD45+) and patient remission specimens. Linear Fold Change (FC) is plotted using normalized data and the 2^-ΔΔCt method ± SD, and shows the fold change calculated from the means of each group. DLEU2 gene expression is down-regulated in all subtypes. A. Fold change in expression comparing non-leukaemic to t(11)/+8 subtype. No significant differences in RNA expression are observed between non-leukaemic specimens and this subgroup. B. Fold change in expression comparing patients from subtype M5a to M1/M2/M4. DLEU2 is down-regulated in all subtypes (as previously described in Figure 2), and the primary precursor for miR-15a also appears down-regulated (non-significant). M1/M2/M4 groupings do not show any mature miRNA expression changes (defined as >2-fold difference from non-leukaemic). Subtype M5a shows a 2.1-fold (±0.8 SD) increase in miR-15a* and a 2.81-fold (±0.4 SD) increase in miR-16-1*. C. FAB subtype M5b shows up-regulation of miR-15a PRI compared to non-leukaemic expression (20.29-fold (±0.6 SD), p < 0.001), with miR-15a* (4.87-fold ±1 SD) and miR-16-1* (9.86-fold ±1 SD) also up-regulated. M5b additionally shows a significant up-regulation of miR-15a PRI compared to t(11)/+8 sub-group samples (p < 0.05), and also from M5a and M1/M2/M4 sub-groups (p < 0.001).
DNA extraction, quality control and methylation analysis
Genomic DNA was extracted using the phenol/chloroform method and bisulphite converted in accordance with manufacturer's protocols, as reported previously [23]. DNA extracts were checked for quality and quantity using a NanoDrop® ND-1000 spectrophotometer (Thermo Fisher Scientific Inc., Scoresby, Victoria, Australia). All DNA was stored at −80°C.
Bisulphite conversion of genomic DNA was performed using the MethylEasy Xceed Bisulfite Modification Kit (Human Genetic Signatures, Sydney, AUST). The converted samples were processed by the Australian Genome Research Facility (AGRF, Melbourne, Australia) and analysed using the Illumina HumanMethylation450 (HM450) BeadChip arrays according to the manufacturer's protocol. Illumina GenomeStudio software was used to extract the raw M-values and probe intensities for downstream processing. Samples with a probe detection value of p < 0.05 were retained. Data were quantile normalized using lumi [56] and analysed using LIMMA [57]. All analysis was performed using the R statistical software package on M-values that were converted to β-values (0 = unmethylated, 1 = fully methylated) for reporting and biological interpretation. To eliminate sex bias, probes hybridizing to sex chromosomes were filtered out, leaving 366,553 probes common to all samples in the final dataset. Further filtering was based on the degree of difference in β between sample groups, indicated as Δβ (the difference in group mean β-values).

SEQUENOM MassARRAY® EpiTYPER® was used to measure locus-specific methylation. Sixty-one samples were analysed, including the 42 analysed on the HM450 platform (Additional file 5), an additional 3 cell lines, 8 leukaemic and 8 non-leukaemic whole bone marrow aspirate samples. Primers for analysis were designed using SEQUENOM EpiDesigner software (www.epidesigner.com) and the sequences for these are listed in Additional file 14. Gene ontology and pathway analysis of genes associated with significantly altered DNA methylation probes were analysed through the use of IPA (Ingenuity® Systems, www.ingenuity.com) and GOrilla [58].
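The M-value-to-β conversion and the Δβ filtering step described above can be sketched in a few lines of NumPy. This is only an illustration, not the authors' R/lumi/LIMMA pipeline; the function names, the toy arrays, and the 0.2 cut-off are assumptions:

```python
import numpy as np

def m_to_beta(m):
    # Invert M = log2(beta / (1 - beta)):  beta = 2^M / (2^M + 1).
    m = np.asarray(m, dtype=float)
    return 2.0 ** m / (2.0 ** m + 1.0)

def delta_beta(beta_a, beta_b):
    # Per-probe difference of group mean beta-values
    # (rows = samples, columns = probes).
    return np.mean(beta_a, axis=0) - np.mean(beta_b, axis=0)

# Toy data: 2 samples x 2 probes per group.
leukaemic = np.array([[0.80, 0.10],
                      [0.90, 0.20]])
non_leukaemic = np.array([[0.20, 0.15],
                          [0.30, 0.25]])
db = delta_beta(leukaemic, non_leukaemic)   # [0.6, -0.05]
selected = np.abs(db) >= 0.2                # [True, False]
```

An M-value of 0 corresponds to β = 0.5 (equal methylated and unmethylated signal), which is why M-values are preferred for statistics and β-values for interpretation.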
RNA extraction, quality control and expression analysis
Small RNA extraction was performed using TRIzol® (Ambion®) for patient bone marrow samples, mononuclear cells and cell lines in accordance with the manufacturer's instructions, or the High Pure miRNA Isolation Kit (Roche) for archived slide samples as previously described [34]. Before RNA extraction of fresh bone marrow aspirates, samples were processed using Ficoll-Paque™ (GE Healthcare, Piscataway, USA) to isolate the mononuclear cell population. This was immediately cryo-frozen or stored in RNAlater® (Ambion® by Life Technologies, Mulgrave, Victoria, Australia) for later extraction. The concentration and purity of all RNA samples were assessed using the NanoDrop® ND-1000 spectrophotometer (Thermo Fisher Scientific Inc., Scoresby, Victoria, Australia). All RNA was stored at −80°C.
Seventy samples were used in the interrogation of miRNA expression, 64 of which were also used for HM450 and SEQUENOM methylation analysis (Additional file 5). The TaqMan® MicroRNA Reverse Transcription kit and singleplex TaqMan® microRNA Assays (Applied Biosystems, Life Technologies) (assays listed in Additional file 15) were utilised according to the manufacturer's instructions before routine quantitative real-time PCR (qRT-PCR) was performed using the Applied Biosystems 7300 Sequence Detection System. A subset of 29 high-quality samples, which included all cell lines as previously described, CD19+, CD33+, CD34+, CD45+ as well as 8 leukaemic and 9 non-leukaemic patient whole bone marrows, was used for gene and pri-miR expression analysis. This was due to longer RNA species often being degraded in archived specimens [59,60]. The SuperScript® VILO™ cDNA Synthesis Kit (Life Technologies) and TaqMan® gene expression assays were used for gene/pri-miR expression as per the manufacturer's instructions (Additional file 15 and Additional file 16) before performing qRT-PCR analysis according to the manufacturer's instructions. All qRT-PCR samples were analysed in duplicate.
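Relative expression from qRT-PCR Ct values such as these is commonly summarised with the Livak 2^-ΔΔCt method, as used in this study; a minimal sketch follows, with hypothetical Ct values and function names (not study data):

```python
def livak_fold_change(ct_target, ct_endogenous,
                      ct_target_calibrator, ct_endogenous_calibrator):
    """Relative expression by the Livak 2^-ddCt method.

    Ct values are qRT-PCR cycle thresholds; the calibrator is the
    reference group (here, that would be the non-leukaemic mean).
    """
    d_ct_sample = ct_target - ct_endogenous
    d_ct_calibrator = ct_target_calibrator - ct_endogenous_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Example: the target crosses threshold 2 cycles earlier (relative to the
# endogenous control) in the sample than in the calibrator -> 4-fold up.
fc = livak_fold_change(25.0, 20.0, 27.0, 20.0)   # -> 4.0
```

A fold change of 1.0 indicates no difference from the calibrator; values below 1.0 indicate down-regulation.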
All analyses used the combined average of non-leukaemic primary bone marrow tissue and CD sorted non-leukaemic cell expression values as the Reference Group (calibrator). Fold Change (FC) was calculated using the Livak method of 2^-ΔΔCt [61], plotting fold change and associated p-values. The fold change reported here is the difference of the means of each group, and a fold change of >2 between disease and non-leukaemic groups was considered noteworthy. microRNA gene target prediction was assessed using microRNA.org [62,63] and miRWalk [64]. The top 500 target genes for each miRNA (Additional file 17) were used for gene ontology/pathway analysis (Additional file 10) by IPA (Ingenuity® Systems, www.ingenuity.com) and GOrilla [58].
Baseline Subfoveal Choroidal Thickness as a Predictor for Response to Short-Term Intravitreal Bevacizumab Injections in Diabetic Macular Edema
Purpose This article aims to evaluate how the subfoveal choroidal thickness (SFCT) and best-corrected visual acuity (BCVA) respond to the intravitreal injection of bevacizumab and to assess the correlation between these changes. It will also assess the use of the baseline SFCT as a predictor for BCVA changes in eyes of treatment-naive diabetic macular edema (DME) patients. Methods This retrospective, consecutive case series comprised 59 eyes of 39 treatment-naive DME patients. Complete slit-lamp assessment, swept-source optical coherence tomography (SS-OCT) scans to measure SFCT, and BCVA measurements were performed at two stages: baseline and one month after the third monthly injection of intravitreal bevacizumab. Results Patients' ages ranged from 46.3 to 76.4 years (mean: 62.6 ± 2.3). The mean SFCT was 318 ± 82 μm at baseline, which decreased after 3 months to 300 ± 66 μm (P-value = 0.021). There was an improvement in the mean logMAR best-corrected visual acuity (BCVA) from 0.7 (decimal equivalent: 0.2) to 0.5 (decimal equivalent: 0.3) (P-value = 0.019). There was no association between SFCT changes and BCVA changes (P-value = 0.180). The Wilcoxon signed-rank test disclosed that a better BCVA improvement was related to a greater subfoveal choroidal thickness at baseline (P-value <0.00). Conclusion Eyes with a higher baseline subfoveal choroidal thickness (SFCT) attained greater BCVA improvement than eyes with a lower baseline SFCT. In addition, changes to SFCT do not appear to correlate with BCVA changes. These findings do not support using OCT SFCT changes as a prognostic factor for changes to BCVA after intravitreal bevacizumab treatment in evaluating treatment-naive DME eyes.
Introduction
Diabetic Macular Edema (DME) is a serious microvascular complication of diabetes mellitus. It is a common sight-threatening retinopathy and a significant cause of irreversible vision loss among adults with diabetic retinopathy worldwide, 1 with a global prevalence of 6.8%. 2 In DME, oxidative stress and inflammation due to hyperglycemia lead to abnormalities in retinal vasculature. This plays a major role in the pathophysiology of the disease, causing breakdown and malfunction of the blood-ocular barrier, which leads to increased retinal vascular permeability, causing plasma, protein, and lipid leakage within the macula and choroid. 3 The fluid subsequently accumulates within the retina and choroid, 4 causing impairment to the choriocapillaris (the major blood supply for the outer retina), leading to retinal ischemia, and can critically diminish visual acuity.
Reviewing choroidal changes on a macrovascular level in patients with diabetic macular edema (DME), several studies have shown increased retinal vascular permeability that led to choroidal thickening. 4 However, other studies of choroidal thickness changes in DME patients produced diverging results; some reported thinning, 5,6 while others found no changes at all. Despite this range of findings in the choroidal vasculature in DME, it all leads to the same conclusion: an abnormal choriocapillaris, ie, impaired retinal vasculature, leads to impaired vision.
With the advent of Swept-Source Optical Coherence Tomography (SS-OCT), an imaging technique that displays high-resolution cross-sectional images (introduced to clinical practice in 2012), a detailed visualisation of the retinal structures and choroid are now possible. This is due to its deeper penetration and longer wavelength compared to spectral-domain OCT (SD-OCT). 7,8 Even with the growing evidence in elucidating choroidal circulation changes in DME, 4 the correlation between the subfoveal choroidal thickness at baseline and vision response to anti-vascular endothelial growth factor (anti-VEGF) therapy has not been verified. Therefore, we studied the use of the baseline subfoveal choroidal thickness (SFCT) as a predictor for best-corrected visual acuity changes in response to bevacizumab injections, as well as the correlation between changes of best-corrected visual acuity (BCVA) and OCT findings of changes in SFCT one month after the third monthly intravitreal bevacizumab injection for treatment-naive patients with DME.
Methods
This retrospective, consecutive case series study was conducted at the Ophthalmology Department at An-Najah National University Hospital (NNUH) in Nablus-Palestine, after obtaining the ethical approval from An-Najah National University Institutional Review Board (IRB) and following the guidelines of the Declaration of Helsinki. Written and verbal informed consent was waived due to the retrospective nature of the study, the data was anonymized and maintained with confidentiality.
The population included 59 eyes of 39 patients who had been diagnosed with DME, according to the following inclusion criteria: treatment-naive patients before their first anti-VEGF injection, given only bevacizumab as a monthly anti-VEGF injection for 3 months at a standard dose of 1.25 mg (including patients with systemic conditions such as hypertension (10)), and patients determined to have clinically significant macular edema based on the criteria of the Early Treatment of Diabetic Retinopathy Study (ETDRS) guidelines. 9 We excluded patients who had received any type of medication related to diabetic retinopathy before or during the 3-month course of bevacizumab, such as steroid injection or a different anti-VEGF therapy; any patients with previous laser therapy or a history of intraocular surgery; patients with high refractive errors (> +5 or < −5, as choroidal thickness may change with a high refractive state); and any patients diagnosed with concomitant ocular disease that might affect their vision (significant cataract, glaucoma, age-related macular degeneration, uveitis, etc.). We also excluded patients with non-proliferative diabetic retinopathy.
Patients' relevant sociodemographic and clinical data were recorded, including age, sex, systemic disease, diabetes mellitus duration, baseline glycosylated hemoglobin (HbA1C), complete slit-lamp biomicroscopic examination baseline and follow-up findings, BCVA at baseline and one month after the third injection, injection dates, and length of follow-up. BCVA values were recorded in decimal units and converted to logMAR. According to the ETDRS and FrACT studies, CF (counting fingers) at 30 cm can be replaced by 0.014 in decimal, and visual acuity in the HM range (hand motion) by 0.005 in decimal. 10

Intervention

Until recently, laser therapy was the main treatment of DME, reducing the risk of blindness and increasing the opportunity for vision improvement compared with conservative management. However, these valuable effects are also associated with considerable side effects due to the destructive nature of laser photocoagulation on the retina. Recently, clinical trials of regular intravitreal injections of anti-VEGF have shown higher efficacy regarding vision preservation and decreased vision loss compared to laser photocoagulation. [11][12][13] In this study, all participants were treatment-naive before their first anti-VEGF injection, and given bevacizumab only, as a monthly anti-VEGF injection for 3 months with a standard dose of 1.25 mg.

SS-OCT [10.15.003.01] findings: SFCT was recorded at the first visit as a baseline and one month after the third bevacizumab injection. The SFCT was measured manually as a vertical line from the outer surface of the retinal pigment epithelium to the lower border of the choroid (choroid-sclera interface), by three different researchers separately to ensure the reliability and reproducibility of measurements. Each value was an average of 12 scans, with each scan centered at the fovea.
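The decimal-to-logMAR conversion and the ETDRS/FrACT off-chart substitutions described above can be sketched as follows (the function and constant names are ours, not from the study):

```python
import math

# ETDRS/FrACT decimal substitutions for off-chart acuities.
CF_DECIMAL = 0.014   # counting fingers at 30 cm
HM_DECIMAL = 0.005   # hand motion

def decimal_to_logmar(decimal_va):
    # logMAR is the base-10 log of the minimum angle of resolution,
    # i.e. logMAR = -log10(decimal acuity).
    return -math.log10(decimal_va)

# Examples: decimal 1.0 -> logMAR 0.0; decimal 0.1 -> logMAR 1.0.
assert decimal_to_logmar(1.0) == 0.0
cf_logmar = decimal_to_logmar(CF_DECIMAL)   # about 1.85
```

Higher logMAR values mean worse acuity, which is why the improvement from 0.7 to 0.5 logMAR reported below corresponds to an increase in decimal acuity.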
Statistical Analysis
All analyses were carried out with the Statistical Package for the Social Sciences (SPSS), version 21. The data were described as mean ± standard deviation, median [interquartile range], or frequency (percentage). The Kolmogorov-Smirnov test was conducted to assess the normality of the data distribution, and the data were recognised as non-normally distributed. Univariate analysis was performed to assess the influence of demographic and clinical characteristics (age, gender, systemic diseases, diabetes mellitus duration, and glycosylated hemoglobin (HbA1C)) on the continuous dependent variables (BCVA and SFCT). Additionally, the Wilcoxon signed-rank test was used to compare the findings at baseline and one month after the third injection. It was also used to assess whether baseline SFCT can predict BCVA response to bevacizumab therapy one month after the third injection. Then, using the continuous measurements (BCVA and SFCT changes), a bivariate analysis (Spearman's rank correlation coefficients) was conducted to investigate the correlation between these changes. A p-value less than 0.05 was considered statistically significant.
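As a dependency-free illustration of the bivariate step above, Spearman's rank correlation is simply the Pearson correlation of the ranks; the study itself used SPSS, so the sketch below (with assumed helper names) is purely illustrative:

```python
def rankdata(values):
    # Average ranks (1-based), with ties sharing the mean rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation computed on the ranks.
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

For paired changes in BCVA and SFCT, `spearman_rho(bcva_changes, sfct_changes)` near 0 would match the non-significant correlation the study reports.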
In univariate analysis, there was no statistically significant effect of independent variables (age, gender, systemic diseases, diabetes mellitus duration, or glycosylated hemoglobin (HbA1C)) on the dependent variables including BCVA (P-value=0.062) and SFCT (P-value=0.400).
Ocular characteristics data are shown in Table 2. After intravitreal bevacizumab injections, the mean logMAR BCVA significantly improved from 0.7 ± 1 (decimal equivalent: 0.2 ± 0.1) at baseline to 0.5 ± 0.7 (decimal equivalent: 0.3 ± 0.2) at one month after the third injection (P = 0.019). In regard to the SFCT characteristics shown in Figure 1, after three months of treatment the mean subfoveal choroidal thickness was significantly reduced to 300 ± 66 μm compared to the baseline mean value of 318 ± 82 μm (P = 0.021). Bivariate analysis showed no significant correlation between BCVA changes and SFCT changes, with a p-value of 0.18. In non-parametric univariate analysis, Wilcoxon signed-rank test results demonstrated that a greater BCVA improvement was significantly related to a higher baseline subfoveal choroidal thickness at one month after the third injection (P-value <0.00).
Discussion
This retrospective case series study aims to ascertain whether the subfoveal choroidal thickness at baseline is an indicator of treatment outcome, and to assess the correlation between the changes in BCVA and SFCT after intravitreal bevacizumab therapy in treatment-naive DME patients. The choroidal vasculature delivers nutrients to, and removes metabolic wastes from, the outer retinal layers, and therefore plays a significant role in the preservation of healthy visual function. 14 Diabetic retinopathy has the clinical characteristics of an ischemic retinopathy, causing localised hypoxia and triggering overproduction of VEGF and other angiogenic factors. This in turn triggers degeneration and edematous leakage of the choriocapillaris, leading to impaired choroidal circulation, potentially progressing to retinal dysfunction and visual impairment. The results from this study indicate that the baseline subfoveal choroidal thickness (SFCT) may prognosticate the short-term bevacizumab response. Patients with a thicker baseline subfoveal choroidal thickness were more likely to experience an improvement in visual function at the 3-month follow-up. A potential mechanism behind this finding is that a pre-existing thicker choroid may be associated with a more intact choriocapillaris, less outer retinal layer ischemia, and more functional photoreceptors compared with a thinner choroid. The findings of this study are consistent with those of Rayess et al, which concluded that baseline subfoveal choroidal thickness may prognosticate which patients with DME will experience BCVA improvement. 15 Conversely, this study showed a non-significant correlation between BCVA and SFCT changes, contradicting the Nourinia et al study, 16 which suggested SFCT changes were significantly associated with vision improvement. In our study, a small percentage of our patients showed the same results, but it did not reach statistical significance.
We attribute this difference to the small sample size in the previous study (only 20 eyes of 20 patients); therefore, these results clearly warrant further study.
There were some limitations of this study, including the retrospective design, the manual calculation of choroidal thickness from OCT scans, and the short follow-up time, which made it impossible to deduce whether these outcomes would remain significant with continued longer-term therapy. In addition, all of our patients received bevacizumab treatment due to its cost-effectiveness, 17 without taking the baseline visual acuity into consideration, disregarding the DRCR.net Protocol T. However, there were also several strengths of this study, including a relatively good sample size (59 eyes) in comparison to other studies, 6,16 and the use of SS-OCT rather than SD-OCT, which gives more accurate measurements. 7 Also, stipulating treatment-naive patients only mitigated the influence of any previous intravitreal injections on subfoveal choroidal thickness.
Conclusion
DME is a common cause of visual impairment in diabetic patients. The results of this study suggest that subfoveal choroidal thickness at baseline could be used as a predictive factor for visual outcomes in treatment-naive patients with DME after a short course of bevacizumab. Consequently, patients who had a thicker subfoveal choroidal thickness at baseline were more likely to experience a notable improvement in vision with bevacizumab therapy.
Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition.
Introduction
Multilayer generative models have recently achieved excellent recognition results on many challenging datasets (Ranzato & Hinton, 2010;Quoc et al., 2010;Mohamed et al., 2011). These models share the same underlying principle of first learning generatively from data before using the learned latent variables (features) for discriminative tasks. The advantage of using this indirect approach for discrimination is that it is possible to learn meaningful latent variables that achieve strong generalization. In vision, illumination is a major cause of variation. When the light source direction and intensity changes in a scene, dramatic changes in image intensity occur. This is detrimental to recognition performance as most algorithms use image intensities as inputs. A natural way of attacking this problem is to learn a model where the albedo, surface normals, and the lighting are explicitly represented as the latent variables. Since the albedo and surface normals are physical properties of an object, they are features which are invariant w.r.t. illumination.
Separating the surface normals and the albedo of objects using multiple images obtained under different lighting conditions is known as photometric stereo (Woodham, 1980). Hayakawa (1994) described a method for photometric stereo using SVD, which estimated the shape and albedo up to a linear transformation. Using integrability constraints, Yuille et al. (1999) proposed a similar method to reduce the ambiguities to a generalized bas relief ambiguity. A related problem is the estimation of intrinsic images (Barrow & Tenenbaum, 1978;Gehler et al., 2011). However, in those works, the shading (inner product of the lighting vector and the surface normal vector) instead of the surface normals is estimated. In addition, the use of three color channels simplifies that task.
In the domain of face recognition, Belhumeur & Kriegman (1996) showed that the set of images of an object under varying lighting conditions lie on a polyhedral cone (illumination cone), assuming a Lambertian reflectance and a fixed object pose. Recognition algorithms were developed based on the estimation of the illumination cone (Georghiades et al., 2001; Lee et al., 2005). The main drawback of these models is that they require multiple images of an object under varying lighting conditions for estimation. While Zhang & Samaras (2006); Wang et al. (2009)

Figure 1. Diagram of the Lambertian Reflectance model. ℓ ∈ R^3 points to the light source. n_i ∈ R^3 is the surface normal, which is perpendicular to the tangent plane at a point on the surface.
In this paper, we introduce a generative model which (a) incorporates albedo, surface normals, and the lighting as latent variables; (b) uses multiplicative interaction to approximate the Lambertian reflectance model; (c) learns from sets of 2D images the distributions over the 3D object shapes; and (d) is capable of one-shot recognition from a single training example.
The Deep Lambertian Network (DLN) is a hybrid undirected-directed model with Gaussian Restricted Boltzmann Machines (and potentially Deep Belief Networks) modeling the prior over the albedo and surface normals. Good priors over the albedo and normals are necessary since for inference with a single image, the number of latent variables is 4 times the number of observed pixels. Estimation is an ill-posed problem and requires priors to find a unique solution. A density model of the albedo and the normals also allows for parameter sharing across individual objects that belong to the same class. The conditional distribution for image generation follows from the Lambertian reflectance model. Estimating the albedo and surface normals amounts to performing posterior inference in the DLN model with no requirements on the number of observed images. Inference is efficient as we can use alternating Gibbs sampling to approximately sample latent variables in the higher layers. The DLN is a permutation invariant model which can learn from any object class and strikes a balance between laborious approaches in vision (which require 3D scanning (Blanz & Vetter, 1999)) and the generic unsupervised deep learning approaches.
Gaussian Restricted Boltzmann Machines
We briefly describe the Gaussian Restricted Boltzmann Machines (GRBMs), which are used to model the albedo and surface normals. As the extension of binary RBMs to real-valued visible units, GRBMs (Hinton & Salakhutdinov, 2006) have been successfully applied to tasks including image classification, video action recognition, and speech recognition (Lee et al., 2009; Krizhevsky, 2009; Taylor et al., 2010; Mohamed et al., 2011). GRBMs can be viewed as a mixture of diagonal Gaussians with shared parameters, where the number of mixture components is exponential in the number of hidden nodes. With visible nodes v ∈ R^{N_v} and hidden nodes h ∈ {0, 1}^{N_h}, the energy of the joint configuration is given by:

E(v, h) = \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i,j} \frac{v_i}{\sigma_i} W_{ij} h_j - \sum_j c_j h_j

The conditional distributions needed for inference and generation are given by:

p(h_j = 1 | v) = \sigma\big( c_j + \sum_i W_{ij} v_i / \sigma_i \big)    (1)

p(v_i | h) = \mathcal{N}\big( v_i; \; b_i + \sigma_i \sum_j W_{ij} h_j, \; \sigma_i^2 \big)    (2)

where \sigma(x) = 1/(1 + e^{-x}) is the logistic function. Additional layers of binary RBMs are often stacked on top of a GRBM to form a Deep Belief Net (DBN). Inference in a DBN is approximate but efficient, where the probability of the higher layer states is a function of the lower layer states (see Eq. 1).
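One step of the alternating Gibbs sampling used for GRBM inference can be sketched as below, following the standard Gaussian RBM parameterisation (toy dimensions and all variable names are assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grbm_sample_h_given_v(v, W, c, sigma):
    # p(h_j = 1 | v) = sigmoid(c_j + sum_i W_ij * v_i / sigma_i)
    p = sigmoid(c + (v / sigma) @ W)
    return (rng.random(p.shape) < p).astype(float), p

def grbm_sample_v_given_h(h, W, b, sigma):
    # p(v_i | h) = Normal(b_i + sigma_i * sum_j W_ij h_j, sigma_i^2)
    mean = b + sigma * (W @ h)
    return mean + sigma * rng.standard_normal(mean.shape)

# One step of alternating Gibbs sampling with toy dimensions.
n_v, n_h = 4, 3
W = rng.standard_normal((n_v, n_h)) * 0.1
b, c, sigma = np.zeros(n_v), np.zeros(n_h), np.ones(n_v)
v0 = rng.standard_normal(n_v)
h0, _ = grbm_sample_h_given_v(v0, W, c, sigma)
v1 = grbm_sample_v_given_h(h0, W, b, sigma)
```

Repeating the pair of sampling steps yields a Markov chain over (v, h), which is how the higher-layer latent variables are approximately sampled.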
Deep Lambertian Networks
GRBMs and DBNs use Eq. 2 to generate the intensity of a particular pixel v_i. This generative model is inefficient when dealing with illumination variations in v. Specifically, the hidden activations needed to generate a bright image of an object are very different from the activations needed to generate a dark image of the same object. The Lambertian reflectance model is widely used for modeling illumination variations and is a good approximation for diffuse object surfaces (those without any specular highlights). Under the Lambertian model, illustrated in Fig. 1, the i-th pixel intensity is modelled as v_i = a_i · max(n_i^T ℓ, 0). The albedo a_i, also known as the reflection coefficient, is the diffuse reflectivity of a surface at pixel i, which is material dependent but illumination invariant. In contrast to the generative process of the GRBM, the image of an object under different lighting conditions can be generated without changing the albedo and the surface normals. Multiplications within hidden variables in the Lambertian model give rise to this nice property.
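A minimal NumPy sketch of this Lambertian image-formation rule (the function name and toy example are ours):

```python
import numpy as np

def lambertian_render(albedo, normals, light):
    """Render pixel intensities v_i = a_i * max(n_i . l, 0).

    albedo:  (N,) reflection coefficients
    normals: (N, 3) unit surface normals
    light:   (3,) vector pointing at the light source
    """
    shading = np.maximum(normals @ light, 0.0)
    return albedo * shading

# Toy example: two pixels, light from straight ahead (+z).
albedo = np.array([0.5, 1.0])
normals = np.array([[0.0, 0.0, 1.0],    # facing the light
                    [0.0, 1.0, 0.0]])   # perpendicular to it
v = lambertian_render(albedo, normals, np.array([0.0, 0.0, 1.0]))
# v -> [0.5, 0.0]
```

Changing only `light` re-renders the same surface under new illumination, which is exactly the property the text highlights: albedo and normals stay fixed while the image changes.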
The Model
The DLN is a hybrid undirected-directed generative model that combines DBNs with the Lambertian reflectance model. In the DLN, the visible layer consists of image pixel intensities v ∈ R^{N_v}, where N_v is the number of pixels in the image. The first-layer hidden variables are the albedo, the surface normals, and a light source vector. Specifically, for every pixel i, there are two corresponding latent random variables: the albedo a_i ∈ R and the surface normal n_i ∈ R^3. Over an image, a ∈ R^{N_v} is the image albedo and N is the surface normals matrix of dimension N_v × 3, where n_i denotes the i-th row of N. The light source variable ℓ ∈ R^3 points in the direction of the light source in the scene. We use GRBMs to model the albedo and surface normals, and a Gaussian prior to model ℓ. It is important to use GRBMs since we expect the distribution over albedo and surface normals to be multi-modal (see Fig. 4). The DLN combines the elegant properties of the Lambertian model with the GRBMs, resulting in a deep model capable of learning albedo and surface normal statistics from images in a weakly-supervised fashion. The DLN has the following generative process: ℓ is drawn from a Gaussian prior (Eq. 3), a and vec(N) are drawn from GRBM priors (Eq. 4), and each pixel is generated as p(v_i | a, N, ℓ) = N(v_i; a_i n_i^T ℓ, σ²_{v_i}) (Eq. 5), where vec(N) denotes the vectorization of matrix N.
The GRBM prior in Eq. 4 is only approximate since we enforce the soft constraint that the norm of n_i is equal to 1.0; we achieve this via an extra energy term in Eq. 6. (Extending the model to more flexible DBN priors is straightforward.) Eq. 5 represents the probabilistic version of the Lambertian reflectance model, where we have dropped the "max" for convenience; the "max" is not critical in our model, as maximum likelihood learning regulates the generation process. In addition, a prior on the lighting direction fits well with the psychophysical observation that human perception of shape relies on the assumption that light originates from above (Kleffner & Ramachandran, 1992).
DLNs can also handle multiple images of the same object under varying lighting conditions. Let P be the number of images of the same object. We use L ∈ R^{3×P} to represent the lighting matrix with columns {ℓ_p : p = 1, 2, ..., P}, and V ∈ R^{N_v×P} to represent the matrix of corresponding images. The DLN energy function is defined as:

E(V, a, N, L, h, g) = Σ_p Σ_i (v_ip − a_i n_i^T ℓ_p)² / (2σ²_{v_i})
    + Σ_p ½ (ℓ_p − µ_ℓ)^T Λ (ℓ_p − µ_ℓ) + (η/2) Σ_i (n_i^T n_i − 1.0)²
    + E_GRBM(a, h) + E_GRBM(vec(N), g)   (6)

The first line of the energy function is proportional to −log p(V | a, N, L), the multiplicative interaction term from the Lambertian model. The second line corresponds to the quadratic energy of −log p(L) and the soft norm constraint on n_i. This constraint is critical for the correct estimation of the albedo, since, without it, the albedo at each pixel could be absorbed into the L2 norm of the pixel's surface normal. The third line contains the two GRBM energies: h ∈ {0, 1}^{N_h} represents the binary hidden variables of the albedo GRBM and g ∈ {0, 1}^{N_g} represents the hiddens of the surface normal GRBM.
Inference
Given images of the same object under one or more lighting conditions, we want to infer the posterior distribution over the latent variables (including albedo, surface normals, and light source): p(a, N, L, g, h | V). With GRBMs modeling the albedo a and surface normals N, the posterior is complicated, with no closed-form solution. However, we can resort to Gibbs sampling using 4 sets of conditional distributions:
• Conditional 1: p(g, h | a, N, L, V)
• Conditional 2: p(a | N, L, h, V)
• Conditional 3: p(L | N, a, V)
• Conditional 4: p(N | a, L, g, V)
Conditional 1 is easy to compute as it factorizes over g and h: p(g, h | a, N, L, V) = p(h | a) p(g | N). Since Gaussian RBMs model the albedo a and the surface normals N, the two factorized conditional distributions have the same form as Eq. 1.
Conditional 2 factorizes into a product of Gaussian distributions over the N_v pixel-specific albedo variables, where s_ip = n_i^T ℓ_p is the illumination shading at pixel i and φ_i^h = b_i + σ²_{a_i} Σ_j W_ij h_j is the top-down influence of the albedo GRBM.
This conditional distribution has a very intuitive interpretation. When a light source has zero strength (ℓ_p = 0, so s_ip = 0), p(a_i | n_i, ℓ_p, h, v_i) has its mean at φ_i^h, which is purely the top-down activation. Conditional 3 factorizes into a product distribution over the P separate light variables: p(L | N, a, V) = ∏_{p=1}^{P} p(ℓ_p | N, a, v_p), where p(ℓ_p | N, a, v_p) is defined by a quadratic energy function; hence the conditional distribution over ℓ_p is a multivariate Gaussian. Conditional 4 can be decomposed into a product of distributions over the surface normals of each pixel. Since our model includes the soft norm constraint on n_i (the term (η/2) Σ_i (n_i^T n_i − 1.0)²), there is no simple closed form for p(n_i | L, g, a_i, v_i). We use the Hamiltonian Monte Carlo (HMC) algorithm for sampling.
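Since each ℓ_p appears linearly inside a Gaussian likelihood, its conditional follows from standard Gaussian conjugacy. The sketch below assumes the textbook Bayesian-linear-regression form (posterior precision = prior precision Λ plus data precision), which is consistent with the quadratic energy described above; the function name and the noiseless toy check are mine:

```python
import numpy as np

def light_posterior(V_p, a, N, sigma_v2, mu_l, Lam):
    """Gaussian posterior over one light vector l_p given albedo a (Nv,),
    normals N (Nv, 3), image column V_p (Nv,), per-pixel variances sigma_v2 (Nv,),
    prior mean mu_l (3,) and prior precision Lam (3, 3)."""
    A = a[:, None] * N                              # rows are a_i * n_i^T
    prec = Lam + (A / sigma_v2[:, None]).T @ A      # posterior precision
    rhs = Lam @ mu_l + (A / sigma_v2[:, None]).T @ V_p
    mean = np.linalg.solve(prec, rhs)               # posterior mean
    return mean, prec

# noiseless toy check: with a weak prior, the mean recovers the true light
rng = np.random.default_rng(1)
N = rng.standard_normal((50, 3)); N /= np.linalg.norm(N, axis=1, keepdims=True)
a = rng.uniform(0.5, 1.0, 50)
l_true = np.array([0.2, -0.3, 0.9])
V_p = (a[:, None] * N) @ l_true
mean, prec = light_posterior(V_p, a, N, np.full(50, 1e-4), np.zeros(3), 1e-6 * np.eye(3))
```

With small pixel variance and a near-flat prior, the posterior mean is essentially the least-squares light estimate, matching the intuition that the light is well determined when many shaded pixels are observed.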
HMC (Duane et al., 1987; Neal, 2010) is an auxiliary-variable MCMC method which combines Hamiltonian dynamics with the Metropolis algorithm to sample continuous random variables. In order to use HMC, we must have a differentiable energy function over the variables. In this case, the energy of conditional 4 takes the form

E(n_i) = Σ_p (v_ip − a_i n_i^T ℓ_p)² / (2σ²_{v_i}) + ½ (n_i − φ_i^g)^T D_i (n_i − φ_i^g) + (η/2)(n_i^T n_i − 1.0)²

where φ_i^g is the top-down mean of n_i from the g-layer, and D_i = diag(σ⁻²_{n_i1}, σ⁻²_{n_i2}, σ⁻²_{n_i3}) is a 3 × 3 diagonal precision matrix.
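A generic HMC transition of the kind used here can be sketched as follows. The standard Gaussian target is only a stand-in for the actual conditional energy over n_i, and the step size, trajectory length, and function names are illustrative:

```python
import numpy as np

def leapfrog(x, p, grad_logp, eps, L):
    """Leapfrog integration of Hamiltonian dynamics for L steps of size eps."""
    x, p = x.copy(), p.copy()
    p += 0.5 * eps * grad_logp(x)          # initial half step for momentum
    for i in range(L):
        x += eps * p                       # full step for position
        if i < L - 1:
            p += eps * grad_logp(x)        # full step for momentum
    p += 0.5 * eps * grad_logp(x)          # final half step for momentum
    return x, p

def hmc_sample(x0, logp, grad_logp, eps=0.2, L=10, n=4000, seed=0):
    """Run n HMC transitions: leapfrog proposal + Metropolis accept/reject."""
    rng = np.random.default_rng(seed)
    x, samples, accepts = x0.copy(), [], 0
    for _ in range(n):
        p = rng.standard_normal(x.shape)   # resample auxiliary momentum
        x_new, p_new = leapfrog(x, p, grad_logp, eps, L)
        # accept with probability exp(H_old - H_new), H = -logp + kinetic energy
        log_alpha = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.random()) < log_alpha:
            x, accepts = x_new, accepts + 1
        samples.append(x.copy())
    return np.array(samples), accepts / n

# toy target: standard 3-D Gaussian, log p(x) = -0.5 ||x||^2
logp = lambda x: -0.5 * (x @ x)
grad_logp = lambda x: -x
samples, acc = hmc_sample(np.zeros(3), logp, grad_logp)
```

For the DLN, `logp` and `grad_logp` would be replaced by the negative of E(n_i) above and its gradient with respect to n_i.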
We note that there is a linear ambiguity when we estimate the normals and lighting direction: in Eq. 5, n_i^T ℓ_p = n_i^T R R⁻¹ ℓ_p for any invertible matrix R. This means that we can only estimate n_i and ℓ_p up to a linear transformation. Fortunately, while R is unknown, it is constant across {v_p}_{p=1}^{P} due to the learned priors over N, a, and ℓ. Therefore, recognition and image relighting tasks (Sec. 4) are not affected.
Learning
Learning is accomplished using a variant of the EM algorithm. In the approximate E-step, MCMC samples are drawn from the posterior distribution (Neal & Hinton, 1998): we sample from the conditional distributions in Sec. 3.2 to approximate the posterior p(a, N, L, h, g | V; θ_old). In the M-step, we optimize the joint log-likelihood function with respect to the model parameters θ by gradient ascent with learning rate α, approximating the required expectation with the samples {a^(i), N^(i), L^(i), h^(i), g^(i)} approximately drawn from p(a, N, L, h, g | V; θ_old) in the E-step. Maximum likelihood learning of GRBMs (and DBNs) is intractable, so we turn to Contrastive Divergence (CD) (Hinton, 2002) to compute an approximate gradient during learning. The complete training algorithm for the DLN is presented in Alg. 1.
Rather than starting with randomly initialized weights, we can achieve better convergence by first training the albedo GRBM on a separate face database. We can then transfer the learned weights before learning the complete DLN.
Experiments
We experiment with the Yale B and the Extended Yale B face databases. Combined, the two databases contain 64 frontal images of each of 38 different subjects. For each subject, 45 of these images are further divided into 4 subsets of increasing illumination variation. Fig. 3 shows samples from the Yale B and Extended Yale B databases.
For each subject, we used approximately 45 frontal images for our experiments (a few of the images are corrupted). We separated 28 subjects from the Extended Yale B database for training and held out all 10 subjects from the original Yale B database for testing. The preprocessing step involved downsizing the face images to the resolution of 24×24.

Algorithm 1: DLN training.
repeat
  // Approximate E-step:
  for n = 1 to #training subjects do
    Given V_n, sample p(a, N, L, h, g | V_n; θ_old) using the conditionals defined in Sec. 3.2, obtaining samples of {a^(i), N^(i), L^(i)}.
  end for
  // Approximate M-step:
  Treating {a^(i)} as training data, CD is used to learn the weights of the albedo GRBM.
  Treating {N^(i)} as training data, CD is used to learn the weights of the surface normal GRBM.
  Maximum likelihood estimates of the parameters σ²_{v_i}, µ_ℓ, and Λ are computed.
until convergence

Figure 3. Examples from the Extended Yale B face database; each row (S1-S4) contains samples from one illumination subset.

Using the equations of Sec. 3.2, we can infer one albedo image and one set of surface normals for each of the 28 training subjects. These 28 albedo and surface normal samples are insufficient for multilayer generative models with millions of parameters. Therefore, we leverage a large set of face images from the Toronto Face Database (TFD) (Susskind et al., 2011), a collection of 100,000 face images from a variety of other datasets. To create more training data for the surface normals, we randomly translated all 28 sets of them by ±2 pixels.
The DLN used 2 layer DBNs (instead of single layer GRBMs) to model the priors over a and N. The albedo DBN had 800 h 1 nodes and 200 h 2 nodes. The normals DBN had 1000 g 1 nodes and 100 g 2 nodes. To see what the DLN's prior on the albedo looks like, we show samples generated by the albedo DBN in Fig. 4. Learning the multi-modal albedo prior is made possible by the use of unsupervised TFD data.
Inference
After learning, we investigated the inference process in the DLN. Although the DLN can use multiple images of the same object during inference, it is important to investigate how well it performs with a single test image. We are also interested in the number of iterations that sampling would take to find the posterior modes.
In our first experiment, we presented the model with a single Yale B face image from a held-out test subject, as shown in Fig. 5. The light source illuminates the subject from the bottom right, causing a significant shadow across the top left of the subject's face. Since the albedo captures a lighting-invariant representation of the face, the correct posterior distribution should automatically perform illumination normalization. Using the algorithm described in Sec. 3.2, we clamp the visible nodes to the test face image and sample from the 4 conditionals in an alternating fashion; HMC was used to sample N. In total, we perform 50 iterations of alternating Gibbs sampling. During each iteration, the N variables are sampled using HMC with 20 leapfrog iterations and 10 HMC epochs. The step size was set to 0.01 with a momentum of 2.0. The acceptance rate was around 0.7.
We plot the intermediate samples from iterations 1 to 50 in Fig. 5. The top row displays the inferred albedo a. At every pixel, there is a surface normal vector n i ∈ R 3 . For visual presentation, we treat each n i as a RGB pixel and plot them as color images in the bottom row. Note that the Gibbs chain quickly jumps (at iteration 5) into the correct mode. Good results are obtained due to the knowledge transfer of the albedo and surface normals learned from other subjects.
We next randomly selected single test images from the 10 Yale B test subjects. Using exactly the same sampling algorithm, Fig. 6(a) shows their inferred albedo and surface normals. The first column displays the test image, and the middle and right columns contain the estimated albedo and surface normals, respectively. We also found that using two test images per subject improves performance: specifically, we sampled from p(a, N | V ∈ R^{N_v×2}) instead of p(a, N | v ∈ R^{N_v}). The results are displayed in Fig. 6(b).
Relighting
The task of face relighting is useful to demonstrate strong generalization capabilities of the model. The goal is to generate face images of a particular person under never-before seen lighting conditions. Realistic images can only be generated if the albedo and surface normals of that particular person were correctly inferred. We first sample the lighting variable from its Gaussian prior defined by {µ, Λ}. Conditioned on the inferred a and N (see Fig. 6(b)), we use Eq. 5 to draw samples of v. Fig. 6(c) shows relighted face images of held-out test subjects.
Recognition
We next test the performance of the DLN at the task of face recognition. For the 10 test subjects of Yale B, only image(s) from subset 1 (with 7 images) are used for training; images from subsets 2-4 are used for testing. In order to use the DLN for recognition, we first infer the albedo (a_i) and surface normals (n_i) conditioned on the provided training image(s) of the test subjects. For every subject, a 3-dimensional linear subspace is spanned by the inferred albedo and surface normals; in particular, we consider the matrix M of dimensions N_v × 3, with the i-th row set to m_i = a_i n_i. In the baseline methods, for every test image, its cosine similarity to all training images is computed and the test image takes on the label of the closest training image. Normalized Correlation performs significantly better than Nearest Neighbor due to its normalization, which removes some of the lighting variations. Finally, the SVD method finds a 3-dimensional linear subspace (with the largest singular values) spanned by the training images of each of the test subjects; a test image is assigned to the closest subspace.
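The subspace-based classification step can be sketched as follows: build a 3-dimensional orthonormal basis per subject from the top singular vectors, then classify a test image by its smallest projection residual. The toy "subjects", whose images live in disjoint coordinate subspaces, are my own illustration:

```python
import numpy as np

def subspace_basis(X, k=3):
    """Orthonormal basis (top-k left singular vectors) of training images X (Nv, n)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def classify(test, bases):
    """Assign the test image to the subspace with the smallest residual norm."""
    resid = [np.linalg.norm(test - B @ (B.T @ test)) for B in bases]
    return int(np.argmin(resid))

# toy example: two 'subjects' supported on disjoint pixel coordinates
rng = np.random.default_rng(0)
X0 = np.zeros((10, 5)); X0[:3] = rng.standard_normal((3, 5))    # subject 0
X1 = np.zeros((10, 5)); X1[5:8] = rng.standard_normal((3, 5))   # subject 1
bases = [subspace_basis(X0), subspace_basis(X1)]
test = np.zeros(10); test[5:8] = [1.0, -2.0, 0.5]               # lies in subject 1's span
label = classify(test, bases)
```

The same residual rule applies whether the per-subject subspace comes from raw training images (the SVD baseline) or from the DLN-inferred matrix M with rows m_i = a_i n_i.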
We note that for the important task of one-shot recognition, DLN significantly outperforms many other methods. In the computer vision literature, Zhang & Samaras (2006); Wang et al. (2009) report lower error rates on the Yale B dataset. However, their algorithms make use of pre-existing 3D morphable models, whereas the DLN learns the 3D information automatically from 2D images.
Generic Objects
The DLN is applicable not only to face images but also to images of generic objects. We used 50 objects from the Amsterdam Library of Images (ALOI) database (Geusebroek et al., 2005). For every object, 15 images of varying lighting were divided into 10 for training and 5 for testing. Using the provided masks for each object, images were cropped and rescaled to the resolution of 48 × 48. We used a DLN with N_h = 1000 and N_g = 1500; a 500-unit h_2 layer and a 500-unit g_2 layer were also added. After training, we performed posterior inference using one of the held-out images. Fig. 8 shows the results: the top row contains test images, the middle row displays the inferred albedo images after 50 alternating Gibbs iterations, and the bottom row shows the inferred surface normals.
Discussions
We have introduced a generative model with meaningful latent variables and multiplicative interactions simulating the Lambertian reflectance model. We have shown that by learning priors on these illumination-invariant variables directly from data, we can improve on one-shot recognition tasks as well as generate images under novel illuminations.
Interactive Power Factor Management with Incentives Towards Reduction in Fuel Consumption and Carbon Emission
Reducing costs and emissions and improving efficiency in electric power networks are becoming urgent, so improvements in the industry's operation are necessary. A case in point is the current effort to maintain an acceptable consumer operational PF (PFopr) gauged against a reference PF (PFref), where penalties are levied on a monthly averaged PFopr below PFref. These efforts can be enhanced if based on interactive involvement and participation between consumers and service providers (SPs), mainly through the fair implementation of penalties and incentives; enticing consumer participation enables mutual benefits. The treatment of PF currently adopted in Saudi Arabia is based on average monthly measurements of consumers' PFopr over an extended period. In this spirit, a novel mathematical model and framework are presented, consisting of a time-referenced function relating the applicable tariff to PFopr, thus benefiting the SPs by reducing capital and maintenance costs, providing the flexibility to focus on peak load periods, and awarding incentives to consumers maintaining PF in an acceptable range. The model was implemented on energy measurements at four industrial facilities for one year, and the results were verified in terms of the reduction in losses, the resulting monetary benefits, and the reduction in CO2 emissions.
I. INTRODUCTION
LOW PF implies reduced operating efficiency, which results in a need for larger conductors (wires) and increased equipment capacity, as well as causing voltage drops as power losses increase. These equate to higher capital investments and expenses and lower system performance. PF correction contributes to energy savings in general, in proportion to the PF difference and to how heavily loaded the inductive devices in the distribution system are. Moreover, correcting PF can bring significant savings in energy bills if the utility imposes a low-PF penalty in its rate structure, as most utilities do for industrial consumers [1]-[3].
Most international utilities levy specified fees when PFopr falls below PFref. An extensive review of utility tariff practices shows several representative examples. Under the tariff in [4], customers with a PF of less than 90% see an adjustment to the demand kW on which they are billed; the adjustment is calculated based on a formula considering the level and duration of PF utilization. For Duquesne Light Company (DLC), the policy indicates that if a customer is not using the available electricity efficiently, a higher cost is imposed via a PF multiplier adjustment [5]. For Oncor Electric Delivery Company (ONCOR), the tariff policy states that if the PF of a retail customer is found to be less than 95% lagging, the customer may be required to install appropriate equipment for PF correction; if they fail to correct the PF consistent with this standard, the demand associated with their use of delivery service, as determined in the appropriate rate schedules, may be increased according to the formulas in [6]. The PF section of the British Columbia Hydro tariff states that if the customer's average PF for the billing period falls below 90%, the bill increases by a certain percentage applied to the total of all other charges for the same period [7]. The Egyptian utilities provide bonuses or rewards to customers for maintaining a high PFopr, in the range of 0.92 to 0.95, registered at the end of the year [8].
The Electric Power Research Institute (EPRI) conducted a study [9], entitled Assessment of Transmission and Distribution Losses in New York State, for the New York State Energy Research and Development Authority (NYSERDA), Albany, NY. The study required New York state utilities to identify measures to reduce system losses and/or optimize system operations, and also considered the effect of the electric power tariff on losses. Allegheny Power, in the Midwest and Mid-Atlantic regions of the US, applied a kVAR charge when the customer's kVAR capacity exceeded 25% of their kilowatt capacity. PEPCO (DC and Maryland, USA) apparently does not charge customers for reactive demand, except for time-metered rapid transit service accounts; the monthly billing reactive demand is the maximum 30-minute integrated coincident kVAR demand of each delivery point served, less the kVAR supplied for an 85% PF. For Georgia Power, if there is an indication of a PF less than 95% lagging, the company may, at its option, install metering equipment to measure the reactive demand, which shall be the highest 30-minute kVAR measured during the month. The excess reactive demand shall be the kVAR over one-third of the measured actual kW in the current month; the company bills excess kVAR at the rate of USD 0.27 per kVAR.
In Saudi Arabia, the Saudi Electricity Company (SEC), based on the Water and Energy Regulatory Authority's (WERA) decision [10] for non-household consumption, charges for PFopr below 0.9 for connected loads above one MVA, and is in the process of increasing the threshold to 0.95 for the same connected load. A penalty charge of SAR 0.05 (USD 1 = SAR 3.75) is applied for every additional kVARh registered monthly exceeding 48.4% of the registered active energy consumption, corresponding to a PFopr of 0.9 [10]. Thus, the treatment of PF currently adopted is based on the average monthly measurement of the consumer's PFopr, with a progressive penalty applied for PFopr lower than PFref. This local and international treatment of low PF should be adjusted to correct two major shortcomings:
• The monthly average measurement gives misleading information, as it does not allow a focus on critical periods, i.e., peak load.
• While in some cases [8], [11] a flat numerical credit is awarded for improvements in the monthly average PF, this does not provide progressive incentives to consumers, and thus does not encourage them to promote further improvements in PFopr, even at a relatively higher marginal investment cost, all subject to cost/benefit analysis.
In this spirit, a practical and flexible framework of tariff, with incentive and penalty, is proposed, with a protocol to rectify these shortcomings. Moreover, an estimate of the monetary values related to the variation in fuel consumption and CO2 emissions associated with changes in PFopr has been investigated. The paper is organized as follows. Section 2 builds the background by giving the basis of the mathematical model, citing the current scenario of transmission and distribution losses in the Saudi Arabian context, fuel consumption for energy generation and resulting CO2 emissions, and, lastly, the relationship of changes in PF with the losses.
Based on this background, Section 3 presents the proposed unified framework for incentivepenalty based tariff, a stepwise guide for its implementation, and a comparison of the proposed framework with that of the currently implemented tariff using the energy data of four major industrial entities, collected for a period of one year.
II. BACKGROUND
The presented approach is based on the concept proposed by Zedan et al. [1], which links PF with the applicable tariff in (1), providing a mathematical relation that gives the consumption-charge resultant tariff (T_res) as a function of the PFopr registered over a timescale of minutes, thereby providing continuous, time-referenced action/operation:

T_res = T [ N + (1 − N) × PF_ref / PF_opr ]   (1)
where T_res is the resulting variable consumption-charge tariff in SAR/kWh, T is the fixed assigned tariff in SAR/kWh, N is a variable factor (0 ≤ N < 1), PF_ref is the assigned reference PF, and PF_opr is the operational PF incurred by the consumer.
With the advent of digital meters, the proposed model empowers the regulators/SPs to implement a framework for penalties and incentives, tailored to consumers' operations. Additionally, like time-of-day use, the focus can be placed on critical periods (e.g., peak load).
Equation (1) can be written in terms of the consumer's real energy W (kWh) and apparent energy S (kVAh), using W/S = PF_opr, as

T_res × W = T [ N × W + (1 − N) × PF_ref × S ]   (2)

which can be interpreted as: consumption charge = charge on real energy plus charge on drawn apparent energy times PF_ref.
With available smart meters, the model in (1) allows a time-related, gradual penalty or incentive for PF_opr lower or higher than PF_ref, respectively. Since T_res is inversely related to PF_opr, the penalty or incentive increases or decreases with the variation in PF_opr; this provides the advantage of a gradual change in T_res per kVARh, rather than what is currently applied, i.e., a constant penalty or reward irrespective of the degree of deviation from PF_ref. Furthermore, the value of N can be assigned (tailored) to fairly fit the operational practice of each category of consumers.
The correct adjustment of N in (1) provides the means not only to levy penalties but also to advance fair and acceptable incentives that encourage consumers to exert effort and carry the additional investments needed to maintain a high PF_opr. Furthermore, it is shown in this paper that segregating N into N_p for the penalty and N_i for the incentive allows the two sides of the protocol to be assigned independently. Depending on the consumption pattern of the category of consumers, a higher value of N places more emphasis on the kWh and consequently less emphasis on the kVAh. The paper presents a reliable basis for a narrow range of N values for each category of consumers with respect to the changes made in the respective PF_opr, along with the resulting positive/negative effects; it also presents calculations of the associated monetary values in terms of the changes in heat losses, fuel consumption, and CO2 emissions. To establish a reliable base for the approach, and since the instantaneous variation in PF_opr is inversely proportional to the drawn current I, the resulting current-related heat losses are chosen as the best tool to quantify the corresponding positive/negative effects.
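The tariff relation T_res = T[N + (1 − N)PF_ref/PF_opr] with segregated factors N_p and N_i can be sketched directly. The tariff value below is illustrative, and the particular N_p = 0.2 and N_i = 0.45 are the example settings derived later in Sec. III-B:

```python
def resulting_tariff(T, pf_opr, pf_ref=0.9, n_p=0.2, n_i=0.45):
    """T_res = T * (N + (1 - N) * PF_ref / PF_opr), Eq. (1), with
    N = N_p below PF_ref (penalty side) and N = N_i at/above it (incentive side).
    n_p = 0.2 and n_i = 0.45 are the example values chosen in Sec. III-B."""
    n = n_p if pf_opr < pf_ref else n_i
    return T * (n + (1.0 - n) * pf_ref / pf_opr)

T = 0.18  # SAR/kWh, illustrative fixed tariff
t_incentive = resulting_tariff(T, 0.99)  # about 5% below T (incentive)
t_penalty = resulting_tariff(T, 0.80)    # about 10% above T (penalty)
```

With smart-meter data, applying this function per metering interval (rather than to a monthly average) is what allows the framework to focus on critical periods such as peak load.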
A. TRANSMISSION AND DISTRIBUTION LOSSES
Part of the generated energy to serve consumers is dissipated as heat loss across the network. The quantifiable portion thereof is the current related T & D losses. In Saudi Arabia, as per Water Electricity Regulatory Authority's (WERA) 2018 report [12], 9.5% of the generated electricity is dissipated yearly as losses in the Saudi network. The report also showed that the delivered yearly energy to consumers is 299,188 GWh. Based on this data, it is possible to calculate the generated electricity as 330,594.48 GWh and losses that account for 31,406.48 GWh. These losses are classified into two categories, technical and non-technical. The former is due to the energy dissipated in the conductors, T & D lines equipment, and magnetic losses in transformers, while the latter is due to error in the meter reading, billing of consumer energy consumption, lack of administration, and financial constraints, as well as energy thefts.
The technical losses depend on both the mode of operation and the network characteristics, which can be classified into fixed losses, which are not affected by drawn current such as corona losses, leakage current losses, dielectric losses, and the variable losses, which are proportional to the square of the current. In this study, an evaluation of the effect of improving PF on transmission-connected industries is done by considering only the variable technical losses.
International standards and utility experiences indicate that 30% of the T & D losses are attributed to transmission Ohmic or current related losses [13], [14]. Based on this number, transmission Ohmic losses in Saudi Arabia are calculated as 9,421.94 GWh (30% of 31,406.48) a year. In addition, the ratio of transmission Ohmic losses to sold energy is given as 3.149%, which implies that to deliver 100 GWh to a load, 3.149 GWh is lost in the transmission system as Ohmic losses.
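The loss figures above follow from the two reported quantities (delivered energy and the 9.5% loss rate) plus the assumed 30% Ohmic share; a quick arithmetic check:

```python
# Reproduce the Saudi network loss figures quoted from the WERA 2018 report
delivered = 299_188.0                        # GWh delivered to consumers
loss_rate = 0.095                            # 9.5% of generated energy lost
generated = delivered / (1 - loss_rate)      # about 330,594.5 GWh generated
total_losses = generated - delivered         # about 31,406.5 GWh of T & D losses
ohmic = 0.30 * total_losses                  # about 9,421.9 GWh (30% Ohmic share)
ohmic_per_sold = ohmic / delivered           # about 3.149% of sold energy
```

The last ratio reproduces the statement that delivering 100 GWh to a load dissipates about 3.149 GWh as transmission Ohmic losses.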
To depict the above losses in terms of the required thermal energy (BTUs), to be followed by resulting CO2 emissions, the following calculations are adopted based on WERA report data [12]. The yearly consumption of fuel for electricity, desalination, and steam production in Saudi Arabia is 3897 MMBTUs; out of this, the consumption for electricity generation is 87% (3390.39 MMBTUs). The portion of different fuel types used for electricity generation is 57% using Natural Gas (NG), 22% using Heavy Fuel Oil (HFO), 18% using Crude Oil (CO), and 3% using Diesel.
B. FUEL CONSUMPTION
To find the amount of input thermal energy in BTUs required to generate one GWh of electric energy, we note that the total electrical energy is generated from the 3390.39 MMBTUs of fuel input noted above. Table 1 shows energy generation by source in GWh for Saudi Arabia [15], where a significant increase in natural gas consumption can be seen compared with oil, for nearly the same amount of energy generated.
C. CO2 EMISSION
Around 40% of the total CO2 emissions in Saudi Arabia are attributed to the energy sector, followed by the industrial processes and agricultural sectors [15]. Currently, the Kingdom is making significant efforts and investments, as well as policy measures, to reduce mainly these emissions in line with Article 12.1(b) of the United Nations Framework Convention on Climate Change (UNFCCC): modernization of the power sector, the establishment of economic cities, investment in infrastructure, and the development and use of renewable energy and gas, to name a few [16]. Implementing PF improvement strategies in the industrial sector, which accounts for approximately 18% of the total electrical consumption as shown in table 2, can result in a significant reduction in CO2 emissions. In the context of the Chinese iron and steel industry, [17] showed that the proposed policies on energy savings and emission reduction could result in a cumulative reduction of 818.3 MtCO2 during the period 2015-2030, compared with the existing policies. For Saudi Arabia, as shown in table 2, two major sectors contribute heavily to CO2 production, namely Electricity Heat Producers and Transport. These two sectors showed reductions in CO2 emissions of 9% and 5%, respectively, as given in table 2 and figure 1; other sectors' emissions remained roughly the same. This can be explained by the switch from oil to natural gas and solar PV for energy generation, because the latter two sources result in significantly less total CO2 emission. As shown in table 1, there was an 8% decrease in oil consumption, with a similar (7%) increase in natural gas usage for energy generation from 2017 to 2018, with almost the same electricity generation.
D. RELATION OF CO2 EMISSIONS TO CRUDE OIL BARREL
The calculation of CO2 emissions per equivalent barrel of crude oil is determined as per the Environmental Protection Agency (EPA) Greenhouse Gases Equivalencies Calculations and References [18], by multiplying the barrel heat content by the carbon coefficient, the fraction oxidized, and the ratio of the molecular weight of carbon dioxide to that of carbon (44/12), as follows:
• The average heat content of crude oil is 5.80 mmBTU per barrel [18].
• The average carbon coefficient of crude oil is 20.31 kg carbon per mmBTU [18].
• The fraction oxidized is assumed to be 100 percent [19].
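Carrying out the stated multiplication gives the per-barrel emission factor (the final product is not printed above; the value below is my own evaluation of the EPA formula):

```python
# EPA equivalencies: CO2 per equivalent barrel of crude oil
heat_content = 5.80        # mmBTU per barrel of crude oil
carbon_coeff = 20.31       # kg carbon per mmBTU
fraction_oxidized = 1.0    # assumed 100 percent oxidized
co2_per_barrel_kg = heat_content * carbon_coeff * fraction_oxidized * (44.0 / 12.0)
co2_per_barrel_mton = co2_per_barrel_kg / 1000.0   # about 0.43 metric tons CO2/barrel
```

This factor of roughly 0.43 mTons of CO2 per barrel is what links the barrel savings and the CO2 mitigation figures reported together in tables 3 and 4.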
E. POWER FACTOR AND ENERGY LOSS RELATIONSHIP
In this section, a mathematical relation is presented to relate the variation in PF_opr to the resulting percentage variation in heat losses. Assume a load X draws apparent demand S and real demand P, given as

P = S × PF = V I × PF   (3)

Assuming that, within the period over which PF_opr changes, both P and the receiving-end voltage V are constant, and considering two operating cases with drawn currents I_1, I_2 and operational power factors PF_1, PF_2, equation (3) gives

I_i = C_1 / PF_i,  i = 1, 2   (4)

where C_1 is a constant equal to P/V, and hence

I_2 / I_1 = PF_1 / PF_2   (5)

To find the relationship between I, PF, and the energy loss E_loss over the same time duration t, the following holds for the two cases:

E_loss,i = I_i² R t,  i = 1, 2   (6)

where R is the transmission line resistance. Dividing the two cases of equation (6) for i = 1, 2 gives

E_loss,2 / E_loss,1 = (I_2 / I_1)² = (PF_1 / PF_2)²   (7)

Equations (5) and (7) show that the variation in PF_opr is inversely related to the network's thermal losses, and likewise to the BTU consumption and CO2 emissions. Equation (7), presented in terms of the percentage loss difference, gives

ΔE_loss % = [1 − (PF_1 / PF_2)²] × 100   (8)

Figure 2 shows the relation between the loss difference and the PF_opr improvement. Although equation (8) is a quadratic relation, within the range of PF_opr increasing or decreasing from PF_ref in multiples of ±0.01, a linear approximation gives loss changes in multiples of ∓2.2%; accordingly, an increase of PF_opr from 0.9 to 0.95 reduces the corresponding heat losses, BTU consumption, and CO2 emissions by roughly 11% under this linear approximation (10.25% by the exact quadratic relation).
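The loss-reduction relation can be evaluated directly; the helper name is mine:

```python
def loss_reduction_pct(pf1, pf2):
    """Percentage reduction in I^2*R losses when PF improves from pf1 to pf2
    at constant P and V: 100 * (1 - (pf1/pf2)**2), per the ratio in Eq. (7)."""
    return 100.0 * (1.0 - (pf1 / pf2) ** 2)

step = loss_reduction_pct(0.90, 0.91)   # about 2.2% per +0.01 step near 0.9
jump = loss_reduction_pct(0.90, 0.95)   # exact 10.25%; ~11% by the linear approx
```

Evaluating per ±0.01 step confirms the quoted 2.2% linearization, and summing five such steps overestimates the exact quadratic value only slightly (11% vs. 10.25%).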
III. ANALYSIS AND RESULTS
In this section, the benefits of improving P F opr of the industrial sector in terms of reduction in losses are quantified to a certain degree of accuracy. Also, to reflect these savings on the electricity tariff (T ), a framework is provided for the electricity service provider/regulator, whereby the governing parameters can be chosen to give the desired incentive and penalty for a consumer depending on the changes in P F opr .
A. ASSESSMENT OF PF IMPROVEMENT
Based on the given background, the total savings in terms of equivalent crude oil barrels and CO2 emission mitigation for a 1 GWh load are generalized. Table 3 shows the percentage reduction in energy losses with respect to improvements in PF_opr, given that PF_ref = 0.9.
It can be seen that if a consumer improves P F opr from 0.7 to 0.9, it will result in a saving of at least 22 barrels and 11 mTons of CO2 for 1 GWh load; these results are also shown graphically in figure 3.
Similarly, if a consumer improves PF_opr further, above the PF_ref of 0.9, the results are shown in table 4 and figure 4.
B. SELECTION OF INCENTIVE AND PENALTY FOR CONSUMERS
To reflect the savings or additional cost due to a corresponding change in PF_opr on the electricity tariff, the governing mathematical relationship given in equation (1) is adapted. For PF_opr > PF_ref, T_res < T, i.e., an incentive. Conversely, for PF_opr < PF_ref, T_res > T, which enforces a penalty. To arrive at a fair and acceptable range of T_res giving a reasonable penalty/incentive, the value of N needs to be evaluated based on the proportional sharing of savings/losses between business partners, consumers, and the SP.
To arrive at the required N, the term T/T (= 1) is subtracted on both sides of equation (1), giving equation (9); equation (9) then yields equation (10), which expresses the fractional tariff change F = (T_res − T)/T in terms of N and PF_opr. To understand the relationship between N and F, equation (10) is evaluated with N = 0 and N = 0.99, the minimum and maximum values of N, respectively. Following the constraints further, PF_ref = 0.9 is used, with PF_max = 0.99 and PF_min = 0.8 providing the maximum decrease and increase in T_res, respectively. Defining these limits helps in calculating the boundary values of F as follows:
1) For N = 0,
• F% at PF_max = 0.99 will be −9.09%, which gives the maximum incentive.
• F% at PF_min = 0.8 will be 12.5%, which implies the maximum penalty.
2) For N = 0.99,
• F% at PF_max = 0.99 will be −0.0009%.
• F% at PF_min = 0.8 will be 0.00125%.
These results are shown graphically in figure 5 and table 5.
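The N = 0 boundary values quoted above can be reproduced directly. This is a hedged sketch: with N = 0, the stated numbers (−9.09% and +12.5%) follow from F = PF_ref / PF_opr − 1; the exact dependence of equation (10) on N is not restated here, so only the N = 0 case is computed.

```python
# Hypothetical sketch of the N = 0 boundary case only (function name is ours).

PF_REF = 0.9

def tariff_adjustment_pct(pf_opr: float) -> float:
    """F (%) for N = 0: negative means incentive, positive means penalty."""
    return (PF_REF / pf_opr - 1.0) * 100.0

print(round(tariff_adjustment_pct(0.99), 2))  # -9.09 -> maximum incentive
print(round(tariff_adjustment_pct(0.80), 2))  # 12.5  -> maximum penalty
```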
There are two ways to choose N. The first is to choose a single value that fixes both the incentive and the penalty to corresponding values. The second is to have two different values of N, i.e., N_p and N_i, to allow the choice of different values of penalty and incentive, respectively. Figure 5 shows that if it is desired to give a maximum incentive of 5% at a user's PF_opr = 0.99, then the choice of N = 0.45 also imposes a maximum penalty of approximately 6.5% if the user operates at the lowest PF of 0.8 (the first case for the choice of N). For the second case, if, for instance, F is selected as 5% for the maximum incentive and 10% for the maximum penalty, then N_i = 0.45 and N_p = 0.2, respectively.
C. PROPOSED METHOD'S IMPLEMENTATION
From the implementation point of view, it is possible to "reprogram" the existing energy meters to consider two values of N, as described above. This kind of criterion, where the value of a parameter depends on certain operating conditions, is commonly implemented in many electronic appliances. After reprogramming, the meter will keep a check on PF_opr and choose the selected value of N.
It is worth mentioning that there will be no discontinuity experienced by the energy meter in the calculation of F because of different values of N on both sides of the reference P F ref . This is illustrated by the graph in figure 6, which shows that when P F opr switches from penalty to incentive side, there is no discontinuity observed. In this study, two values of N are used to separately estimate the maximum incentive and penalty. The following steps are outlined to reach the desired value of N using the graph in figure 5.
1) Find the average kWh consumed in a period of one month: X kWh/month.
2) • PENALTY: Calculate the share of losses incurred by the consumer as a percentage of total losses in the network: Y.
• INCENTIVE: Calculate the share of Ohmic losses incurred by the consumer as a percentage of total losses incurred: Y.
3) • PENALTY: Divide the losses by the total energy consumed to give F_p = Y/X %.
• INCENTIVE: Divide the Ohmic losses by the total energy consumed to give F_i = Y/X %.
4) Using the graph of figure 5, choose N_p or N_i according to F_p or F_i, respectively.
The studied sites with available data were analyzed, and suitable values of F_p and F_i were found using the method discussed above; the results are summarized in table 6. The values of N_p and N_i in the table are the same for all industrial entities studied, which adds to the ease of implementing the methodology.
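Steps 1-3 above reduce to a single ratio. A minimal sketch (names are illustrative, not from the paper): monthly consumption X in kWh and the consumer's loss share Y in kWh give the percentage factor F = Y / X × 100, from which N_p or N_i is then read off the F-vs-N curve (figure 5).

```python
# Minimal sketch of steps 1-3 for choosing F_p or F_i.

def factor_pct(monthly_kwh: float, loss_share_kwh: float) -> float:
    """F_p or F_i (%): consumer's loss share divided by consumed energy."""
    return loss_share_kwh / monthly_kwh * 100.0

# e.g. 1 GWh/month with a 5 MWh loss share gives F = 0.5%.
print(factor_pct(1_000_000, 5_000))  # 0.5
```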
D. COMPARISON OF THE PROPOSED MODEL WITH CURRENT MODEL IN KINGDOM OF SAUDI ARABIA
In this section, a comparative exercise is conducted using the current practices in Saudi Arabia (WERA Board of Directors' Decree No. (2/27/33) dated 21/10/1433 H), which apply a charge on the consumer of SAR 0.05 (USD 0.013) for every additional kVARh below the PF_ref of 0.9. This implies that consumers at 0.7 and 0.89 PF_opr are both charged at the same rate. Figure 7 compares the resulting penalties and incentives of the proposed approach (called the Zedan model in this paper), drawn as a red curve, with the current Saudi practice (called the WERA model), drawn as a blue curve. In addition, figure 8 compares T_res under the Zedan model (equation (11)) with that under the current model for the industrial tariff (equation (12)). On the penalty side, both figures show that the two models roughly match for PF_opr down to 0.85, while for PF_opr below 0.85 the Zedan model charges a higher penalty rate. This will encourage low-PF_opr consumers to seek improvement. Furthermore, if it is desired to increase the penalty, N_p can be chosen close to zero. On the incentive side, a maximum reduction of 3.5% in T_res is seen as an incentive to the consumer with PF_opr = 0.99 and N_i = 0.654.
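A hedged illustration of the WERA-style flat charge follows: SAR 0.05 for every kVARh in excess of what PF_ref = 0.9 would allow. The "additional kVARh" computation below is a standard interpretation (excess reactive energy from the power-factor angle), not reproduced from the decree itself, and all names are ours.

```python
import math

# Illustrative sketch of a flat per-kVARh penalty below PF_ref = 0.9.
RATE_SAR_PER_KVARH = 0.05  # assumed flat rate from the decree as cited
PF_REF = 0.9

def wera_penalty_sar(kwh: float, pf_opr: float) -> float:
    """Charge for reactive energy drawn beyond the PF_ref allowance."""
    excess_kvarh = kwh * max(
        0.0, math.tan(math.acos(pf_opr)) - math.tan(math.acos(PF_REF))
    )
    return RATE_SAR_PER_KVARH * excess_kvarh

# A consumer at PF 0.7 pays far more than one at 0.89, but at the same rate;
# above PF_ref there is no charge and, under this model, no incentive either.
print(wera_penalty_sar(1000, 0.70))
print(wera_penalty_sar(1000, 0.89))
print(wera_penalty_sar(1000, 0.95))  # 0.0
```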
To illustrate the magnitude of the potential incentives for the consumers in this study, it is noted that all four of them operated at PF_opr above 0.9. Table 7 compares the resulting tariff under the proposed methodology and under WERA's current practice; with an improved PF_opr above 0.9, the consumers can obtain benefits of SAR 0.1 to 0.9 million per month, which can help cover the costs of PF-improvement equipment and encourage consumers to participate in such practices. It should be noted that the costs and savings indicated in this table are for the feeder under study and not for the whole industry, as the plants are supplied by more than one feeder.
IV. SUMMARY
Equation (1) provides the means of optimizing cooperation between consumers and the SP to achieve mutual benefits. As indicated above, both sides can achieve a fair and acceptable return only when consumers are provided the right incentive. The use of equation (1) does not impede the application of time-of-use pricing: during the targeted periods, T can be set to higher values with the same values of N_i and N_p, or even different ones, and T_res is then computed accordingly.
Field measurements at Saudi Electric Company's (SEC) substations in Jubail Industrial City for the four industrial plants were completed.
• Data analyses were conducted to evaluate savings in Ohmic losses due to improvement in PF_opr, which gave recognizable savings in fuel consumption and CO2 emissions.
• The proposed tariff model (Zedan model) was based on the assigned tariff T, the assigned PF_ref, the consumer's PF_opr, and a variable factor 0 ≤ N ≤ 1 (equation (1)).
• The factor N was selected based on the savings in Ohmic losses in the transmission system. The study designated two values of N, N_p for the penalty and N_i for the incentive, thereby providing the flexibility to set the desired values of incentives and penalties according to prevailing economic considerations.
• The proposed model and WERA's current model for the industrial tariff are compared (fig. 8). The proposed model includes an incentive for consumers to maintain PF_opr above PF_ref.
• The proposed model can be applied with programmed smart meters to compute T_res values at minute intervals, which in effect gives, per interval, the accumulated monthly consumption charge.
V. CONCLUSION
The key differences between the two models (current PF-management protocols and the proposed model) are as follows:
• The first is based on average monthly measurements, whereas the latter (proposed in this paper) facilitates measurement over smaller intervals, down to a minute.
• The first assigns fixed penalty rates, while the latter's penalty/incentive rates are segregated and are a function of time, as given by the resulting charging tariff T_res.
• The takeaways from these differences are: 1) lack of focus on critical periods vs. the ability to focus, hence allowing the needed catering; 2) fixed rates vs. sensitivity to slight deviations from PF_ref, as a function of the degree of deviation.
Infinitesimal Rigidity of Symmetric Frameworks
We propose new symmetry-adapted rigidity matrices to analyze the infinitesimal rigidity of arbitrary-dimensional bar-joint frameworks with Abelian point group symmetries. These matrices define new symmetry-adapted rigidity matroids on group-labeled quotient graphs. Using these new tools, we establish combinatorial characterizations of infinitesimally rigid two-dimensional bar-joint frameworks whose joints are positioned as generic as possible subject to the symmetry constraints imposed by a reflection, a half-turn or a three-fold rotation in the plane. For bar-joint frameworks which are generic with respect to any other cyclic point group in the plane, we provide a number of necessary conditions for infinitesimal rigidity.
Introduction
A d-dimensional bar-joint framework is a straight-line realization of a finite simple graph G in Euclidean d-space. Intuitively, we think of a bar-joint framework as a collection of fixed-length bars (corresponding to the edges of G) which are connected at their ends by joints (corresponding to the vertices of G) that allow bending in any direction of R d . Such a framework is said to be rigid if there exists no non-trivial continuous bar-length preserving motion of the framework vertices, and is said to be flexible otherwise (see [23] for basic definitions and background).
The theory of generic rigidity seeks to characterize the graphs which form rigid frameworks for all generic (i.e., almost all) realizations of the vertices in Euclidean d-space. For d = 2, this problem was first solved by Laman [8] in 1970: Laman proved that a generic two-dimensional bar-joint framework is minimally rigid if and only if the underlying graph G satisfies |E(G)| = 2|V (G)| − 3 and |E(G )| ≤ 2|V (G )| − 3 for any subgraph G of G with |V (G )| ≥ 2, where V (H) and E(H) denote the set of vertices and the set of edges of a graph H, respectively. For dimensions d ≥ 3, however, the analogous questions remain long-standing open problems, although there exist some significant partial results [23].
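As an illustration (not from the paper), Laman's counting condition can be checked by brute force for small graphs. This sketch enumerates all vertex subsets, so it is exponential in |V(G)| and only practical for small graphs; efficient algorithms (e.g., the pebble game) exist but are not shown here.

```python
from itertools import combinations

# Brute-force Laman check: |E| = 2|V| - 3 and |E'| <= 2|V'| - 3 for every
# induced subgraph on at least 2 vertices.

def is_laman(vertices, edges):
    if len(edges) != 2 * len(vertices) - 3:
        return False
    for k in range(2, len(vertices) + 1):
        for sub in combinations(vertices, k):
            s = set(sub)
            e_count = sum(1 for (u, v) in edges if u in s and v in s)
            if e_count > 2 * k - 3:
                return False
    return True

# A triangle is minimally rigid in the plane; K4 is rigid but over-braced.
print(is_laman([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))  # True
print(is_laman([0, 1, 2, 3],
               [(a, b) for a, b in combinations(range(4), 2)]))  # False
```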
The theory of rigid and flexible frameworks has a wide variety of practical applications in many areas of science, engineering and design, where frameworks serve as a suitable mathematical model for various kinds of physical structures, mechanical gadgets (such as linkages or robots), sensor networks, biomolecules, etc. Since many of these structures exhibit non-trivial symmetries, it is natural to explore the impact of symmetry on the rigidity and flexibility properties of frameworks. Over the last decade, this research area has gained ever-increasing attention both in the mathematical community and in the applied sciences. Two separate fundamental research directions can be identified: 1. Forced symmetry: The framework starts in a symmetric position and must maintain this symmetry throughout its motion.
2. Incidental symmetry: The framework starts in a symmetric position, but may move in unrestricted ways.
Over the last few years, significant progress has been made in the rigidity analysis of forced-symmetric frameworks [11,10,22,6,19,20]. A key motivation for this research is that for symmetry-generic frameworks (that is, for frameworks which are as generic as possible subject to the given symmetry constraints), the existence of a non-trivial symmetric infinitesimal motion also guarantees the existence of a non-trivial finite (i.e., continuous) symmetry-preserving motion of the framework [14]. To simplify the symmetry-forced rigidity analysis of a symmetric framework a symmetric analog of the rigidity matrix, called the orbit rigidity matrix, was recently established in [19]. In particular, this matrix was used in [6] to formulate combinatorial characterizations of symmetry-forced rigid symmetry-generic frameworks in terms of Henneberg-type construction moves on gain graphs (group-labeled graphs), for all rotational groups C n and for all dihedral groups C nv with odd n in the plane.
In contrast, for the more general question of how to analyze the rigidity properties of an incidentally symmetric framework, there has not been any major progress in the last few years. This paper proposes a systematic way to analyze this general case. The state of the art in this research area is as follows.
The most fundamental result concerning the rigidity of symmetric frameworks is that the rigidity matrix of a framework with non-trivial point group Γ can be transformed into a block-decomposed form so that each block corresponds to an irreducible representation of Γ. This goes back to an observation of Kangwai and Guest [7], and was proved rigorously in [14,12]. Note that the submatrix block which corresponds to the trivial irreducible representation of Γ describes the forced-symmetric rigidity properties of the framework [19]. Using this block-decomposition of the rigidity matrix, necessary conditions for a symmetric bar-joint framework to be isostatic (i.e., minimally infinitesimally rigid) in R d have been derived in [5,4].
In [4] the necessary conditions were conjectured to be sufficient for 2-dimensional symmetry-generic frameworks to be isostatic. This was confirmed for the groups C 2 , C 3 and C s in [16,17], but it remains open for the dihedral groups.
However, note that in order to obtain combinatorial characterizations of symmetry-generic infinitesimally rigid frameworks in the plane, these symmetrized Laman-type results are only of limited use: by the conditions derived in [4], a symmetric infinitesimally rigid framework usually does not contain an isostatic subframework on the same vertex set with the same symmetry. For example, it turns out that there does not exist an isostatic framework in the plane with point group C_2 or C_s where the group acts freely on the edges of the framework (see Figure 1) [4]. Moreover, there does not exist any isostatic framework in the plane with k-fold rotational symmetry for k > 3 [4].
Figure 1: Infinitesimally rigid symmetric frameworks in R^2 with respective point groups C_s and C_2 which do not contain a spanning isostatic subframework with the same symmetry.
In this paper, we establish several new results concerning the infinitesimal rigidity of ('incidentally') symmetric frameworks. First, for any Abelian point group Γ which acts freely on the vertices of a d-dimensional framework, we extend the concept of the orbit rigidity matrix described in [19] and show how to construct an 'anti-symmetric' orbit rigidity matrix for each of the irreducible representations ρ j of Γ (see Section 4). These 'anti-symmetric' orbit rigidity matrices are equivalent to their corresponding submatrix blocks in the block-decomposed rigidity matrix, but their entries can explicitly be derived in a transparent fashion.
For the reflection group C_s and for the rotational groups C_2 and C_3, we then use these orbit rigidity matrices in combination with Henneberg-type inductive construction moves on their corresponding gain graphs to establish combinatorial characterizations of symmetry-generic frameworks in R^2 which do not have a non-trivial ρ_j-symmetric infinitesimal motion. Taken together, these results lead to the desired combinatorial characterizations of infinitesimally rigid symmetry-generic frameworks for these groups (see Sections 5 and 6).
For the other cyclic groups C k , k > 3, we provide a number of necessary conditions for infinitesimal rigidity, and we also offer some conjectures.
Finally, in Section 7, we briefly discuss some further applications of our tools and methods and outline some directions for future developments.
Rigidity of bar-joint frameworks
For a finite graph G, we denote the vertex set of G by V(G) and the edge set of G by E(G). A bar-joint framework (or simply a framework) in R^d is a pair (G, p), where G is a simple graph and p : V(G) → R^d is a map. For v ∈ V(G), we say that p(v) is the joint of (G, p) corresponding to v, and for e = {u, v} ∈ E(G), we say that the line segment between p(u) and p(v) is the bar of (G, p) corresponding to e. For simplicity, we shall denote p(v) by p_v for v ∈ V(G).
An infinitesimal motion of a framework (G, p) in R^d is a function m : V(G) → R^d such that
⟨p_u − p_v, m(u) − m(v)⟩ = 0 for all {u, v} ∈ E(G), (1)
where ⟨·, ·⟩ denotes the standard inner product on R^d. An infinitesimal motion m of (G, p) is a trivial infinitesimal motion if there exists a skew-symmetric matrix S and a vector t such that m(v) = Sp(v) + t for all v ∈ V(G). Otherwise m is called an infinitesimal flex (or non-trivial infinitesimal motion) of (G, p).
(G, p) is infinitesimally rigid if every infinitesimal motion of (G, p) is trivial. Otherwise (G, p) is said to be infinitesimally flexible [23].
These definitions are motivated by the fact that if (G, p) is infinitesimally rigid, then (G, p) is rigid in the sense that every continuous deformation of (G, p) which preserves the edge lengths p i − p j for all {i, j} ∈ E(G), must preserve the distances p s − p t for all pairs of vertices s and t of G.
A key tool to study the infinitesimal rigidity properties of a d-dimensional framework (G, p) is the rigidity matrix of (G, p). For a vector x ∈ R^d, we denote the k-th component of x by (x)_k. The rigidity matrix R(G, p) is the |E(G)| × d|V(G)| matrix associated with the system of linear equations (1) with respect to m, in which each row is associated with an edge and each set of d consecutive columns is associated with a vertex: the row corresponding to the edge {u, v} has the entries (p_u − p_v)^T in the columns associated with u, the entries (p_v − p_u)^T in the columns associated with v, and 0 elsewhere [23]. Throughout the paper, for a finite set S and a finite dimensional vector space W over some field, the set of all functions f : S → W is denoted by W^S or by ⊕_{s∈S} W (taking |S| copies of W). Then R(G, p) is regarded as a linear map from (R^d)^{V(G)} to R^{E(G)}, and m is an infinitesimal motion if and only if R(G, p)m = 0, which means that the kernel of the rigidity matrix R(G, p) is the space of all infinitesimal motions of (G, p). It is well known that a framework (G, p) in R^d with n = |V(G)| is infinitesimally rigid if and only if either the rank of its associated rigidity matrix R(G, p) is precisely dn − d(d+1)/2, or G is a complete graph K_n and the points p_i, i = 1, . . . , n, are affinely independent [2].
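As an illustrative sketch (not from the paper), the rigidity matrix and the rank condition can be assembled and checked numerically for small frameworks; the names and coordinates below are ours, and the rank test assumes the non-complete, non-degenerate case.

```python
import numpy as np

# Build R(G, p): row per edge {u, v}, with (p_u - p_v) in u's d columns
# and (p_v - p_u) in v's d columns.
def rigidity_matrix(p, edges):
    n, d = p.shape
    R = np.zeros((len(edges), d * n))
    for row, (u, v) in enumerate(edges):
        R[row, d * u:d * u + d] = p[u] - p[v]
        R[row, d * v:d * v + d] = p[v] - p[u]
    return R

def is_inf_rigid(p, edges):
    # Rank criterion: rank R = d*n - d*(d+1)/2 (trivial motions have that corank).
    n, d = p.shape
    return np.linalg.matrix_rank(rigidity_matrix(p, edges)) == d * n - d * (d + 1) // 2

# A generic triangle in the plane is infinitesimally rigid; remove a bar and it flexes.
p = np.array([[0.0, 0.0], [1.1, 0.2], [0.3, 0.9]])
print(is_inf_rigid(p, [(0, 1), (1, 2), (0, 2)]))  # True
print(is_inf_rigid(p, [(0, 1), (1, 2)]))          # False
```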
A self-stress of a framework (G, p) is a function ω : E(G) → R such that at each joint p_u of (G, p) we have
Σ_{v : {u,v} ∈ E(G)} ω_uv (p_u − p_v) = 0,
where ω_uv denotes ω({u, v}) for all {u, v} ∈ E(G). Note that ω ∈ R^{E(G)} is a self-stress if and only if R(G, p)^T ω = 0. In structural engineering, self-stresses are also called equilibrium stresses, as they record tensions and compressions in the bars balancing at each vertex.
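To illustrate, a generic realization of K_4 in the plane carries a one-dimensional space of self-stresses: there are |E| = 6 rows but the rigidity matrix has rank 2·4 − 3 = 5, leaving one dependency among the rows. The coordinates below are illustrative, not from the paper.

```python
import numpy as np

# Self-stresses are the left null space of R(G, p): vectors w with R^T w = 0.
p = np.array([[0.0, 0.0], [1.2, 0.1], [0.4, 1.0], [1.0, 1.3]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # K4

R = np.zeros((len(edges), 8))
for row, (u, v) in enumerate(edges):
    R[row, 2 * u:2 * u + 2] = p[u] - p[v]
    R[row, 2 * v:2 * v + 2] = p[v] - p[u]

# Dimension of the self-stress space = #rows - rank.
print(len(edges) - np.linalg.matrix_rank(R))  # 1
```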
If (G, p) has a non-zero self-stress, then (G, p) is said to be dependent (since in this case there exists a linear dependency among the row vectors of R(G, p)). Otherwise, (G, p) is said to be independent. A framework which is both independent and infinitesimally rigid is called isostatic [23].
A d-dimensional framework (G, p) with n vertices is called generic if the coordinates of p are algebraically independent over Q, i.e., if there does not exist a polynomial h(x_1, . . . , x_dn) with rational coefficients such that h((p_1)_1, . . . , (p_n)_d) = 0. Note that the set of all generic realizations of G is a dense, but not an open, subset of R^dn.
We say that (G, p) is regular if the rigidity matrix R(G, p) has maximal rank among all realizations of G. It is easy to see that the set of all regular realizations of G is a dense and open subset of R dn which contains the set of all generic realizations of G [2,23].
It is well known that for regular frameworks (and hence also for generic frameworks), infinitesimal rigidity is purely combinatorial, and hence a property of the underlying graph. Thus, we say that a graph G is d-rigid (d-independent, d-isostatic) if d-dimensional regular realizations of G are infinitesimally rigid (independent, isostatic).
Rigidity of symmetric bar-joint frameworks
In this subsection, we review some recent approaches for analyzing the rigidity of symmetric frameworks. First, we introduce gain graphs, which turn out to be useful tools for describing the underlying combinatorics of symmetric frameworks. We then provide precise definitions of symmetric graphs and symmetric frameworks, and then explain the block-diagonalization of rigidity matrices.
Gain graphs
Let H be a directed graph which may contain multiple edges and loops, and let Γ be a group. A Γ-gain graph (or Γ-labeled graph) is a pair (H, ψ) in which each edge is associated with an element of Γ via a gain function ψ : E(H) → Γ. See Figure 2(b) for an example. A gain graph is a directed graph, but its orientation is used only for the reference of the gains. That is, we can change the orientation of each edge as we like by imposing the property on ψ that if an edge has gain g in one direction, then it has gain g^{-1} in the other direction.
Symmetric graphs
Let G be a finite simple graph. An automorphism of G is a permutation π : V(G) → V(G) such that {u, v} ∈ E(G) if and only if {π(u), π(v)} ∈ E(G). The set of all automorphisms of G forms a subgroup of the symmetric group on V(G), known as the automorphism group Aut(G) of G. An action of a group Γ on G is a group homomorphism θ : Γ → Aut(G). An action θ is called free on V(G) (resp., E(G)) if θ(γ)(v) ≠ v for all v ∈ V(G) (resp., θ(γ)(e) ≠ e for all e ∈ E(G)) and all non-identity γ ∈ Γ. We say that a graph G is Γ-symmetric (with respect to θ) if Γ acts on G by θ. Throughout the paper, we only consider the case when θ is free on V(G), and we omit to specify the action θ if it is clear from the context. We then denote θ(γ)(v) by γv.
For a Γ-symmetric graph G, the quotient graph G/Γ is a multigraph whose vertex set is the set V (G)/Γ of vertex orbits and whose edge set is the set E(G)/Γ of edge orbits. An edge orbit may be represented by a loop in G/Γ.
Several distinct graphs may have the same quotient graph. However, if we assume that the underlying action is free on V (G), then a gain labeling makes the relation one-to-one. To see this, we arbitrarily choose a vertex v as a representative vertex from each vertex orbit. Then each orbit is of the form Γv = {gv | g ∈ Γ}. If the action is free, an edge orbit connecting Γu and Γv in G/Γ can be written as {{gu, ghv} | g ∈ Γ} for a unique h ∈ Γ. We then orient the edge orbit from Γu to Γv in G/Γ and assign to it the gain h. In this way, we obtain the quotient Γ-gain graph, denoted by (G/Γ, ψ). (G/Γ, ψ) is unique up to choices of representative vertices. Figure 2: A C s -symmetric graph (a) and its quotient gain graph (b), where C s = {id, s}.
For simplicity, we omit the direction and the label of every edge with gain id.
Conversely, a Γ-gain graph (H, ψ) gives rise to a Γ-symmetric graph, called the covering graph, whose vertex set is {gv | g ∈ Γ, v ∈ V(H)} and whose edge set is {{gu, gψ(e)v} | g ∈ Γ, e = (u, v) ∈ E(H)}. Clearly, Γ acts freely on the covering graph with the action θ defined by θ(g) : v → gv for g ∈ Γ, under which the quotient graph comes back to (H, ψ). In this way, there is a one-to-one correspondence between Γ-gain graphs and Γ-symmetric graphs with free actions (up to the choices of representative vertices).
The map c : G → H defined by c(gv) = v and c({gu, gψ(e)v}) = (u, v) is called a covering map. In order to avoid confusion, throughout the paper, a vertex or an edge in a quotient gain graph H is denoted with the mark tilde, e.g.,ṽ orẽ. Then the fiber c −1 (ṽ) of a vertexṽ ∈ V (H) and the fiber c −1 (ẽ) of an edgeẽ ∈ E(H) coincide with a vertex orbit and an edge orbit, respectively, in G.
Symmetric bar-joint frameworks
Given a finite simple graph G and a map p : V(G) → R^d, a symmetry operation of the framework (G, p) in R^d is an isometry x of R^d such that for some α_x ∈ Aut(G), we have x(p_v) = p_{α_x(v)} for all v ∈ V(G). The set of all symmetry operations of a framework (G, p) forms a group under composition, called the point group of (G, p). Since translating a framework does not change its rigidity properties, we may assume wlog that the point group of a framework is always a symmetry group, i.e., a subgroup of the orthogonal group O(R^d).
Given a symmetry group Γ and a graph G, we let R_(G,Γ) denote the set of all d-dimensional realizations of G whose point group is either equal to Γ or contains Γ as a subgroup [15,14,16,17]. In other words, the set R_(G,Γ) consists of all realizations (G, p) of G for which there exists an action θ : Γ → Aut(G) so that
x(p_v) = p_{θ(x)(v)} for all v ∈ V(G) and all x ∈ Γ. (2)
A framework (G, p) ∈ R (G,Γ) satisfying the equations in (2) for θ : Γ → Aut(G) is said to be of type θ, and the set of all realizations in R (G,Γ) which are of type θ is denoted by R (G,Γ,θ) (see again [15,14,16] and Figure 3). It is shown in [15] that (G, p) is of a unique type θ and θ is necessarily also a homomorphism, when p is injective. For simplicity, we will assume throughout this paper that a framework (G, p) ∈ R (G,Γ) has no joint that is 'fixed' by a non-trivial symmetry operation in Γ (i.e., (G, p) has no joint p i with x(p i ) = p i for some x ∈ Γ, x = id).
Let Γ be an abstract group, and let G be a Γ-symmetric graph with respect to a free action θ : Γ → Aut(G). Suppose also that Γ acts on R^d via a homomorphism τ : Γ → O(R^d). Then we say that a framework (G, p) is Γ-symmetric (with respect to θ and τ) if τ(γ)(p_v) = p_{θ(γ)(v)} for all γ ∈ Γ and all v ∈ V(G).
Let H be the quotient graph of G with the covering map c : G → H. It is convenient to fix a representative vertex v of each vertex orbit Γv = {gv : g ∈ Γ}, and to define the quotient of p to be the map p̄ : V(H) → R^d, so that there is a one-to-one correspondence between p and p̄ given by p(v) = p̄(c(v)) for each representative vertex v.
For a discrete point group Γ, let Q Γ be the field generated by Q and the entries of the matrices in Γ. We say that p (orp) is Γ-generic if the set of coordinates of the image of p is algebraically independent over Q Γ . Note that this definition does not depend on the choice of representative vertices. A Γ-symmetric framework (G, p) is called Γ-generic if p is Γ-generic.
Further, we say that (G, p) is Γ-regular if the rigidity matrix R(G, p) has maximal rank among all Γ-symmetric realizations of G (see also [15]). If a framework is Γ-generic, then it is clearly also Γ-regular.
Block-diagonalization of the rigidity matrix
It is shown in [7,14] that the rigidity matrix of a symmetric framework can be transformed into a block-diagonalized form using techniques from group representation theory. In the following, we will briefly present the details of this fundamental result in order to clarify the combinatorics underlying our further analyses in the subsequent sections.
For an m × n matrix A and a p × q matrix B, A ⊗ B denotes the Kronecker product of A and B. Two well-known properties of this algebraic operation are the mixed-product rule (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) (whenever the products are defined) and (A ⊗ B)^T = A^T ⊗ B^T. Given two matrix representations ρ_1 and ρ_2 of a group Γ, the tensor product ρ_1 ⊗ ρ_2 is the representation defined by (ρ_1 ⊗ ρ_2)(γ) = ρ_1(γ) ⊗ ρ_2(γ) for γ ∈ Γ. A Γ-linear map from ρ_1 to ρ_2 is a linear map T satisfying T ρ_1(γ) = ρ_2(γ) T for all γ ∈ Γ; the set of all Γ-linear maps of ρ_1 and ρ_2 forms a linear space which is denoted by Hom_Γ(ρ_1, ρ_2). Let (G, p) be a Γ-symmetric framework with respect to a free action θ : Γ → Aut(G) and a homomorphism τ : Γ → O(R^d). Let P_V : Γ → GL(R^{V(G)}) be the linear representation of Γ consisting of the permutation matrices of the permutations induced by θ over V(G), i.e., [P_V(γ)]_{u,v} = δ_{u,θ(γ)(v)}, where δ denotes the Kronecker delta symbol. Similarly, let P_E : Γ → GL(R^{E(G)}) be the linear representation of Γ consisting of permutation matrices of permutations induced by θ over E(G).
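As a numeric spot-check (illustrative, not part of the paper), the mixed-product rule for the Kronecker product, which underlies the computation in the proof of Theorem 3.1, can be verified on random matrices:

```python
import numpy as np

# Check (A ⊗ B)(C ⊗ D) = (A C) ⊗ (B D) on random matrices of compatible sizes.
rng = np.random.default_rng(0)
A, C = rng.random((2, 3)), rng.random((3, 2))
B, D = rng.random((2, 2)), rng.random((2, 3))

lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))  # True
```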
Let G be the directed graph obtained from G by assigning an orientation to each edge so that it preserves the action θ (i.e., an edge {u, v} is directed from u to v if and only if {γu, γv} is directed from γu to γv). The incidence matrix I_G of G is the |E(G)| × |V(G)| matrix in which the row of e = (i, j) ∈ E(G) has the entries −1 and 1 in the columns of i and j, respectively, and all other entries are zero.
It is important to notice that since θ is an action on G we have I_G ∈ Hom_Γ(P_V, P_E). To see this, for each e ∈ E(G), let I_e be the |E(G)| × |V(G)| matrix obtained from I_G by changing each entry to zero except those in the row of e. Then I_G = Σ_{e ∈ E(G)} I_e, and we can easily verify that I_e P_V(γ) = P_E(γ) I_{γ^{-1}e} for each edge e and each γ ∈ Γ; summing over all e gives I_G P_V(γ) = P_E(γ) I_G. This relation can naturally be extended to rigidity matrices, as shown in [14,12]. Here we give a short proof.
Theorem 3.1. Let Γ be a finite group with τ : Γ → O(R d ), G be a Γ-symmetric graph with a free action θ (on V (G)) and (G, p) be a Γ-symmetric framework with respect to θ and τ . Then R(G, p) ∈ Hom Γ (τ ⊗ P V , P E ).
Proof. Let R_e be the |E(G)| × d|V(G)| matrix obtained from R(G, p) by changing each entry to zero except those in the row of e. As above, we consider the directed graph G, and for each e = (u, v), we let R_e = (p_v − p_u)^T ⊗ I_e, where I_e is defined as above; then R(G, p) = Σ_{e ∈ E(G)} R_e. For each e = (u, v) ∈ E(G) and γ ∈ Γ, we now have
R_e (τ(γ) ⊗ P_V(γ)) = ((p_v − p_u)^T τ(γ)) ⊗ (I_e P_V(γ)) = (p_{γ^{-1}v} − p_{γ^{-1}u})^T ⊗ (P_E(γ) I_{γ^{-1}e}) = P_E(γ) R_{γ^{-1}e},
where for the second equation we used the fact that (G, p) is Γ-symmetric and hence (p_v − p_u)^T τ(γ) = (τ(γ)^{-1}(p_v − p_u))^T = (p_{γ^{-1}v} − p_{γ^{-1}u})^T, since τ(γ) is orthogonal, together with the identity I_e P_V(γ) = P_E(γ) I_{γ^{-1}e}. Summing over all e yields R(G, p)(τ(γ) ⊗ P_V(γ)) = P_E(γ) R(G, p), as required.

Since R(G, p) ∈ Hom_Γ(τ ⊗ P_V, P_E), there are non-singular matrices S and T such that T R(G, p) S is block-diagonalized, by Schur's lemma. If ρ_0, . . . , ρ_r are the irreducible representations of Γ, then for an appropriate choice of symmetry-adapted coordinate systems, the rigidity matrix takes on the block form
T R(G, p) S = R_0(G, p) ⊕ R_1(G, p) ⊕ · · · ⊕ R_r(G, p),
where the submatrix block R_i(G, p) corresponds to the irreducible representation ρ_i of Γ. The kernel of R_i(G, p) consists of all infinitesimal motions of (G, p) which are symmetric with respect to ρ_i (see [14] for details).
Fully-symmetric motions and the orbit rigidity matrix
Suppose that ρ_0 is the trivial irreducible representation of Γ, i.e., ρ_0(γ) = 1 for all γ ∈ Γ. The kernel of R_0(G, p) consists of all infinitesimal motions of (G, p) which exhibit the full symmetry of Γ (see also Fig. 4). Specifically, an infinitesimal motion m : V(G) → R^d is fully Γ-symmetric if m(γv) = τ(γ) m(v) for all γ ∈ Γ and all v ∈ V(G). We say that (G, p) is symmetry-forced (infinitesimally) rigid if every fully Γ-symmetric infinitesimal motion is trivial. To simplify the detection of fully Γ-symmetric motions of (G, p), the orbit rigidity matrix of (G, p) was introduced in [19]. The orbit rigidity matrix is equivalent to R_0(G, p), and has successfully been used for characterizing symmetry-forced rigid frameworks in [6,20,11]. In the next section, we will extend this concept to each irreducible representation of Γ.
'Anti-symmetric' orbit rigidity matrices for bar-joint frameworks with Abelian point group symmetry
Let (G, p) be a Γ-symmetric framework in R d with respect to θ : Γ → Aut(G) and τ : Γ → O(R d ). In general, the entries of each block R j (G, p) are not as simple as those of R 0 (G, p). However, if we restrict our attention to the case where Γ is an Abelian group, then we can specifically describe an 'anti-symmetric' orbit rigidity matrix for each of the irreducible representations of Γ. For simplicity, we will first consider the case where Γ is cyclic (Section 4.1). The argument is then easily extended to general Abelian groups in Section 4.2. Throughout these two subsections we assume, again for the sake of simplicity, that θ acts freely on E(G). In Section 4.3, we will discuss the case when θ may not be free on E(G). In Section 4.4, we give several examples.
Case of cyclic groups
Throughout this subsection, Γ is assumed to be a cyclic group Z/kZ = {0, 1, 2, . . . , k − 1} of order k, and θ acts freely on E(G). It is an elementary fact from group representation theory that Γ = Z/kZ has k non-equivalent irreducible representations ρ_0, ρ_1, . . . , ρ_{k−1}, and that each of these representations is one-dimensional. Specifically, for j = 0, 1, . . . , k − 1, we have ρ_j(i) = ω^{ij} for i ∈ Z/kZ, where ω denotes e^{2π√−1/k}, a k-th root of unity. To cope with such representations, we need to extend the underlying field to C if k ≥ 3, and regard R(G, p) as a linear function from (C^d)^{V(G)} to C^{E(G)}.
Decompositions of the regular representation of Γ
Let ρ reg : Γ → GL(R k ) be the regular representation of Γ, that is, regarding Γ as a subgroup of the symmetric group S k , ρ reg (γ) = [δ i,γ+j ] i,j for any γ ∈ Γ. Recall that ρ reg is equivalent to k−1 j=0 ρ j . For j = 0, 1, . . . , k − 1, let b j = (1,ω j ,ω 2j , . . . ,ω (k−1)j ) be a vector in C k , whereω is the complex conjugate of ω. Then we have This says that b j is a common eigenvector of {ρ reg (i) | i = 0, 1, . . . , k − 1}, and the one-dimensional subspace I j spanned by b j is an invariant subspace corresponding to ρ j . Hence, by decomposing C k into k−1 j=0 I j , ρ reg is diagonalized to k−1 j=0 ρ j . Next, consider τ ⊗ ρ reg . Since the character of the Kronecker product of two representations is written by the coordinate-wise product of the corresponding two characters, we see that the multiplicity of ρ j in τ ⊗ ρ reg is equal to Trace(τ (0)), that is, equal to d.
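This eigenvector computation is easy to verify numerically. The following sketch (k = 6 is an arbitrary choice, and NumPy is an assumed dependency, not something used in the paper) constructs the permutation matrices of the regular representation of Z/kZ and checks that each b_j is a common eigenvector, with eigenvalue ω^{jγ} under the shift convention chosen here.

```python
import numpy as np

k = 6
omega = np.exp(2j * np.pi / k)  # a primitive k-th root of unity

def rho_reg(gamma):
    """Permutation matrix of the shift i -> i + gamma on Z/kZ."""
    P = np.zeros((k, k))
    for i in range(k):
        P[(i + gamma) % k, i] = 1.0
    return P

for j in range(k):
    # b_j = (1, conj(omega)^j, conj(omega)^(2j), ..., conj(omega)^((k-1)j))
    b = np.conj(omega) ** (j * np.arange(k))
    for gamma in range(k):
        # b_j is a common eigenvector; the eigenvalue is the root of unity
        # omega^(j*gamma), matching rho_j(gamma) up to the shift convention
        assert np.allclose(rho_reg(gamma) @ b, omega ** (j * gamma) * b)
```

The same loop also confirms that the spans of b_0, . . . , b_{k−1} decompose C^k into the k one-dimensional invariant subspaces I_j.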
. Then observe that for each i ∈ Γ, and hence J j is a common eigenspace of {τ ⊗ ρ reg (i) : i = 0, . . . , k − 1}, and J j is an invariant subspace corresponding to ρ j . C dk is thus decomposed into invariant subspaces Since our goal is to characterize the infinitesimal rigidity of symmetric frameworks in terms of their quotient graphs, let us introduce a quotient Γ-gain graph (H, ψ) of G with a covering map c : G → H.
Observe, then, that since Γ acts freely on V (G), P V is the direct sum of |V (H)| copies of ρ reg , each of which represents an action of Γ over a fiber c −1 (v). Thus, Similarly, if we assume that Γ acts freely on E(G), then P E = ẽ∈E(H) ρ reg , and P E is equivalent to k−1 j=0 |E(H)|ρ j . (We will treat the case where Γ does not act freely on the edge set of G in Section 4.3.) Observe also that τ The decompositions of P E and τ ⊗ P V give us further information about R j (G, p). Since Γ acts freely on G, each vertex orbit is associated with a dk-dimensional subspace of (C d ) V (G) , while each edge orbit is associated with a k-dimensional subspace of C E(G) . In other words, C V (G) and C E(G) can be written as ṽ∈V (H) C dk and ẽ∈E(H) C k in terms of the quotient graph H. Since Recall that m : This system of linear equations for m is redundant if m is restricted to be ρ j -symmetric, and we now eliminate such redundancy as follows.
Recall that each edge orbit is written as a set c −1 (ẽ) = {{γu, γψẽv} : γ ∈ Γ} of edges of G, where ψẽ is the label assigned toẽ in (H, ψ). So (8) can be written as for eachẽ ∈ E(H). By the symmetry of p and m with respect to Γ, these k equations can be simplified to one equation for each edge orbit. Let us define the jointp(w) and the motionm(w) of a vertexw ∈ V (H) to be the joint p(v) and the motion m(v) of the representative vertex v of the vertex orbit c −1 (w). Then the analysis can be done on the quotient graph (H, ψ). More formally, for a Γ-gain graph (H, ψ) andp : We define the ρ j -orbit rigidity matrix, denoted by O j (H, ψ,p), as the |E(H)|×d|V (H)| matrix associated with the system (11), where each vertex has the corresponding d columns, each edge has the corresponding row, and the row corresponding toẽ = (ũ,ṽ) ∈ E(H) is given byũ where each vector is assumed to be transposed, and ifẽ is a loop atṽ the entries ofṽ become the sum of the two entries given above. Due to the one-to-one correspondence between J mo j and (C d ) V (H) , we conclude the following.
Case of non-cyclic groups
It is well known that any finite Abelian group Γ is isomorphic to Z/k 1 Z × · · · × Z/k l Z for some positive integers k 1 , . . . , k l . Thus, we may denote each element of Γ by i = (i 1 , . . . , i l ), where 0 ≤ i 1 ≤ k 1 − 1, . . . , 0 ≤ i l ≤ k l − 1, and regard Γ as an additive group.
Let k = |Γ| = k 1 k 2 . . . k l . Γ has k non-equivalent irreducible representations which are denoted by {ρ j : j ∈ Γ}. Specifically, for each j ∈ Γ, ρ j is defined by where ω t = e^{2π√−1/k t }, t = 1, . . . , l. We now apply the analysis for cyclic groups by simply replacing each index with a tuple of indices. By Theorem 3.1, R(G, p) is decomposed into k blocks, and the block corresponding to ρ j is denoted by R j (G, p).
For each j = (j 1 , . . . , j l ) ∈ Γ, let b j be the k-dimensional vector such that each coordinate is indexed by a tuple i ∈ Γ and its i-th coordinate is equal toω i 1 j 1 1 · . . . ·ω i l j l l . Then, for the regular representation ρ reg of Γ, we have and hence b j is a common eigenvector of {ρ reg (i) | i ∈ Γ}. Hence, the one-dimensional subspace I j spanned by b j is an invariant subspace of C k corresponding to ρ j . A similar analysis determines the common eigenspace J j of {τ ⊗ ρ reg (i) | i ∈ Γ} for the eigenvalue ρ j (i) as a counterpart to the one defined in (6).
Following the analysis given in the previous subsection, we see that R j (G, p) is a linear mapping from J mo
Group actions which are not free on the edge set
In the previous sections, we restricted ourselves to the situation, where the group Γ acts freely on both the vertex set and the edge set of the graph G. Let us now also consider the case, where Γ acts freely on the vertex set, but not on the edge set of G. In other words, there exists an element γ ∈ Γ with θ(γ)(u) = v and θ(γ)(v) = u for some {u, v} ∈ E(G).
Since Γ still acts freely on V (G), it follows that if Γ does not act freely on c −1 ((ũ,ṽ)), then the edge orbit of (ũ,ṽ) is of size |Γ|/2, that is, Γ/(Z/2Z) acts freely on c −1 ((ũ,ṽ)). Now, let (G, p) be a Γ-symmetric framework, where Γ is a finite Abelian group of order k, and suppose there are n edge orbits of size k and m edge orbits of size k/2. Let g 1 , . . . , g t be the non-trivial elements of Γ which fix an edge of G, and let m i be the number of edge orbits whose representatives are fixed by g i . (Note that if an edge e of G is fixed by an element of Γ, then so is every other element in the orbit of e, because Γ is Abelian.) So we have m = Σ t i=1 m i , and the character of P E is the vector χ(P E ) which has nk + mk/2 in the first entry corresponding to id ∈ Γ, m i k/2 in the entry corresponding to g i , i = 1, . . . , t, and 0 elsewhere. Now, let ρ j be an irreducible representation of Γ. Then, since each g i must be an involution, ρ j (g i ) is 1 or −1. Without loss of generality assume ρ j (g i ) = 1 for 1 ≤ i ≤ s and ρ j (g i ) = −1 for s + 1 ≤ i ≤ t. It is a well-known result from group representation theory that the dimension of the invariant subspace I j of C |E(G)| is given by (1/k)(χ(P E ) · χ(ρ j )). Thus, it follows that the submatrix block R j (G, p) has n + Σ s i=1 m i rows. Although the size of R j (G, p) and that of O j (H, ψ,p) are different, we can still use O j (H, ψ,p) to compute the rank of R j (G, p), as Proposition 4.2 still holds. To see this, observe that if g i fixes c −1 (ẽ) for someẽ ∈ E(H), thenẽ is a loop with ψ(ẽ) = g i . To see the necessity, let µ A be the minimal polynomial of A. Since A is diagonalizable (as Γ is Abelian) and µ A divides (t − 1)(t − ω), an elementary theorem of linear algebra implies that the eigenvalues of A are only 1 and ω. Since Γ is Abelian and A ≠ I d , we have ω = −1. This also implies A 2 = I d , and hence ψẽ 2 = id.
It follows from Proposition 4.3 and the remarks above that the number of rows of R j (G, p) equals the number of non-zero rows of O j (H, ψ,p). Moreover, these two matrices clearly have the same number of columns, and by the same reasoning as in the previous sections, Propositions 4.1 and 4.2 still hold.
Reflection symmetry C s
The symmetry group C s has two non-equivalent real irreducible representations, each of which is of dimension 1. In the Mulliken notation, they are denoted by A′ and A′′ (see Table 1).
It follows that the block-decomposed rigidity matrix R(G, p) of a C s -symmetric framework (G, p) consists of only two blocks: the submatrix block R 0 (G, p) corresponding to the trivial representation ρ 0 , and the submatrix block R 1 (G, p) corresponding to the representation ρ 1 . The block R 0 (G, p) is equivalent to the (fully symmetric) orbit rigidity matrix (see also [19]). The block R 1 (G, p) describes the ρ 1 -symmetric (or simply 'anti-symmetric') infinitesimal rigidity properties of (G, p), where an infinitesimal motion m of (G, p) is anti-symmetric if The corresponding quotient C s -gain graph (H, ψ) is depicted in Fig. 6, and the anti-symmetric orbit rigidity matrix O 1 (H, ψ,p) of (G, p) is the following 6 × 6 matrix: where an edge (u, v) with label g is denoted by (u, v; g).
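The splitting behind this block structure can be checked numerically: for a C_s-symmetric framework, the fully symmetric and anti-symmetric parts of any infinitesimal motion are again infinitesimal motions. The sketch below uses an illustrative K_4 framework with mirror-symmetric coordinates (an assumed example, not the framework of the figures) and NumPy as an assumed dependency.

```python
import numpy as np

tau = np.array([[-1.0, 0.0], [0.0, 1.0]])  # reflection in the y-axis
sigma = {0: 1, 1: 0, 2: 3, 3: 2}           # action of the reflection on vertices

# mirror-symmetric coordinates: p[sigma(v)] = tau @ p[v]
p = np.array([[1.0, 0.3], [-1.0, 0.3], [0.8, -1.1], [-0.8, -1.1]])
edges = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]  # a sigma-invariant K4

def rigidity_matrix(p, edges):
    R = np.zeros((len(edges), 2 * len(p)))
    for r, (u, v) in enumerate(edges):
        d = p[u] - p[v]
        R[r, 2*u:2*u+2] = d
        R[r, 2*v:2*v+2] = -d
    return R

R = rigidity_matrix(p, edges)
_, s, Vt = np.linalg.svd(R)
null = Vt[np.sum(s > 1e-9):]  # an orthonormal basis of the kernel of R

def transform(m):
    """m'(v) = tau @ m(sigma(v)); maps infinitesimal motions to motions."""
    out = np.zeros_like(m)
    for v in range(4):
        out[2*v:2*v+2] = tau @ m[2*sigma[v]:2*sigma[v]+2]
    return out

for m in null:
    m_sym = 0.5 * (m + transform(m))    # fully symmetric part
    m_anti = 0.5 * (m - transform(m))   # anti-symmetric part
    assert np.allclose(R @ m_sym, 0, atol=1e-8)
    assert np.allclose(R @ m_anti, 0, atol=1e-8)
```

Since the two projections of any kernel vector stay in the kernel, the kernel splits into the fully symmetric and anti-symmetric subspaces, mirroring the two-block decomposition of R(G, p).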
Recall from Proposition 4.3 that each loop in (H, ψ) gives rise to a zero vector in O 1 (H, ψ,p), and hence O 1 (H, ψ,p) has only three non-trivial rows. Geometrically, this is also obvious, as any loop in (H, ψ) clearly does not constitute any constraint if we restrict ourselves to anti-symmetric infinitesimal motions (see again Fig. 5(a)).
Rotation symmetry C 3
Over the complex numbers, the symmetry group C 3 has three non-equivalent one-dimensional irreducible representations. In the Mulliken notation, they are denoted by A, E (1) and E (2) (see Table 2). It follows that the block-decomposed rigidity matrix R(G, p) of a C 3 -symmetric framework (G, p) consists of three blocks: the submatrix block R 0 (G, p) corresponding to the trivial representation ρ 0 , the submatrix block R 1 (G, p) corresponding to ρ 1 , and the submatrix block R 2 (G, p) corresponding to ρ 2 . By Proposition 4.1, each block R j (G, p) is equivalent to its corresponding orbit rigidity matrix O j (H, ψ,p).
As an example, consider the C 3 -symmetric framework (G, p) shown in Figure 7, where θ : C 3 → Aut(G) is the action defined by θ(C 3 ) = (1 2 3)(4 5 6), and τ : . Note that for this example, each of the three orbit rigidity matrices is a 3 × 4 matrix. The orbit rigidity matrix O 1 (H, ψ,p) is the 3 × 4 matrix where the first row corresponds to the edge (2, 5; id), the second row to the loop (2, 2; C 3 ), and the third row to the loop (5, 5; C 3 ). The orbit rigidity matrix O 2 (H, ψ,p) is the 3 × 4 matrix where the first row corresponds to the edge (2, 5; id), the second row to the loop (2, 2; C 3 ), and the third row to the loop (5, 5; C 3 ).
Dihedral symmetry C 2v
Finally, we consider the dihedral group C 2v = {id, C 2 , s h , s v } of order four which is the only non-cyclic Abelian point group in the plane. In the following, we think of C 2v as the additive group Z/2Z × Z/2Z, where id = (0, 0), C 2 = (0, 1), s h = (1, 0), and s v = (1, 1). This group has four non-equivalent irreducible linear representations each of which is real and one-dimensional. In the Mulliken notation, these representations are denoted by A 1 , A 2 , B 1 , and B 2 (see Table 3). Thus, for the dihedral group C 2v , the block-decomposed rigidity matrix R(G, p) consists of four blocks, each of which corresponds to one of the four irreducible representations of C 2v . The submatrix block corresponding to ρ 0 is of course again equivalent to the (fully symmetric) orbit rigidity matrix. In the following, we give an example of a B 1 -symmetric orbit rigidity matrix O (0,1) (H, ψ,p) which, by Proposition 4.2, is equivalent to its corresponding submatrix block R (0,1) (G, p).
Consider the C 2v -symmetric framework (G, p) shown in Figure 8. The other orbit rigidity matrices O j (H, ψ,p) can be obtained analogously. Note that the framework in Figure 8(a) has a non-trivial fully symmetric infinitesimal motion which even extends to a continuous C 2v -preserving motion [19,6]. (In the engineering literature, this motion is known as Bottema's mechanism.) It was shown in [6] that this framework is falsely predicted to be forced-symmetric rigid by the matroidal counts for the fully symmetric orbit rigidity matrix. Thus, the problem of finding combinatorial characterizations for forced-symmetric rigidity (and hence also for incidentally symmetric rigidity) of C 2v -generic frameworks (or C 2nv -generic frameworks, n ≥ 1) remains open.
Gain-sparsity and constructive characterizations
We now turn our attention to combinatorial characterizations of infinitesimally rigid symmetric frameworks in the plane. In this section we first present some preliminary facts concerning gain graphs and matroids on gain graphs which will be used in the next section to derive the desired combinatorial characterizations.
We say that an edge subset F ⊆ E(H) is balanced if all cycles in F are balanced; otherwise it is called unbalanced. The following is a slight generalization of the concept proposed in [6]. Similarly, an edge set E is called (k, ℓ, m)-gain-sparse if it induces a (k, ℓ, m)-gain-sparse graph.
Let I k,ℓ,m be the family of (k, ℓ, m)-gain-sparse edge sets in (H, ψ). As noted in [6], I k,ℓ,m forms the family of independent sets of a matroid on E(H) for certain triples (k, ℓ, m), which we denote by M k,ℓ,m (H, ψ), or simply by M k,ℓ,m . Let us take a closer look at this fact.
If (k, ℓ, m) = (1, 1, 0), then M 1,1,0 is known as the frame matroid (or bias matroid) of (H, ψ), which is extensively studied in matroid theory (see, e.g., [24]). If (k, ℓ, m) = (k, k + ℓ′, m + ℓ′) for some 0 ≤ m ≤ k and ℓ′ ≥ 0, then M k,k+ℓ′,m+ℓ′ is ℓ′ times Dilworth truncations of M k,k,m , and it forms a matroid. In particular, for k = 2 and ℓ = 3, M 2,3,m implicitly or explicitly appeared in the study of symmetry-forced rigidity. The generic symmetry-forced rigidity of C s -symmetric frameworks or C k -symmetric frameworks is characterized by the (2, 3, 1)-gain-sparsity of the underlying quotient gain graphs [9,10,11,22,6]. We shall extend this result in Section 6. For infinite periodic graphs, it was proved by Ross that the (2, 3, 2)-gain-sparsity of Z 2 -gain graphs characterizes the symmetry-forced rigidity of periodic frameworks on a fixed lattice [13].
For other triples (k, ℓ, m), very few properties of (k, ℓ, m)-gain-sparse graphs are known. Csaba Kiraly recently pointed out that M 2,3,0 is not a matroid in general. A number of different (or generalized) sparsity conditions for gain graphs are also discussed in [11,9,6,20].
Let (H, ψ) be a Γ-gain graph. The 0-extension adds a new vertexṽ and two new non-loop edgesẽ 1 andẽ 2 to H such that the new edges are incident toṽ and the other end-vertices are two not necessarily distinct vertices of V (H). Ifẽ 1 andẽ 2 are not parallel, then their labels can be arbitrary. Otherwise the labels are assigned such that ψ(ẽ 1 ) ≠ ψ(ẽ 2 ), assuming thatẽ 1 andẽ 2 are directed toṽ (see Fig. 9(a)).
The 1-extension (see Fig. 9(b)) first chooses an edgeẽ and a vertexz, whereẽ may be a loop andz may be an end-vertex ofẽ. It subdividesẽ, with a new vertexṽ and new edgesẽ 1 ,ẽ 2 , such that the tail ofẽ 1 is the tail ofẽ and the tail ofẽ 2 is the head ofẽ. The labels of the new edges are assigned such that ψ(ẽ 1 ) · ψ(ẽ 2 ) −1 = ψ(ẽ). The 1-extension also adds a third edgeẽ 3 oriented fromz toṽ. The label ofẽ 3 is assigned so that it is locally unbalanced, i.e., every two-cycleẽ iẽj , if it exists, is unbalanced.
The loop 1-extension (see Fig. 9(c)) adds a new vertexṽ to H and connects it to a vertexz ∈ V (H) by a new edge with any label. It also adds a new loopl incident toṽ with ψ(l) ≠ id. The theorem is proved for m = 1 in [6, Theorem 4.4], and exactly the same proof can be applied in the case of m = 2. For special cases, Theorem 5.1 was proved by Schulze [16] and Ross [13].
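These three operations can be sketched as simple bookkeeping on a gain graph. The code below (gains written additively in Z/kZ, with an arbitrary choice k = 4; the loop added by a loop 1-extension is assumed to carry a non-identity label, and the local-unbalancedness condition on the third edge of a 1-extension is not checked) verifies the label conditions and the fact that each operation preserves the quantity 2|V| − |E|.

```python
k = 4  # gains taken in Z/kZ, written additively

def zero_extension(nv, edges, u, w, g1, g2):
    """Add a new vertex joined to u and w; parallel new edges need distinct labels."""
    if u == w:
        assert g1 % k != g2 % k
    return nv + 1, edges + [(nv, u, g1 % k), (nv, w, g2 % k)]

def one_extension(nv, edges, i, z, g3, g1):
    """Subdivide edges[i] = (t, h, g) with a new vertex, then join z to it."""
    t, h, g = edges[i]
    g2 = (g1 - g) % k  # enforce psi(e1) - psi(e2) = psi(e) in additive notation
    rest = edges[:i] + edges[i + 1:]
    return nv + 1, rest + [(t, nv, g1 % k), (h, nv, g2), (z, nv, g3 % k)]

def loop_one_extension(nv, edges, z, g, gl):
    """Add a new vertex joined to z plus a loop with a non-identity label."""
    assert gl % k != 0
    return nv + 1, edges + [(nv, z, g % k), (nv, nv, gl % k)]

nv, edges = 1, [(0, 0, 1)]  # a single vertex carrying an unbalanced loop
count = 2 * nv - len(edges)
nv, edges = zero_extension(nv, edges, 0, 0, 0, 1)
assert 2 * nv - len(edges) == count
nv, edges = one_extension(nv, edges, 0, 1, 1, 1)
assert 2 * nv - len(edges) == count
nv, edges = loop_one_extension(nv, edges, 0, 0, 2)
assert 2 * nv - len(edges) == count
```

Each operation adds one vertex and a net of two edges, so the count 2|V| − |E| relevant for (2, 3, m)-gain-sparsity is invariant under all three.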
In the covering graph these operations can be seen as graph operations that preserve the underlying symmetry. Some of them can be recognized as performing standardnon-symmetric -Henneberg operations [23] simultaneously [6].
Subgroups induced by edge sets
We have introduced the balancedness of an edge set in (H, ψ) in order to define gain-sparsity matroids on E(H). However, we sometimes need to extract more information on the underlying group from (H, ψ). Such information is represented as subgroups induced by edge sets, which we are about to introduce. For simplicity, we will assume that Γ is Abelian. (See [6] for the general treatment.) Recall that for a cycle C of the formṽ 1 ,ẽ 1 ,ṽ 2 , . . . ,ẽ k ,ṽ 1 in (H, ψ), the gain ψ(C) of C is the product of the edge gains along C. For F ⊆ E(H), define ⟨F⟩ to be the subgroup of Γ generated by the elements in the set {ψ(C) | C is a cycle in the subgraph induced by F }. Note that F is balanced if and only if ⟨F⟩ is trivial.
A switching at a vertexṽ with γ ∈ Γ is an operation that constructs a new labeling ψ′ : E(H) → Γ from ψ by setting We say that ψ′ is equivalent to ψ if ψ′ can be obtained from ψ by a sequence of switchings.
Then it can easily be checked that for any F ⊆ E(H), ⟨F⟩ is invariant up to equivalence (see, e.g., [6, Proposition 2.2] for the proof).
In the proof of [6, Lemma 5.2], it was shown that the rank of fully-symmetric orbit rigidity matrices (i.e., the case when ρ j is trivial) is invariant up to equivalence. Exactly the same proof can be applied to show the following. The following proposition is very useful for computing ⟨F⟩.
• For any forest T in E(H), there exists a ψ′ equivalent to ψ such that ψ′(ẽ) = id for everyẽ ∈ T .
• For any F ⊆ E(H) and a maximal forest T in F , if ψ(ẽ) = id holds for everyẽ ∈ T , then ⟨F⟩ is the subgroup generated by {ψ(ẽ) |ẽ ∈ F \ T }.
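This suggests a simple procedure for computing the induced subgroup: switch so that a spanning forest carries the identity label, then generate the subgroup from the remaining labels. The sketch below carries this out for Γ = Z/12Z (written additively); the graph, gains, and the gcd-based subgroup computation are illustrative choices, not taken from the paper.

```python
from math import gcd

k = 12  # gains taken in Z/kZ, written additively

# a small gain graph on {0, 1, 2}: directed edges (tail, head, gain)
edges = [(0, 1, 3), (1, 2, 5), (2, 0, 1), (0, 2, 7)]

def switch(edges, v, gamma):
    """Switching at v: add gamma to edges entering v, subtract it from edges leaving v."""
    out = []
    for t, h, g in edges:
        if t == v and h == v:            # loops are unchanged
            out.append((t, h, g))
        elif h == v:
            out.append((t, h, (g + gamma) % k))
        elif t == v:
            out.append((t, h, (g - gamma) % k))
        else:
            out.append((t, h, g))
    return out

def subgroup(gens):
    """The subgroup of Z/kZ generated by gens (the multiples of a gcd)."""
    d = k
    for g in gens:
        d = gcd(d, g % k)
    return set(range(0, k, d))

# make the tree edges (0,1;3) and (1,2;5) carry the identity label:
# switch at each vertex v with gamma = -phi(v), phi a potential along the tree
phi = {0: 0, 1: 3, 2: 8}
switched = edges
for v, val in phi.items():
    switched = switch(switched, v, -val)

assert switched[0][2] == 0 and switched[1][2] == 0  # tree labels are now id
nontree = [g for _, _, g in switched[2:]]
assert subgroup(nontree) == set(range(k))           # here the subgroup is all of Z/12Z
```

The non-tree labels after switching are exactly the fundamental cycle gains, so the subgroup they generate is the one induced by the edge set.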
Combinatorial characterizations for bar-joint frameworks in the plane
Based on the theory of block-diagonalizations of rigidity matrices, in this section we present combinatorial characterizations of infinitesimally rigid frameworks which are generic modulo cyclic symmetry in the plane. By (4) and Proposition 4.2 our task of computing the rank of the rigidity matrix is reduced to computing the rank of each orbit rigidity matrix.
Recall that each orbit rigidity matrix is defined for any Γ-gain graph (H, ψ) withp : V (H) → R d , and its rows define a matroid on the edge set of H. We will show that whenp is τ (Γ)-regular, this orbit-rigidity matroid is isomorphic to the (2, 3, m)-gain-sparsity matroid of (H, ψ) given in Section 5 if the underlying symmetry is C s , C 2 or C 3 .
If the underlying symmetry is C k for k ≥ 4, then it turns out that orbit-rigidity matroids have more complicated combinatorial structures and the problem of characterizing them is still unsolved. However, we will present some non-trivial necessary conditions in the last subsection.
The following lemma implies that the row independence of an orbit rigidity matrix is preserved by the three operations given in Section 5. Proof. The proof is basically the same as the one given in [6, Lemma 6.1] for symmetry-forced rigidity. Due to the definition of genericity, we may assume thatp is Γ-generic. Then it is easy to prove the statement for a 0-extension and a loop-1-extension (see the proof of [6, Lemma 6.1] for a formal proof). We therefore focus on the case where H′ is obtained from H by a 1-extension. This is the only nontrivial case.
Suppose that H′ is obtained from H by a 1-extension which removes an existing edgeẽ and adds a new vertexṽ with three new non-loop edgesẽ 1 ,ẽ 2 ,ẽ 3 incident toṽ. We may assume thatẽ i is outgoing fromṽ. Letũ i be the other end-vertex ofẽ i , and let g i = τ (ψ′(ẽ i )) andp i =p(ũ i ) for i = 1, 2, 3. By the definition of the 1-extension, we have τ (ψ(ẽ)) = g −1 1 g 2 . We also denote ω i = ρ j (ψ′(ẽ i )) for i = 1, 2, 3. Note that the three points g ipi (i = 1, 2, 3) never lie on a line due to the Γ-genericity ofp (see [6, Lemma 6.1] for a formal proof). We takep′ : V (H′) → R 2 such thatp′(w) =p(w) for allw ∈ V (H), andp′(ṽ) is a point on the line through g 1p1 and g 2p2 , but distinct from g 1p1 and g 2p2 . For the simplicity of the description, we assumeũ 1 ≠ũ 2 in the subsequent discussion, but exactly the same proof can also be applied ifũ 1 =ũ 2 . Then O j (H′, ψ′,p′) has the form where the bottom right block O j (H −ẽ, ψ,p) denotes the ρ j -orbit rigidity matrix obtained from O j (H, ψ,p) by removing the row ofẽ. Sincep′(ṽ) lies on the line through g 1p1 and g 2p2 ,p′(ṽ) − g ip′(ũ i ) is a scalar multiple of g 1p1 − g 2p2 for i = 1, 2. Hence, by multiplying the rows ofẽ 1 andẽ 2 by an appropriate scalar, O j (H′, ψ′,p′) becomes Subtracting the row ofẽ 1 from that ofẽ 2 , we get Since τ (ψ(ẽ)) = g −1 1 g 2 , the row ofẽ 2 is equal to the row ofẽ in O j (H, ψ,p). This means that the bottom-right block together with the row ofẽ 2 forms O j (H, ψ,p), which is row independent. Thus, the matrix is row independent if and only if the top-left block is row independent. Since g ipi (i = 1, 2, 3) are not on a line, the line throughp′(ṽ) and g 3p3 is not parallel to the line through g 1p1 and g 2p2 . This implies that the top-left 2 × 2 block is row independent, and consequently O j (H′, ψ′,p′) is row independent.
Characterizations for bar-joint frameworks with reflection symmetry
We now give a combinatorial characterization of infinitesimally rigid bar-joint frameworks with reflection symmetry C s in the plane. The following characterization of rigid frameworks with forced C s symmetry was already established in [10,6].
Theorem 6.2 (Malestein and Theran [10,22], Jordán et al. [6]). Let τ : Z/2Z → C s be a faithful representation, (H, ψ) be a Z/2Z-gain graph, andp : We now show that independence of the other submatrix block is characterized by (2, 3, 2)-gain-sparsity. It is easy to see that the same proof can be applied to show Theorem 6.2 (which is the proof given in [6]). Theorem 6.4. Let τ : Z/2Z → C s be a faithful representation, G be a Z/2Z-symmetric graph with θ : Z/2Z → Aut(G), and (G, p) be a C s -regular framework with respect to θ and τ . Then the rank of R(G, p) is equal to the sum of the rank of M 2,3,1 (H, ψ) and that of M 2,3,2 (H, ψ), where (H, ψ) denotes the quotient gain graph.
Proof. We may assume that p is C s -generic. By (4) and Proposition 4.1, we have rank R(G, p) = rank O 0 (H, ψ,p) + rank O 1 (H, ψ,p) for the quotientp of p. By Theorems 6.2 and 6.3, the rank of O j (H, ψ,p) is equal to the rank of M 2,3,1+j (H, ψ) for j = 0, 1. Theorem 6.4 shows how to compute the first-order degrees of freedom of (G, p). However, if we are only interested in checking infinitesimal rigidity, then we may use the following simpler condition. For example, using Corollary 6.5, it is easy to verify that the framework shown in Figure 5
Characterizations for bar-joint frameworks with rotational symmetry
We now discuss combinatorial characterizations of infinitesimally rigid frameworks with rotational symmetry C k in the plane. A characterization of the row independence of O 0 (H, ψ,p) was already established in [9]. (See also [6] for a simpler proof).
For frameworks with an arbitrary rotational symmetry C k , it is not as easy as for frameworks with reflection symmetry to extend Theorem 6.6 to the other orbit matrices. However, the following result holds for all rotational groups C k . Lemma 6.7. Let k ≥ 3, τ : Z/kZ → C k be a faithful representation, (H, ψ) be a Z/kZ-gain graph, andp : V (H) → R 2 be C k -regular. If O j (H, ψ,p) is row independent, then (H, ψ) is (2, 3, 0)-gain-sparse. Moreover, if j = 1 or j = k − 1, then O j (H, ψ,p) has a kernel of dimension at least 1, and (H, ψ) is (2, 3, 1)-gain-sparse.
Proof. Suppose that O j (H, ψ,p) is row independent. It is easy to see that |F | ≤ 2|V (F )| for any F ⊆ E(H).
Suppose further that j = 1 or j = k − 1. We will show that O j (H, ψ,p) always has a kernel of dimension at least 1. To see this, recall that for any γ ∈ Z/kZ, where τ(γ) is the rotation matrix with rows (cos γθ, − sin γθ) and (sin γθ, cos γθ), and ω = e^{√−1·θ} with θ = 2π/k.
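The eigenvalue claim used here is easy to confirm numerically: the complex translations (1, ±√−1) are eigenvectors of every rotation τ(γ), with eigenvalues ω^{±γ}. In the sketch below, k = 5 is an arbitrary choice, and which of ρ_1 and ρ_{k−1} each translation is attributed to depends on the sign convention chosen for ρ_j.

```python
import numpy as np

k = 5
theta = 2 * np.pi / k
omega = np.exp(1j * theta)

def tau(gamma):
    """Rotation by gamma * theta."""
    a = gamma * theta
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

t_minus = np.array([1.0, -1j])  # eigenvector with eigenvalue omega^gamma
t_plus = np.array([1.0, 1j])    # eigenvector with eigenvalue omega^(-gamma)

for gamma in range(k):
    assert np.allclose(tau(gamma) @ t_minus, omega ** gamma * t_minus)
    assert np.allclose(tau(gamma) @ t_plus, omega ** (-gamma) * t_plus)
```

Since the constant vector fields built from these two eigenvectors transform by ω^{±γ}, the two-dimensional space of infinitesimal translations splits into one ρ_1-symmetric and one ρ_{k−1}-symmetric line, which is the source of the kernel asserted in Lemma 6.7.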
Note that Lemma 6.7 also shows how the space of infinitesimal translations is decomposed. This decomposition can also be read off from the character tables for the groups C k (see [1,3], for example).
Case of C 2
Combining Theorem 5.1, Lemma 6.1, Theorem 6.6, and Lemma 6.7, we obtain the following characterization of infinitesimally rigid frameworks with C 2 symmetry. The proof is identical to that for C s and hence is omitted.
Case of
The following lemma gives a necessary condition for the row independence of O j (H, ψ,p) for even k, which is stronger than the one given in Lemma 6.7. Lemma 6.13. Let k ≥ 4, τ : Z/kZ → C k be a faithful representation, (H, ψ) be a Z/kZ-gain graph,p : V (H) → R 2 be C k -regular, and j be an odd integer with 1 ≤ j < k. If O j (H, ψ,p) is row independent, then F is (2, 3, 2)-gain-sparse for any F ⊆ E(H) such that ⟨F⟩ is isomorphic to Z/2Z.
Figure 10: A balanced circuit (b) and its corresponding covering graph (a). Note that we may assume that the label of each edge is the identity, by Proposition 5.3. An unbalanced circuit (d) and its corresponding covering graph (c).
There is another obstacle. Suppose that there is an edge subset F such that (i′) F is unbalanced, (ii′) |F | > 2|V (F )| − 1, and (iii′) there are a vertexṽ ∈ V (F ), an element γ ∈ Z/kZ, and a labeling function ψ′ : E(H) → Z/kZ equivalent to ψ such that ψ′(ẽ) = id for everyẽ ∈ F not incident toṽ, and ψ′(ẽ) ∈ {id, γ} for everyẽ ∈ F directed toṽ (assuming that every edge incident toṽ is directed toṽ). See also Figure 10(c),(d). Then it can easily be checked that the covering graph c −1 (F ) is the union of k edge-disjoint 2-dependent sets. A minimal edge set F satisfying (i′), (ii′), and (iii′) is called an unbalanced circuit.
Consequently, if H = H +ẽ contains an unbalanced circuit or a balanced circuit, the covering graph G contains k edge-disjoint 2-dependent sets, which means that no edge of c −1 (ẽ) increases the rank of the 2-rigidity matroid of the covering graph.
Extensions
We finish by making some further comments about 'anti-symmetric' orbit rigidity matrices and their applications and by outlining some directions for future developments.
Bar-joint frameworks in higher dimensions
As we mentioned in the introduction, it is a key open problem in rigidity theory to find a combinatorial characterization of infinitesimally rigid generic bar-joint frameworks (without symmetry) in dimensions 3 and higher. Therefore, we restricted attention to twodimensional symmetric frameworks in Sections 5 and 6. However, note that we showed in Section 4 how to construct anti-symmetric orbit rigidity matrices for a symmetric framework in an arbitrary dimension d.
Each of these anti-symmetric orbit rigidity matrices gives rise to an independent set of necessary conditions for the framework to be infinitesimally rigid in R d . Analogously to the conditions derived for various symmetric two-dimensional frameworks in Section 6, these conditions can of course be expressed as gain-sparsity conditions for the corresponding quotient gain-graph. However, to state these conditions, we need to compute the dimension of the space of trivial infinitesimal motions which are symmetric with respect to the given irreducible representation. In dimension 3, the dimensions of these spaces can be read off directly from the character tables of the group (see [1,3], for example); for dimensions 4 and higher, one needs to compute these dimensions for each individual group. This can be done in a similar way as in the proof of Lemma 6.7, for example (see also [14]).
Finally, note that due to the simplicity of its entries and its straightforward construction, each of the orbit rigidity matrices of a given d-dimensional framework allows a quick analysis of its row or column dependencies, and hence provides a powerful tool for the detection of infinitesimal motions and self-stresses which exhibit the symmetries of the corresponding irreducible representation and which cannot be found by checking the corresponding gain-sparsity counts.
Non-Abelian groups
In Section 4 we showed how to construct anti-symmetric orbit rigidity matrices for frameworks with any Abelian point group symmetry in an arbitrary dimension. The key problem to extend these constructions to frameworks with non-Abelian point group symmetries is that each non-Abelian point group has an irreducible representation which is of dimension at least 2, and an infinitesimal motion which is symmetric with respect to such a higher-dimensional representation is not uniquely determined by the velocity vectors assigned to the vertices in the quotient gain-graph. Therefore, the entries of an orbit rigidity matrix corresponding to such a representation (as well as the underlying combinatorial structure for such an orbit matrix) are more complicated. It remains open how to extend our methods and results to frameworks with non-Abelian point group symmetries.
Group actions which are not free on the vertex set
Throughout this paper, we assumed that the group Γ acts freely on the vertex set of the graph G. While in principle we do not expect any major new complications to arise if we allowed Γ to act non-freely on the vertices of G, the structures of the orbit rigidity matrices and the corresponding gain-sparsity counts would need to be adjusted accordingly and would become significantly less clear and transparent (see also [19]).
For example, suppose a joint p i of a two-dimensional C s -symmetric framework (G, p) is 'fixed' by the reflection s in C s , i.e., we have τ (s)(p i ) = p i . Then p i contributes only one column to the fully symmetric orbit rigidity matrix of (G, p) (as p i has only a onedimensional space of fully symmetric displacement vectors: the space of all vectors which lie along the mirror line of s) and only one column to the anti-symmetric orbit rigidity matrix of (G, p) (as p i has also only a one-dimensional space of anti-symmetric displacement vectors: the space of all vectors which lie perpendicular to the mirror line of s). Similarly, if p i is a joint of a two-dimensional C 2 -symmetric framework (G, p) which is 'fixed' by the half-turn C 2 , then p i would contribute no column to the fully symmetric orbit rigidity matrix of (G, p) (as p i has no fully symmetric displacement vectors) and two columns to the anti-symmetric orbit rigidity matrix of (G, p) (as p i has a two-dimensional space of anti-symmetric displacement vectors).
Due to these modifications to the structures and entries of the orbit rigidity matrices, the constructions of these matrices and the proofs for the combinatorial characterizations of Γ-generic infinitesimally rigid frameworks in the plane will become significantly more messy.
Extensions to body-bar and body-hinge frameworks
A now-standard extension of bar-joint frameworks is the class of body-bar frameworks [23,21]. These form a special class of bar-joint frameworks, which have many important practical applications in fields such as engineering, robotics or biochemistry. Note that while a combinatorial characterization of 3- or higher-dimensional bar-joint frameworks has not yet been found, rigid generic body-bar frameworks (without symmetry) were characterized in all dimensions by Tay [21].
In [18], we extend our tools and methods to d-dimensional body-bar frameworks with Abelian point group symmetries by giving a description of symmetric body-bar frameworks in terms of the Grassmann-Cayley algebra. Moreover, we establish combinatorial characterizations of body-bar frameworks which are generic with respect to a point group of the form Z/2Z × · · · × Z/2Z using Dowling matroids.
Finally, in [18] we also extend our methods and results to body-hinge frameworks, i.e., to structures which consist of rigid bodies that are connected, in pairs, by revolute hinges along assigned lines. This is an important step towards applying our results to the rigidity and flexibility analysis of certain physical structures like robotic linkages or biomolecules.
Computational NMR Spectra of o-Benzyne and Stable Guests and their Hemicarceplexes
The incarceration of o-benzyne and 27 other guest molecules within hemicarcerand 1, as reported experimentally by Warmuth, and Cram and co-workers, respectively, has been studied by density functional theory (DFT). ¹H NMR chemical shifts, rotational mobility and conformational preference of the guests within the supramolecular cage were determined, which showed intriguing correlations of the chemical shifts with structural parameters of the host-guest system. Furthermore, based on the computed chemical shifts, reassignments of some NMR signals are proposed. This affects in particular the putative characterization of the volatile benzyne molecule inside a hemicarcerand, for which our CCSD(T) and KT2 results indicate that the experimentally observed signals are most likely not resulting from an isolated o-benzyne within the supramolecular host. Instead, we show that the guest reacted with an aromatic ring of the host, and this adduct is responsible for the experimentally observed signals.
Introduction
One of the most exciting and challenging research fields in chemistry emerged in 1985 with Cram's synthesis of a molecule capable of trapping other molecules in its interior. [1] Since then, the chemistry of molecular container compounds has become a challenging and rewarding field of organic chemistry. [2][3][4][5][6] Early container molecules were shown to be able to encapsulate in their cavity almost any component present in the reaction mixture [1] and were therefore called carcerands (from the Latin word carcer, i.e. prison). In these supramolecular hosts the encapsulated guest cannot leave the molecular prison, not even at high temperatures. In contrast, so-called hemicarcerands trap guests that can be liberated at elevated temperatures, with the combination of the host and guest called a hemicarceplex. The process of switching from encapsulation to liberation of the guest in these hemicarcerands was defined by Houk and co-workers as gating, [5,[7][8][9] which involves a change in conformation of the supramolecular container molecule. Moreover, the (de)complexation processes are controlled by a process known as constrictive binding, which is related to the activation barrier required to trap the guest molecule inside the host cavity through a size-restricting portal. [10] Several hemicarcerands have been synthesized by joining two cavitands with several linkers [3,6] and a large variety of compounds have been incarcerated inside hemicarcerand cavities, ranging from Xe [10] to large molecules such as ferrocene, adamantane, camphor [11] or C60. [12] Nowadays, hemicarcerands and other classes of molecular containers can be used for many different applications in molecular recognition, catalysis, drug delivery, and storage.
[13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29] For instance, Cram and co-workers synthesized the highly reactive cyclobutadiene inside a hemicarcerand, [30] and investigated the binding properties of hemicarcerands that can undergo chemical reactions without guest release. [31] In the literature, other examples have been given of unstable compounds that exhibit high stability when encapsulated inside host molecules. [3,[32][33] The finding that two benzene molecules fit perfectly into the cavity of some hemicarcerands raised the idea of using the latter host molecules to perform bimolecular reactions. [34] Kang and Rebek successfully performed a Diels-Alder reaction between p-benzoquinone and cyclohexadiene in a self-assembling molecular capsule, [4] demonstrating that significant accelerations in the rates of chemical reactions may occur inside container molecules, just as was shown before inside cyclodextrins. [35][36] In addition, a study by Piatnitski and Deshayes demonstrated that photochemical radiation is not only able to initiate reactions inside a hemicarceplex but can also release guests from the host in a controlled manner when the host is designed to be susceptible to photolysis. [37] The interior of a hemicarcerand has therefore been shown to be a suitable environment in which to synthesize and stabilize highly reactive compounds from thermal and photochemical reactions. The o-benzyne molecule (see Figure 1), another vulnerable species that does not survive in solution or in the gas phase, poses one of the most intriguing cases. Its existence inside a molecular container was shown [38] by NMR experiments on photochemically generated o-benzyne. Soon after, the existence of o-benzyne was substantiated by low-level quantum chemistry calculations of the NMR spectra. [39] More accurate calculations at the coupled-cluster level [40] and the DFT level [41] however showed significant deviations (ca.
1.0-1.5 ppm) from the experimental data, even though these methods usually give much smaller deviations (ca. 0.3-0.5 ppm). The most likely origin of this difference is that these calculations were done on the isolated molecule, and not on the guest inside the molecular container. However, given the high sensitivity of NMR, it cannot be completely ruled out that the experimentally observed spectra do not belong to the o-benzyne molecule. Furthermore, benzyne was found to react with the hemicarcerand, so that NMR spectra attributed to benzyne are suspect. [43] It was also shown that the assignment of a molecular structure by experimental data alone can be tricky and may lead to wrong assumptions. [42] So far, the theoretical investigations of the structures and dynamics of hemicarcerands have mainly involved force-field (MM2, MM3, AMBER) and semiempirical calculations. [9,12,[43][44][45][46][47][48] While these methods calculate geometries and energies with sufficient accuracy to predict complexation behavior, they have been shown to have problems describing the unusual environment inside a hemicarcerand or hemicarceplex cavity accurately. This immediately suggests that more sophisticated methods applied to the full host-guest system are required to accurately analyze its electronic structure. [49] Moreover, reports on the calculation of NMR parameters for hemicarceplexes are sparse, even though 1 H-NMR spectroscopy has proven to be a very valuable tool for determining the presence of a guest inside a host and the stoichiometry of complexation. With this in mind, the overall aim of this work is to study computationally the structure and 1 H-NMR chemical shifts of hemicarcerand 1 (see Figure 2) and its corresponding hemicarceplexes with o-benzyne and a variety of 27 acyclic, cyclic or aromatic guests for which experimental data are available for comparison.
[38,50] In particular, we have predicted the 1 H and 13 C-NMR chemical shift constants of the isolated and incarcerated guests. Additionally, the rotational mobility and the conformational preference are described for the guest molecules.
Results and Discussion
The well-known hemicarcerand 1 is globular-shaped and is built by attaching two tetraaryl bowls to one another at their rims through four -O(CH2)4O- hemispheric bridges (see Figure 2). Likewise, four R groups (R = C6H5CH2CH2, CH3(CH2)4) are attached to each bowl at its base in 1. [50] An X-ray structure has not been reported for empty 1, and therefore our initial model system for the host is based on the key features of the reported X-ray structures for the host-guest systems 1@G (where G is any of 6H2O, 1,4-I2C6H4, p-xylene, C6H5NO2, (CH3)2NCOCH3 or 2-BrC6H4OH). [50] To reduce the computational effort, the R groups on the outside of 1 were replaced with methyl groups, since they are expected to have a minor influence on the binding properties once the guest is inside. Indeed, the structures of hemicarcerand 1 (R = C6H5CH2CH2, CH3) optimized at the PBE-D/TZ2P level showed that the effect of these groups on the core structure of the molecular container is negligible. Likewise, our model structure of host 1 (R = CH3) adopts a conformation in which the methylene bridges -(CH2)4- are all distributed on the outside of the container, and the upper hemisphere is twisted by 17º with respect to the lower hemisphere, which agrees well with the twist angle of 15º reported for the X-ray structure of 1@6H2O. [50] In contrast, a difference of 6º was found with respect to the optimized structure reported by Liddell and co-workers (twist of 23º, semiempirical AM1 method). [49] The distance between the two oxygen atoms of each -O(CH2)4O- bridge varies between 2.80 and 2.82 Å, and the separation between the two parallel square planes defined by the aryl carbons (bonded to H), used to define the length of the polar axis, is around 9.85 Å.
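A twist angle like the 17º quoted above can be extracted from atomic coordinates by projecting one reference vector per hemisphere (e.g. a center-to-rim-atom vector) onto the plane perpendicular to the polar axis and measuring the angle between the projections. The sketch below is our own illustration of that geometric step; the function and vector names are not from any referenced software.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    n = math.sqrt(_dot(v, v))
    return [x / n for x in v]

def _reject(v, unit_axis):
    # Component of v perpendicular to the (unit) polar axis.
    d = _dot(v, unit_axis)
    return [x - d * u for x, u in zip(v, unit_axis)]

def twist_angle(polar_axis, v_upper, v_lower):
    """Angle (degrees) between two hemisphere reference vectors,
    measured in the plane perpendicular to the polar axis."""
    ax = _normalize(polar_axis)
    pu = _normalize(_reject(v_upper, ax))
    pl = _normalize(_reject(v_lower, ax))
    c = max(-1.0, min(1.0, _dot(pu, pl)))  # guard acos domain
    return math.degrees(math.acos(c))
```

With the polar axis along z, two rim vectors whose in-plane components differ by a rotation of 17º give back a twist of 17º regardless of their out-of-plane components.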
For the geometry optimizations of the hemicarceplexes 1@G, the guest molecules were introduced into the host starting from several initial host-guest geometrical configurations, placing the guests along the long polar and the shorter equatorial axes of the host.
Guest molecules inclusion within host
First, we focus on the 28 different guest molecules that were incarcerated in host 1. Besides o-benzyne, the guest molecules can be divided into classes A-F according to their shapes. Class A contains acyclic aliphatic compounds of three to six non-hydrogen atoms with zero to two branches, class B includes five-membered ring compounds, and classes C-F contain aromatic guests in which all the structures tend to be planar except for the methyl fragments of 1,4-(CH3O)2C6H4 (16d) (see Figure S1). The particular case of o-benzyne will be discussed separately. Hence, in the following we present the results for the other 27 guest molecules. Table 1 shows the calculated 1 H-NMR chemical shift values (δ) of the class A-F guests, both on their own and when encapsulated inside host 1; the changes (Δδ) that occur upon incarceration of the guests are also indicated, and the experimental values are included for comparison. [50] In general, we found excellent agreement between the calculated and experimental shift values, especially for the isolated molecules, where 21 of the 27 guests show differences in chemical shifts of ±0.3 ppm or less in all their shifts (see Δδ of G, Table 1). On the other hand, only 7 of the 27 incarcerated guests (1@G) show similarly low Δδ values (see Δδ of 1@G, Table 1). Accordingly, it appears that the host-guest interactions can alter the 1 H-NMR shift values dramatically, and the orientation of the guest within host 1 might play a crucial role in some cases. Here, we report the results for the most favorable orientation of the guests inside the molecular container among the different possible orientations that we explored (see Supporting Information). Typically, the chemical shifts of the encapsulated guests are shifted up-field by 1-4 ppm, which likely depends on guest orientation and perhaps dynamics. [51] The origin of this up-field shift is the magnetic shielding resulting from the benzene rings in the polar caps, and the -OCH2O- and -OCH2Ar- groups, that form the framework of the host. As a result, the 1 H-NMR shift values of class A (acyclic) guests decrease upon complexation, ranging from Δδ = 1.35 (for the Hb proton of (CH3)2NCOCH3) to Δδ = 4.18 ppm (for the Ha proton of CH3COCH2CH3) (see Figure S1 for atom labeling and Table 1).

Chemistry - A European Journal, 10.1002/chem.201904756

Table 1. Calculated and experimental 1 H-NMR chemical shifts (δ) of free and incarcerated guests and the changes in the chemical shifts of guest protons caused by incarceration (Δδ).

Analysis of the 1 H-NMR signals of class B guests (five-membered ring structures) showed a different behavior. First, the large differences between experiment and theory for the isolated 4-butyrolactone (6b) strongly suggest that in the experiment the isolated compound has either another conformation or another electronic structure. We ruled out an orientational effect and therefore varied the model geometry to another realistic structure, its protonated form (protonation of the carbonyl oxygen). Our results for the protonated guest again show the anticipated small differences between theory and experiment (Δδ values of ±0.19 ppm or less; see Table S1, SI), which suggests that in this case the protonated form is the one that was detected in the experiment. In the case of cyclopentanone (7b), our calculations clearly indicate an inversion of the assignment of the Ha and Hb spectral shifts. Thus, the experimentally observed peaks correspond to Ha = 2.15 and Hb = 1.94 ppm (isolated guest). A similar case was found for 2-cyclopenten-1-one (9b), where the Hc and Hd signals were also assigned inversely in the experimental study. Applying these changes to the experimental results, we again obtained small differences in the chemical shifts, of ±0.42 ppm or less in all shifts (see Δδ of G, Table 1). The signal assignments of the class B incarcerated guests (1@G) turned out to be more complicated. The large Δδ values may indicate that in the experiment the guests have a different conformation or electronic structure when encapsulated (see Δδ of 1@G, Table 1). Similar to empty host 1, these hemicarceplexes possess twisted conformations, with angles that vary from 9º (8b and 10b guests) to 16º (9b guest).
Moreover, our molecular models, in conjunction with the large decrease of the chemical shifts caused by incarceration (see Δδ, Table 1), suggest that these small guests may interact strongly with the -O(CH2)4O- bridges of the host. Furthermore, there is the possibility that larger compounds, including disubstituted (classes C, D and E) and trisubstituted phenyl derivatives (class F), would remain in an extended form along the long polar axis of the host. In this study, the aromatic guests are aligned inside the host as reported in Reference 50 (see Figure S2, SI). For example, good agreement between the calculated and experimental twist angles was obtained for the C6H5NO2 (12c) guest (0.8º difference between the optimized PBE-D/TZ2P structure and the reported X-ray structure). The class D compounds are the longest and most tightly held rigid guests. In particular, the cavity of host 1 is spacious enough and complementary for the inclusion of 1,4-(CH3O)2C6H4 (15d). The two -CH3O- groups of the guest fit nicely into the two hemispheres of the host, achieving stabilizing van der Waals interactions with the aromatic rings of the host. This is also consistent with the 1 H-NMR observations that the methyl protons occupy the polar caps (Δδ = 3.80) and the aryl H atoms the equatorial zone (Δδ = 1.30) of the host. It is well known that phenol derivatives exhibit rotational isomerism. [52] Hence, particular attention has been paid to the aromatic compounds with hydroxyl substituents: guests 17d, 21e, 23e, 25f and 27f. No deviations from experiment were observed for the chemical shifts of the isolated 21e, 25f, and 27f guests, suggesting a strong preference for a single isomer. However, the 17d and 23e guests showed large discrepancies with respect to the experimental values (see Δδ of G, Table 1).
In 2-bromophenol (23e) there exist two isomers (cis and trans), originating from the orientation of the OH group with respect to the Br substituent (see Figure 3). Our results at the PBE-D/TZ2P level corroborate the greater stability of the cis over the trans form, in agreement with the results reported in the literature. [53] According to the calculations, these two forms are close in energy in the gas phase (ΔE = 3.7 kcal·mol-1), and solvent (chloroform, modeled with COSMO) has only a small effect on the energy gap (ΔE = 2.0 kcal·mol-1). Therefore, the populations of these two forms should be comparable, and considering only one form to calculate the chemical shifts may not be enough. The analysis of the 1 H-NMR shifts for the cis isomer of 2-bromophenol (23e) shows that the calculated Ha proton value overestimates the experimental signal by 0.7 ppm. Furthermore, the calculation for the trans isomer predicts the Ha proton shift at higher field (underestimating the signal by 0.4 ppm), so the experimental value lies somewhere in between these two isomers, which might be in fast exchange on the NMR time scale (see Table S2, SI). For the remaining proton signals of the aromatic ring, the calculations indicate that the two pairs of signals (Hb and He) and (Hc and Hd) were assigned inversely in the experimental study. Applying these changes to the experimental results, we again obtained small differences, of ±0.18 ppm or less, in all the chemical shifts (see Δδ of G, Table 1). Similar considerations apply to the incarcerated 2-bromophenol (1@23e); in this case, however, the assignment of the signals becomes more difficult because the Ha peak is hidden in the experimental study. For benzene-1,4-diol (17d), the consideration of both cis and trans isomers does not help to reproduce the observed Ha shifts (see Table S2, SI).
Therefore, we also explored solvent effects by adding two explicit water molecules that interact with the hydroxyl groups (see Table S1, SI). Interestingly, inclusion of the solvent molecules leads to a large deshielding of the Ha proton NMR signal; the results are now in excellent agreement with experiment, with differences of only 0.1 (Ha) and 0.2 ppm (Hb). This finding indicates that for 17d solvent effects play a larger role than rotational isomerism.
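The fast-exchange argument invoked above for the cis/trans isomers can be made quantitative: when interconversion is fast on the NMR time scale, the observed shift is the Boltzmann-population-weighted average of the isomer shifts, with the populations fixed by the energy gap. The sketch below illustrates this textbook relation; the shift values used in the comments are invented for illustration, not the computed ones.

```python
import math

R_KCAL = 1.987204e-3  # gas constant in kcal mol^-1 K^-1

def boltzmann_populations(dE_kcal, T=298.15):
    """Populations (p_low, p_high) of a two-state system whose higher-lying
    state sits dE_kcal above the lower one."""
    w = math.exp(-dE_kcal / (R_KCAL * T))
    return 1.0 / (1.0 + w), w / (1.0 + w)

def fast_exchange_shift(delta_low, delta_high, dE_kcal, T=298.15):
    """Observed shift (ppm) for two conformers in fast exchange on the
    NMR time scale: the population-weighted average of the two shifts."""
    p_low, p_high = boltzmann_populations(dE_kcal, T)
    return p_low * delta_low + p_high * delta_high
```

For a vanishing gap the two forms contribute equally and the observed signal sits exactly midway between the isomer shifts; as the gap grows, the average moves toward the shift of the more stable form.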
Chemical shifts of incarcerated hosts 1@G
Even though the guest signals are much more sensitive than the host signals to incarceration, the 1 H-NMR spectra of the hemicarcerands themselves also might provide conclusive evidence for the incarceration of a guest within the container. The
host signals show sets of eight chemically different protons for the inward- (Hi) and outward-pointing acetal protons (Ho), the aryl protons (Ha), and the methine protons (Hm); of these, we analyzed the Hi and Ho acetal protons, which are the most sensitive to the presence of the guest (see Figure 2 for proton labels). A comparison of the calculated and experimental Hi and Ho chemical shifts of the free and incarcerated hosts 1@G (δ), their chemical shift changes caused by incarceration (Δδ), and the corresponding differences between these values is summarized in Table S3 of the Supporting Information. According to the experimental spectra, [50] the Hi and Ho signals are equivalent when a symmetrical guest is encapsulated, or when an unsymmetrical guest is able to tumble freely inside the host on the 1 H-NMR time scale and thus renders the two halves of the host equivalent. The important point to note is that for C6H5I (13c), 4-CH3C6H4OCH3 (18d), 2,4-Cl2C6H3CH3 (24f), and 3,4-Cl2C6H3CH3 (26f), the host signals (Hi and/or Ho) are split into two. This suggests that in the presence of these guests the two halves of the host are not identical on the 1 H-NMR time scale. Thus, in these cases the inability of the guests to rotate around the short axis of the host causes the northern and southern hemispheres to be slightly different. This behavior was also identified in our calculations and, similar to the experimental signals, the deviations between the upper and lower hemispheres are 0.3-0.5 ppm for the Hi shifts and 0.1-0.2 ppm for the Ho shifts (see δ, Table S3, SI). Comparing our results with those reported by Cram and co-workers, [50] we note a remarkable agreement between the calculated and experimental Hi and Ho shifts for the isolated host 1 and the incarcerated hosts 1@G. The maximum deviations from the experimental values are ±0.6 and ±0.5 ppm for the Hi and Ho shifts, respectively (see Δδ, Table S3, SI).
This reinforces the excellent agreement between experimental and theoretical determination of the 1 H-NMR chemical shifts of these hemicarcerands.
Incarceration of o-benzyne
In 1997, Warmuth made use of guest incarceration inside hemicarcerand 1 to stabilize o-benzyne in solution. [38] The 1 H-NMR signals of incarcerated o-benzyne appear at 4.99 and 4.31 ppm; the first signal was assigned to Ha and the latter to Hb. Stronger evidence for incarcerated o-benzyne was obtained from its 13 C-NMR spectrum, in which three carbon signals for o-benzyne were found, as shown in Table 2. Additionally, an estimate of the chemical shifts of isolated o-benzyne in solution was obtained by taking the chemical shift differences Δδ(H) and Δδ(C) of incarcerated benzene as a measure of the host shielding and adding them to the observed chemical shifts of incarcerated o-benzyne (see experimental δ of G, Table 2).
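The estimation procedure just described is simple arithmetic: the incarceration shift measured for benzene (2.7 ppm for 1H, as stated later in the text) is added back to each observed signal of the incarcerated guest. A minimal sketch of that step, with the observed 1@o-benzyne values from the text:

```python
BENZENE_INCARCERATION_SHIFT = 2.7  # ppm, Delta-delta(H) measured for 1@benzene

def estimate_free_shift(delta_incarcerated, host_shielding=BENZENE_INCARCERATION_SHIFT):
    """Estimate the solution shift (ppm) of a free guest from its incarcerated
    signal, assuming the host shields it exactly as much as it shields benzene."""
    return delta_incarcerated + host_shielding

# Observed 1@o-benzyne proton signals (ppm) -> estimated free-guest shifts
estimated = [estimate_free_shift(d) for d in (4.99, 4.31)]
```

Note that the whole estimate stands or falls with the assumption that o-benzyne experiences the same host shielding as benzene, which is exactly the point questioned later in this section.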
In the case of isolated o-benzyne, the calculated 1 H and 13 C-NMR shifts reported by Jiao and co-workers [39] at the SOS-DFPT-PW91/III level are in reasonable agreement with the data reported by Warmuth. However, discrepancies were found with the NMR calculations of Helgaker and co-workers, where the 1 H chemical shifts differ from the experimental values by more than is usual for such quantities, and the ordering of the two proton signals is inverted between theory (Ha<Hb) and experiment (Ha>Hb). [54] It is important to note that all calculations reported until now were done on the isolated molecule; computational studies of o-benzyne as a guest inside a molecular container have not been reported yet. With this in mind, we explored the incarceration of o-benzyne within hemicarcerand 1 (see Figures 1-2), starting from several initial host-guest geometrical configurations and placing the molecule along the long polar and shorter equatorial axes of the host. A stable conformation was achieved with the guest aligned along the long polar axis of the host (see Figure S3), in agreement with the minimum-energy conformer reported by Cram and co-workers using molecular dynamics. [45] At the KT2/ET-pVQZ level, the 1 H-NMR shifts obtained for the incarcerated o-benzyne are 5.06 (Ha) and 5.33 ppm (Hb). Compared with the signals reported by Warmuth, [38] the calculated 1 H-NMR shift values are 0.07 (Ha) and 1.02 ppm (Hb) larger than the experimental values (see Δδ of 1@G, Table 2). Unlike the experimental data, the host-guest (1@G) 1 H-NMR shifts indicate that Ha is more shielded than Hb. The isolated o-benzyne has a "triple" bond length of 1.25 (at PBE-D/TZ2P) and 1.26 Å (at CCSD(T)/aug-cc-pVTZ), in good agreement with the bond length of 1.24 ± 0.02 Å obtained by Orendt and co-workers from a simulation of the 13 C dipolar NMR spectrum, [55] and the bond length of 1.27 Å reported at the CCSD(T)/6-31G** level.
[56] A comparison of the NMR chemical shift results for the isolated o-benzyne molecule is shown in Table 2. Our 1 H-NMR chemical shifts obtained with both the KT2/ET-pVQZ and CCSD(T)/pcS-3 methods, like those of Helgaker and co-workers, [54] differ from the experimental values by more than is usual. At the KT2/ET-pVQZ level, the 1 H-NMR shifts are 6.77 (Ha) and 7.60 ppm (Hb). Compared with the experimental data reported by Warmuth, [38] we found discrepancies of -0.92 and 0.59 ppm for the Ha and Hb protons, respectively (see Δδ of G, Table 2). Interestingly, it is not only for the encapsulated o-benzyne that we find discrepancies; for the isolated form we also observe substantial differences. One should bear in mind that the experimental values of isolated o-benzyne were obtained by assuming the same large incarceration shift of 2.7 ppm for the two protons of o-benzyne, equal to the incarceration shift in benzene. [38] Moreover, conformational and dynamic effects may play an important role in this case. According to the dimensions of the guest and the cavity of the host, o-benzyne has space to move inside the host molecule, and a static optimized geometry may not be enough to obtain accurate chemical shift values. Note also that the results can be complicated by the fact that o-benzyne can react with an aromatic ring of its hemicarcerand host. [45,[57][58][59][60][61] For this reason, we examined the addition product of a Diels-Alder (DA) reaction between o-benzyne and hemicarcerand 1. o-Benzyne adds to one of the aryl ether units of 1 (the diene component) to give the germinal para adduct (see Figure S4, SI). In the 1 H-NMR spectrum of the 1@o-benzyne DA adduct, the proton signals originating from o-benzyne were identified at 6.14 (Ha), 5.21 (Hb), 4.58 (Hc), and 3.12 ppm (Hd). [61] We optimized the 1@o-benzyne DA adduct and calculated its 1 H-NMR chemical shifts.
Interestingly, our results indicate an inversion of the assignment of the Hb and Hc spectral shifts. Thus, the experimentally observed peaks correspond to Hb = 4.58 and Hc = 5.21 ppm. Applying these changes, we found excellent agreement between the calculated and experimental shift values, with differences of ±0.53 ppm or less in all shifts (see Δδ of G, Table S4, SI).
Conclusion
In the present investigation, we have analyzed in detail the 1 H-NMR chemical shifts of the hemicarceplexes formed by o-benzyne and a variety of 27 guests within hemicarcerand 1 (1@G). Our study, via density functional theory (DFT) at the PBE-D/TZ2P level for geometries and the KT2/ET-pVQZ level for the NMR shifts, provides a new strategy to characterize these challenging host-guest complexes. In the particular case of the o-benzyne guest, we obtained in both the isolated and host-guest (1@o-benzyne) cases significant deviations from the 1 H-NMR experimental data, and we cannot draw any definite conclusions regarding the assignment of the NMR chemical shifts. Surprisingly, we have shown that the discrepancies between theory and experiment are not due to the incarceration, as asserted in earlier studies. The results shown here indicate that it cannot be conclusively determined whether the experimentally observed spectra belong to the o-benzyne molecule or to an adduct with an aromatic ring of the hemicarcerand. An investigation of the conformational and solvent effects, combining molecular dynamics simulations and averaged NMR chemical shift calculations (similar to a recent study on a platinum complex [62]), is currently underway for the incarcerated o-benzyne.
Computational Methods
All electronic-structure calculations of the isolated guests and the host-guest hemicarceplexes were performed using DFT. Equilibrium geometries were computed in the gas phase with the Amsterdam Density Functional (ADF) program, [63][64] using the QUILD program [65] with the dispersion-corrected PBE-D [66][67] functional in conjunction with uncontracted Slater-type orbitals (STOs) of triple-ζ quality plus one set of polarization functions (TZ2P). [68] In particular, for the guest structures that contain iodine atoms, scalar (SR) and spin-orbit (SO) relativistic effects were included using the zeroth-order regular approximation (ZORA). [69][70][71][72][73] Moreover, an additional optimization of the o-benzyne guest was performed with the CCSD(T)/aug-cc-pVTZ [74][75] method using the CFOUR program. [76][77] All the 1 H and 13 C-NMR chemical shifts were calculated using the KT2 functional and the conductor-like screening model (COSMO) [78][79][80] for simulating bulk solvation in chloroform. The ET-pVQZ all-electron basis set was used for all atoms except iodine, for which we used the TZ2P basis set, and the SR and SO relativistic effects were included using ZORA. [69][70][71][72][73] All chemical shift values are reported with respect to tetramethylsilane (TMS) and were obtained with the GIAO method. [81]
"year": 2019,
"sha1": "0ab6d8696a33dec5d2e1f457d3199ddf5229831e",
"oa_license": "CC0",
"oa_url": "https://dugi-doc.udg.edu/bitstream/10256/17347/1/AM-ComputationalNMR.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "b8daf32e6c48c1b3fcc989338130de70055804f1",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Untargeted Metabolomic Analysis of Amniotic Fluid in the Prediction of Preterm Delivery and Bronchopulmonary Dysplasia
Objective Bronchopulmonary dysplasia (BPD) is a serious complication associated with preterm birth. A growing body of evidence suggests a role for prenatal factors in its pathogenesis. Metabolomics allows simultaneous characterization of low molecular weight compounds and may provide a picture of such a complex condition. The aim of this study was to evaluate whether an unbiased metabolomic analysis of amniotic fluid (AF) can be used to investigate the risk of spontaneous preterm delivery (PTD) and BPD development in the offspring. Study design We conducted an exploratory study on 32 infants born from mothers who had undergone an amniocentesis between 21 and 28 gestational weeks because of spontaneous preterm labor with intact membranes. The AF samples underwent untargeted metabolomic analysis using mass spectrometry combined with ultra-performance liquid chromatography. The data obtained were analyzed using multivariate and univariate statistical data analysis tools. Results Orthogonally Constrained Projection to Latent Structures-Discriminant Analysis (oCPLS2-DA) excluded effects on data modelling of crucial clinical variables. oCPLS2-DA was able to find unique differences in select metabolites between term (n = 11) and preterm (n = 13) deliveries (negative ionization data set: R2 = 0.47, mean AUC ROC in prediction = 0.65; positive ionization data set: R2 = 0.47, mean AUC ROC in prediction = 0.70), and between PTD followed by the development of BPD (n = 10), and PTD without BPD (n = 11) (negative data set: R2 = 0.48, mean AUC ROC in prediction = 0.73; positive data set: R2 = 0.55, mean AUC ROC in prediction = 0.71). Conclusions This study suggests that amniotic fluid metabolic profiling may be promising for identifying spontaneous preterm birth and fetuses at risk for developing BPD. These findings support the hypothesis that some prenatal metabolic dysregulations may play a key role in the pathogenesis of PTD and the development of BPD.
Introduction
Preterm delivery (PTD) is a major challenge in the field of obstetrics and neonatology. Since 2006, preterm birth rates have been declining both in the United States and in European countries. Nevertheless, prematurity remains a major cause of morbidity and mortality worldwide, at rates that far exceed those of infants born full-term [1,2]. Preterm neonates are at increased risk of both short- and long-term pathological outcomes [3][4][5][6] and, among these, bronchopulmonary dysplasia (BPD) accounts for the vast majority of cases of chronic lung disease after premature birth [7]. In a recent workshop sponsored by the National Heart, Lung, and Blood Institute (NHLBI) on the primary prevention of chronic lung diseases, participants agreed that the insults leading to BPD may begin in utero and operate through gene-environment interactions and epigenetic mechanisms [8]. Such early insults may alter the trajectory of airway growth and development in these children, with effects persisting into adulthood [4,[7][8][9].
The pathogenesis and link between spontaneous preterm delivery and BPD is poorly understood. Much of the progress made in the understanding of the causes of preterm labor and BPD has derived from hypothesis-driven research [3,[10][11][12][13]. Although this approach has yielded important information, we propose that using a hypothesis-free approach based on high-throughput analytical techniques has the potential to provide a more comprehensive description of the complex mechanisms and interactions behind these disorders [14]. Amniotic Fluid (AF) is an ideal matrix for characterizing maternal-fetal conditions and contains fetal lung fluid. AF is rich in low molecular weight metabolites, and this makes it an appropriate biological matrix for the application of metabolomics [15,16].
The aim of this exploratory study was to evaluate whether the untargeted metabolic profiling of AF in women with symptoms of preterm labor can be useful to investigate the risk of spontaneous preterm birth and BPD development in the offspring.
Study design and population
We conducted a study on 32 infants born from 32 mothers who had undergone an amniocentesis between 21 and 28 gestational weeks because of spontaneous preterm labor with intact membranes (due to PROM, chorioamnionitis, flow alterations or other causes). Amniocentesis had been performed at the participating institutions (Padova and Treviso general hospitals, Veneto region, Italy) to assess the microbial state of the amniotic cavity and to diagnose intraamniotic infection/inflammation [17]. Twins and newborns with congenital anomalies were excluded. The amniotic fluid samples were collected by the same physician (MTG) with a standardized procedure.
Twenty-four of the 32 AF samples were obtained by trans-abdominal amniocentesis; the other 8 were collected by amniocentesis at the time of cesarean delivery. Five milliliters of AF were collected, frozen and stored at −80°C until the time of the analysis. For the purposes of our data analysis the samples were considered as follows: 1. In the first step, we aimed to assess whether preterm delivery could be discriminated by the metabolomic profile of amniotic fluid, focusing on samples (n = 24/32) collected at least 1 day before birth. Among these, the metabolomic profiles of patients who delivered preterm (PTD group, n = 13/24) were compared with those of patients who delivered at term (TD group, n = 11/24).
2. The second step of the study consisted of determining whether amniotic fluid analysis could discriminate infants bound to develop BPD. Twenty-one samples from pregnancies which resulted in a preterm delivery (n = 21/32) were analyzed according to whether infants developed BPD (PTD with BPD, n = 10/21; PTD with no BPD, n = 11/21).
The study was approved by the Institutional Review Boards of the participating Institutions (Comitato Etico per la Sperimentazione, Padova and Treviso General Hospitals, protocol number 24139, Veneto Region, Italy). All the women gave their written informed consent to their AF being used for research purposes.
Clinical definitions
Spontaneous preterm labor was defined by the presence of regular uterine contractions associated with cervical changes occurring before 37 completed weeks of gestation and requiring hospitalization [2]. Clinical chorioamnionitis was defined as the presence of maternal fever, maternal and/or fetal tachycardia, elevated maternal CRP, uterine fundal tenderness, and purulent or foul-smelling amniotic fluid [18]. PTD was defined as birth before 37 weeks of gestation. Infants were followed until 3 months of life and BPD was defined as the need for supplemental oxygen at 36 weeks' postmenstrual age [19].

Chromatographic analysis and mass spectrometry

The metabolic analysis of the AF samples was performed with a Q-ToF Synapt G2 (Waters) high resolution mass spectrometer interfaced with a UPLC (Ultra Performance Liquid Chromatography) system (Acquity-Waters), characterized by high chromatographic resolution, short analytical times and enhanced sensitivity. The chromatographic analysis was performed on a reverse-phase HSS T3 column (Acquity HSS T3, Waters Co., Milford, MA, USA) at 40°C. MS analysis was conducted with an electrospray source (ESI) in both positive and negative ionization mode. Fig 1 shows the chromatographic profiles of an amniotic fluid sample. A detailed description of the chromatographic analysis and of the processing and pre-treatment of the data is reported in the S1 Material.
Statistical data analysis
After a preliminary exploratory data analysis on the clinical data (metadata) using Principal Component Analysis (PCA) and Projection to Latent Structures-Discriminant Analysis (PLS-DA) to exclude any confounding effects with respect to the clinical groups under investigation, we applied a new version of PLS-DA called orthogonally Constrained PLS-DA (oCPLS2-DA) [20] to the data sets generated by UPLC-MS; this method enables orthogonal constraints to be included in the latent variable calculation. A description of the method is provided in the S1 Material. The main advantage of using oCPLS2-DA in data modelling is the possibility of removing, by projection, the effects of potential factors that can influence the calculation of the latent variables. Specifically, the latent structure discovered by oCPLS2-DA is orthogonal to the factors used as constraints. In our study, the constraints were defined on the basis of the metadata in order to obtain models where the variation of the metabolite content of the collected samples is explained only by latent variables that are independent of the metadata. To simplify the interpretation of the models, we applied a post-transformation of the oCPLS2-DA model [21]. To avoid over-fitting and prove the robustness of the models obtained, we performed N-fold full cross-validation with different values of N (N = 6, 7, 8) and permutation tests on the class responses (500 random permutations), in accordance with good practice for model validation [22]. The results of the cross-validation procedure were expressed as Q2, while those of the permutation test were expressed as p-values. In addition, stability selection based on Monte-Carlo sampling was applied [23]. Specifically, 200 subsets were extracted from the collected samples by Monte-Carlo sampling (with a prior probability of 0.70) and used to build independent oCPLS2-DA models.
The performance in prediction of each model was estimated by ROC analysis of the outcomes obtained by predicting the samples which had been excluded during subsampling. The first 50 variables having the highest regression coefficients were selected for each independent oCPLS2-DA model. The variables selected for more than the 90% of all the models were investigated as putative markers and submitted to a t-test and ROC (Receiver Operating Characteristics) curve analysis. Since multivariate data analysis explores the correlation structure of the collected data while univariate data analysis investigates the properties of single variables, we also performed the latter using t-test with false discovery rate correction and ROC analysis in order to complement the results of the multivariate data analysis. The PCA and PLS-DA were performed using SIMCA 13 (Umetrics, Umea, Sweden). The R 3.0.2 platform (R Foundation for Statistical Computing) was used for univariate data analysis (t-test with false discovery rate correction and ROC analysis [24]), and user-written R functions enabled us to run the oCPLS2-DA and the post-transformation of the discriminant models.
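The two internal-validation steps described above (permutation testing of the class responses and Monte-Carlo stability selection) can be sketched in Python as follows. This is a hedged simplification: `score_fn` stands in for the cross-validated Q2 of an oCPLS2-DA model, and a simple between-class mean difference replaces the oCPLS2-DA regression coefficients used for ranking variables in the actual analysis.

```python
import random

def permutation_p_value(score_fn, X, y, n_perm=500, seed=0):
    """Permutation test on the class labels: the p-value is the fraction
    of random label permutations whose score matches or exceeds the score
    obtained with the true labels (score_fn stands in for the
    cross-validated Q2 of an oCPLS2-DA model, not reproduced here)."""
    rng = random.Random(seed)
    observed = score_fn(X, y)
    hits = sum(1 for _ in range(n_perm)
               if score_fn(X, rng.sample(y, len(y))) >= observed)
    return (hits + 1) / (n_perm + 1)  # add-one correction

def stability_selection(X, y, n_subsets=200, frac=0.70, top_k=50,
                        threshold=0.90, seed=0):
    """Monte-Carlo stability selection: subsample the rows (each kept with
    probability frac), rank variables on each subsample, and keep those
    ranked in the top_k on more than `threshold` of all subsets. A simple
    between-class mean difference replaces the oCPLS2-DA regression
    coefficients used in the actual analysis."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    for _ in range(n_subsets):
        idx = [i for i in range(n) if rng.random() < frac]
        if len({y[i] for i in idx}) < 2:
            continue  # a subsample must contain both classes
        scores = []
        for j in range(p):
            a = [X[i][j] for i in idx if y[i] == 1]
            b = [X[i][j] for i in idx if y[i] == 0]
            scores.append(abs(sum(a) / len(a) - sum(b) / len(b)))
        for j in sorted(range(p), key=lambda j: -scores[j])[:top_k]:
            counts[j] += 1
    return [j for j in range(p) if counts[j] > threshold * n_subsets]
```

Variables returned by `stability_selection` correspond to the putative markers retained when a variable ranks among the top coefficients in more than 90% of the subsampled models.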
Identification of relevant variables
To identify the relevant variables characteristic of each clinical group and emerging from the correlation loading plot, we searched the main available metabolome databases (Human Metabolome DataBase (HMDB) and METLIN), which enable comparisons to be drawn between the spectroscopic characteristics of the variables and those of known metabolites. This is the first in a series of steps that ultimately lead to the identification of potential key metabolites, and it enables both the chemical structure and the biological activity of the putative molecules to be hypothesized. More information on the parameters used for the identification of key metabolites is provided in the S1 Material. Table 1 displays the demographic and clinical characteristics of the infants included in the first and second steps of our data analysis, together with details regarding the mothers and their gestations. In 9 cases belonging to the 'PTD with BPD' group and in 8 belonging to the 'PTD without BPD' group, placental pathology was available. In these groups acute histologic chorioamnionitis was detected in 3 and 4 cases, respectively.
Results
Amniotic fluid metabolome in women who delivered preterm vs. those who delivered at term Among the samples collected at least 1 day before delivery, those associated with PTD (n = 13) were compared with those associated with term delivery (TD, n = 11). The negative data set included 1369 RT_mass variables, while the positive data set included 1742 RT_mass variables.
The following metadata were considered: maternal age at amniocentesis, maternal BMI, previous miscarriages, maternal therapy at amniocentesis (nifedipine, betamethasone, atosiban, progesterone), gestational age at amniocentesis and sex of newborn. The PCA model on the metadata did not reveal clusters corresponding to the two groups under investigation. The PLS-DA models constructed considering the metadata as the X-block were also unreliable in modeling the differences between the two groups. There was therefore no confounding effect between the metadata and the clinical groups. Reliable oCPLS2-DA models were built to explore the structured variation in the negative data set (A = 1+2 components, R 2 = 0.47, Q 2 6-folds = 0.25, Q 2 7-folds = 0.26, Q 2 8-folds = 0.31, p-value permutation test for Q 2 7-folds = 0.044, area under curve for ROC analysis at the 95% confidence level estimated by 7-fold full crossvalidation = 0.66-1.00, specificity estimated by 7-fold full cross-validation = 0.73, sensitivity estimated by 7-fold full cross-validation = 1.00. ROC analysis for predicted outcomes: mean area under curve = 0.65, mean specificity = 0.70, mean sensitivity = 0.79 ; Fig 2), and in the positive data set (A = 1+2 components, R 2 = 0.47, Q 2 6-folds = 0.38, Q 2 7-folds = 0.35, Q 2 8-folds = 0.37, p-value permutation test for Q 2 7-folds = 0.026, area under curve for ROC analysis at the 95% confidence level estimated by 7-fold full cross-validation = 0.67-1.00, specificity estimated by 7-fold full cross-validation = 0.82, sensitivity estimated by 7-fold full cross-validation = 0.85. ROC analysis for predicted outcomes: mean area under curve = 0.70, mean specificity = 0.83, mean sensitivity = 0.72; S1 Material: S1 Fig). Stability selection based on Monte-Carlo sampling enabled us to select a subset of 21 promising key metabolites. 
Univariate data analysis based on the t-test with false discovery rate correction (q-value threshold equal to 20%) and ROC analysis did not provide any additional features of interest.
By searching the available online metabolite databases and studying the fragmentation spectra, we were able to identify a subset of biochemicals that underpin the models we have found ( Table 2). The PTD group was characterized by higher levels of variables attributable to the following classes of compounds: amino acids and their derivatives, unsaturated hydroxy fatty acids (putative metabolite: 3-methoxybenzenepropanoic acid), oxylipins (putative metabolite: 4-hydroxy nonenal alkyne), fatty aldehydes (putative metabolite: muconic dialdehyde). On the other hand, the TD group was characterized by higher levels of variables related to phosphatidylcholine.
To avoid the possible confounding effect of metabolic processes closely related to delivery, the comparison between PTD and TD was also performed including only the samples collected at least 5 days prior to delivery (10 PTD and 11 TD). The results were similar (data not shown).
Amniotic fluid metabolome and subsequent risk of BPD development
Considering only the subjects who delivered preterm, a comparison between those who developed BPD (n = 10) and those who did not (n = 11) was undertaken. The negative data set included 1384 RT_mass variables, while the positive data set included 1826 RT_mass variables. The following metadata were considered: maternal age at amniocentesis, maternal BMI, previous miscarriages, maternal therapy at amniocentesis (nifedipine, betamethasone, atosiban, progesterone), gestational age at amniocentesis, sex of newborn and trans-abdominal amniocentesis method. The analysis of the metadata by PCA and PLS-DA excluded significant confounding effects between clinical groups and metadata. Reliable oCPLS2-DA models were built to explore the structured variation in the negative data set (A = 1 component, R 2 = 0.48, Q 2 6-folds = 0.36, Q 2 7-folds = 0.43, Q 2 8-folds = 0.42, p-value permutation test for Q 2 7-folds = 0.004, area under curve for ROC analysis at the 95% Fig 3), and in the positive data set (A = 1+1 components, R 2 = 0.55, Q 2 6-folds = 0.36, Q 2 7-folds = 0.38, Q 2 8-folds = 0.42, p-value permutation test for Q 2 7-folds = 0.022, area under curve for ROC analysis at the 95% confidence level estimated by 7-fold full cross-validation = 0.74-1.00, specificity estimated by 7-fold full cross-validation = 0.70, sensitivity estimated by 7-fold full cross-validation = 0.91. ROC analysis for predicted outcomes: mean area under curve = 0.71, mean specificity = 0.78, mean sensitivity = 0.79; S1 Material: S2 Fig). For both models, the predictive latent variable was not correlated with the presence of chorioamnionitis. Stability selection based on Monte-Carlo sampling enabled the selection of a subset of 19 key metabolites. Univariate data analysis based on the t-test with false discovery rate correction (q-value threshold equal to 20%) and ROC analysis produced no additional interesting features.
Searching the available online metabolite databases and studying the fragmentation spectra enabled us to identify a subset of putative metabolites ( Table 3). The PTD group with BPD featured higher levels of leucinic acid, hydroxy fatty acids (putative metabolites: 4-hydroxy-3-methylbenzoic acid and 2-hydroxy caprylic acid), oxy fatty acids (putative metabolite: 3-oxododecanoic acid), and a metabolite ascribable to a sulphated steroid. Compared to the PTD group with BPD, the group without BPD was characterized by higher levels of S-Adenosylmethionine, amino acid chains and 3b,16a-Dihydroxyandrostenone sulfate (DHEAS).
Discussion
This study provides proof-of-concept evidence that the development of BPD is associated with a dysregulated metabolic profile of amniotic fluid, and it suggests that metabolic profiling of amniotic fluid could be a useful tool to differentiate preterm delivery from term delivery. To our knowledge, no previous untargeted study based on high-dimensional biology techniques applied to amniotic fluid has investigated the relationship between AF composition and the respiratory outcome in newborns.
BPD has traditionally been attributed to an arrest in the alveolar and vascular maturation of the developing lung [7,25] and to the injury of lung tissue inflicted by the combination of barotrauma and oxygen toxicity. Today BPD typically affects neonates born weighing less than 1000 grams and it is essentially a developmental disorder in which the immature lung fails to reach its full structural complexity. Accumulating evidence suggests that particular insults sustained during fetal life (intra-amniotic inflammation/infections, placental dysfunction) can lead to preterm delivery [26] and affect lung development before birth [8,27,28].
Thus far, studies analyzing AF to elucidate the relationship between prenatal factors, PTD and BPD have used targeted analyses of a single mediator or a few mediators [3,[10][11][12][13]. Using protein assays, for instance, some pro-inflammatory cytokines and other molecules, including interleukin-6 (IL-6), interferon-gamma-inducible protein (IP)-10 [10], and matrix metalloproteinase (MMP)-8 [11], appeared to be expressed at higher concentrations in the AF of women who subsequently delivered preterm. At the same time, high values of IL-6, IL-8, IL-1β, tumor necrosis factor (TNF)-α in the AF seemed to confer a higher risk for subsequent BPD [12,13]. Although each of these mediators provides useful information, our understanding of the complex pathogenetic mechanisms underlying preterm parturition syndrome and BPD may draw advantage from a more global approach, such as untargeted metabolomic analysis of amniotic fluid. Metabolomics consists of the analysis of low molecular weight metabolites created by cellular metabolic pathways through the use of mass spectrometry or nuclear magnetic resonance spectroscopy. This analytical approach is not driven by any a priori hypothesis, thus permitting metabolic patterns characteristic of a given pathological condition to be identified and potential biomarkers in the metabolic profile to be recognized. As a result, new pathogenetic hypotheses may be formulated [29,30]. Appropriate statistical approaches are needed to extract information from the data set obtained. Specifically, multivariate methods have been introduced to integrate the results obtained by univariate statistical analysis. Unlike univariate analysis, multivariate statistical data analysis takes the correlation structure of the data collected into account, providing a holistic representation of the system under investigation.
This brings to light synergic effects between variables that go undetected if one variable is considered at a time [31]. The risk of preterm birth was recently assessed in a retrospective study through the use of MS-based metabolomics applied on human AF [14]. The results indicated that metabolomic analysis on AF can be a novel approach to distinguish pregnancies with spontaneous preterm labor and intact membranes who will deliver at term from those who deliver preterm, irrespective of any intra-amniotic infection/inflammation. The present study extends these findings, showing that the two constrained PLS-DA models (one for each data set analyzed) were able to establish from the AF metabolic profile which pregnancies with preterm labor would end with a preterm delivery. This suggests that the metabolic pattern of the AF might be useful for predicting the risk of PTD in women with an episode of PTL. Interestingly among the key metabolites identified in patients who delivered preterm, we found increased concentrations of 4-hydroxy nonenal alkyne, supporting the role of oxidative stress in the preterm parturition syndrome [32].
Of potentially greater interest, our study suggests that the onset of BPD may be associated with a perturbed AF metabolic pattern during intra-uterine life. Indeed, the AF metabolome seems to be capable of distinguishing within those who delivered preterm, infants who will develop BPD and those who will not. This finding supports the hypothesis that BPD is determined not only by lung immaturity and postnatal factors (e.g. barotrauma and oxygen toxicity), but also by antenatal factors that impair the maternal-fetal equilibrium and the physiology of lung development [8,9].
As putative key metabolites for BPD development, we identified some hydroxylated and oxidated organic acids. We therefore suggest that AF collected in women whose offspring are bound to develop BPD is characterized by a particular fatty acid profile that may have a pathogenic role in the onset of BPD. The BPD group was characterized by reduced concentrations of a variable ascribable to S-adenosyl methionine, which is a methyl donor for biochemical methylation reactions and a precursor of the antioxidant glutathione. A reduction of this metabolite has been associated with increased oxidative stress [33,34]. This finding suggests that among premature babies those exposed to higher levels of in-utero oxidative stress are the most likely to develop BPD. We also found higher levels of a metabolite ascribable to DHEAS in the group of PTD without BPD than in the group of PTD with BPD. This finding confirms previous targeted studies demonstrating an association between reduced levels of cortisol and DHEAS, indicative of adrenocortical insufficiency, and BPD development [35,36]. Notably, the agreement between our untargeted metabolomic approach and these previous targeted studies supports the potential for metabolomics in identifying relevant metabolites associated with preterm delivery and BPD development. A limitation of this study is the lack of a validation cohort. This was a descriptive study conducted in a well-characterized set of patients with the aim of comparing the overall AF metabolic fingerprint of the recruited groups. Being a descriptive study, no external validation set was included in the design. The reliability of our findings is supported by internal validation obtained through full cross-validation and the Monte-Carlo sampling procedure. Nonetheless we recognize that further studies are necessary to replicate our findings in an independent cohort.
Another potential limitation of the study is the sample size. However, the possible interference of relevant clinical variables (metadata) on the classification of our limited number of samples has been excluded by applying an appropriate statistical strategy, the orthogonally Constrained PLS-DA [20], which enabled us to infer that the group discrimination could be due only to the AF metabolite profile. In particular, the statistical data analysis allowed us to exclude the possibility that metabolic changes in the AF were due to the different origin of the samples, in accordance with a recent paper using 1 H NMR-based metabolomic profiling [37].
Although our study is descriptive and preliminary in nature, it leads the way to the identification of patients at risk for preterm delivery, as well as those at risk for BPD, the most important complication of prematurity. Every intervention in medicine begins with prediction before we can test the effect of preventive or therapeutic strategies. The knowledge of a predictive metabolic profile, and possibly the identification of specific biomarkers of prediction, shines a light on the biology underlying preterm labor paving the way to the early identification of newborns at high risk of BPD, for whom target therapeutic measures might be developed. We recognize that, at this stage, we can only speculate on the metabolic nature of the discriminating compounds and that further studies are needed to fully characterize the biochemical structure of the metabolites that emerged.
Conclusions
This study suggests that amniotic fluid metabolic profiling from mothers presenting with an episode of preterm labor may be a promising tool for identifying spontaneous preterm birth and fetuses at risk for developing BPD. Our findings strengthen the hypothesis that the injury responsible for BPD begins, at least partly, during the intra-uterine life. Further studies are required to validate the findings reported herein and understand the precise relationship between the differentially expressed metabolites and irreversible preterm parturition and the lung injury resulting in BPD.
Contour Detection Using Cost-Sensitive Convolutional Neural Networks
We address the problem of contour detection via per-pixel classification of edge points. To facilitate the process, the proposed approach leverages DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel, and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model to yield per-pixel image features. We propose to build on the DenseNet architecture to achieve pixelwise fine-tuning, and then apply a cost-sensitive strategy to further improve the learning with a small dataset of edge and non-edge image patches. In the experiments on contour detection, we examine the effectiveness of combining per-pixel features from different CNN layers and obtain performance comparable to the state-of-the-art on BSDS500.
INTRODUCTION
Contour detection is fundamental to a wide range of computer vision applications, including image segmentation (Arbelaez et al., 2011), object detection and recognition (Shotton et al., 2008). The task is often carried out by exploring local image cues, such as intensity, color gradients, texture or local structures (Canny, 1986; Martin et al., 2004; Mairal et al., 2008; Arbelaez et al., 2011). One recent line of work, for example, uses structured random forests to learn local edge patterns, and reports current state-of-the-art results with impressive computational efficiency. More recently, object cues are also considered in Kivinen et al. (2014) and Ganin & Lempitsky (2014) to further boost the performance. Despite the constant evolvement of relevant techniques in better solving the problem, seeking an appropriate feature representation remains the cornerstone of such efforts. We are thus motivated to propose a new learning formulation that can generate suitable per-pixel features for more satisfactorily performing contour detection.
We consider deep neural networks to construct a desired per-pixel feature learner. In particular, since the underlying task is essentially a classification problem, we adopt deep convolutional neural networks (CNNs) to establish a discriminative approach. However, one subtle deviation from typical applications of CNNs should be emphasized. In our method, we intend to use the CNN architecture, e.g., AlexNet (Krizhevsky et al., 2012), to generate features for each image pixel, not just a single feature vector for the whole input image. Such a distinction would call for a different perspective of parameter fine-tuning so that a pre-trained per-image CNN on ImageNet (Deng et al., 2009) can be adapted into a new model for per-pixel edge classifications. To further investigate the property of the features from different convolutional layers and from various ensembles, we carry out a number of experiments to evaluate their effectiveness in performing contour detection on the benchmark BSDS Segmentation dataset (Martin et al., 2001).
The organization of the paper is as follows. Section 2 covers related work on contour detection and deep convolutional neural networks. In Section 3, we describe the overall model for learning per-pixel features and useful techniques for fine-tuning the parameters. Section 4 provides detailed experimental results and comparisons to demonstrate the advantages of our method. In Section 5 we discuss the key ideas of the proposed techniques and possible future research efforts.

Under review as a conference paper at ICLR 2015

RELATED WORK

As stated, we focus on using a deep convolutional neural network to achieve feature learning for improving contour detection. The survey of relevant work is thus presented to give an insightful picture of the recent progress in each of the two areas of emphasis.
CONTOUR DETECTION
Early techniques for contour detection (Fram & Deutsch, 1975;Canny, 1986;Perona & Malik, 1990) mainly concern local image cues, such as intensity and color gradients. Amongst them, the Canny detector (Canny, 1986) stands out for its simplicity and accuracy owing to exploring the peak gradient magnitude orthogonal to the contour direction. Detailed discussions about these approaches can be found in, e.g., Bowyer et al. (1999). Subsequent work along this line (Martin et al., 2004;Mairal et al., 2008;Arbelaez et al., 2011) also identifies that textures are useful local cues for increasing the detection accuracy.
Apart from detecting local cues, learning-based techniques form a notable group in addressing this intriguing task (Dollár et al., 2006;Mairal et al., 2008;Zheng et al., 2010;Xiaofeng & Bo, 2012;Lim et al., 2013;Kivinen et al., 2014;Ganin & Lempitsky, 2014). Dollár et al. (2006) adopt a boosted classifier to independently label each pixel by learning its surrounding image patch. Zheng et al. (2010) analyze the combination of low-, mid-, and high-level information to detect object-specific contours. In addition, Xiaofeng & Bo (2012) propose to compute sparse code gradients and successfully improve Arbelaez et al. (2011). Lim et al. (2013) classify edge patches into sketch tokens using random forest classifiers, which can capture local edge structures. Isola et al. (2014) consider pointwise mutual information to extract global object contours. Their results display crisp and clean contours. As in Lim et al. (2013), structured random forests have also been used to learn edge patches, achieving current state-of-the-art results in both accuracy and efficiency.
More relevant to our approach, Kivinen et al. (2014) and Ganin & Lempitsky (2014) learn contour information with deep architectures. Kivinen et al. (2014) encode and decode contours using multilayer mean-and-covariance restricted Boltzmann machines. Ganin & Lempitsky (2014) establish a deep architecture, which is composed of convolutional neural networks and nearest neighbor search, and obtain convincing results. Different from Ganin & Lempitsky (2014), we strive to design fine-tuning mechanisms with a small dataset for adapting an ImageNet pre-trained convolutional neural network to produce per-pixel image features. As we will see later, this effort leads to state-of-the-art results of contour detection on the benchmark testing.
CONVOLUTIONAL NEURAL NETWORKS
Noticeably, CNNs are popularized by LeCun and colleagues who first apply CNNs to digit recognition (LeCun et al., 1989), OCR (LeCun et al., 1998) and generic object recognition (Jarrett et al., 2009). In contrast to using hand-crafted features, CNNs learn discriminative features and exhibit hierarchical semantic information along their deep architecture.
The AlexNet by Krizhevsky et al. (2012) is perhaps the most popular implementation of CNNs for generic object classification. The model has been shown to outperform competing approaches based on traditional features in solving a number of mainstream computer vision problems. In Turaga et al. (2010) and Briggman et al. (2009), CNNs are used for image segmentation. To extend CNNs for object detection, Farabet et al. (2013) utilize CNNs for semantic segmentation. Sermanet et al. (2013a) use CNNs to predict object locations via sliding window, while learning multi-stage features of CNNs for pedestrian detection is proposed in Sermanet et al. (2013b). Girshick et al. (2013) also consider features from a deep CNN in a region proposal framework to achieve state-of-the-art object detection results on the PASCAL VOC dataset.
While CNNs thrive in generic object recognition and detection, less attention has been paid to applications demanding per-pixel processing, such as contour detection and segmentation. Our method exploits the AlexNet model for contour detection and explores its per-pixel fine-tuning with a small dataset. Recently, and independently from our work, generating per-pixel features based on CNNs can also be found in Hariharan et al. (2014) and Long et al. (2014).
PER-PIXEL CNN FEATURES
Learning features by employing a deep neural network architecture has been shown to be effective, but most of the existing techniques focus on yielding a feature vector for an input image (or image patch). Such a design may not be appropriate for vision applications that require investigating image characteristics at the pixel level. In the problem of contour detection, the central task is to decide whether an underlying pixel is an edge point or not. Thus, it would be convenient if the deep network could yield per-pixel features.
We propose to construct a multiscale CNN model for contour detection. To this end, we extract per-pixel CNN features from AlexNet (Krizhevsky et al., 2012) using DenseNet (Iandola et al., 2014), and pixelwise concatenate them to feed into a support vector machine (SVM) classifier. In particular, DenseNet provides fast multiscale feature pyramid extraction for any Caffe convolutional neural network (Jia et al., 2014) and the convenience of working with images of arbitrary size. To extract per-pixel features, we upsample the feature maps from the first convolutional layer (Conv1) to the fifth convolutional layer (Conv5) to the original size of the input image. We then pixelwise stack the features from different convolutional layers to constitute the per-pixel features. Depending on the selection of the convolutional layers, the resulting feature vector at each pixel would encode different levels of information about an underlying pixel. Figure 1 illustrates the case of concatenating features from all five convolutional layers to form a 1376-D feature vector at each pixel.
To decide whether a pixel, say (i, j), is a contour point, one can now readily feed its corresponding feature vector to an SVM classifier. In practice, it is useful to include information from neighboring pixels so that local contour structures can be better distinguished. We consider the following eight neighboring pixels (i ± k, j), (i, j ± k), (i ± k, j ± k) and append, starting from (i − k, j − k), their respective feature vectors to that of (i, j) in clockwise order. In our implementation, we have tested k = 1, 2, 3, which correspond to image patches of size 3 × 3, 5 × 5 and 7 × 7, respectively.
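The neighbor-concatenation step above can be sketched as follows. The zero-padding of out-of-bounds neighbors at image borders is our assumption; border handling is not specified in the text.

```python
def with_neighbor_features(feat, i, j, k=1):
    """Concatenate the feature vector of pixel (i, j) with those of its
    eight neighbors at offset k, appended clockwise starting from
    (i - k, j - k). feat is a 2-D grid (rows x cols) of per-pixel feature
    vectors; out-of-bounds neighbors are zero-padded (an assumption, as
    border handling is not specified in the text)."""
    h, w, d = len(feat), len(feat[0]), len(feat[0][0])
    offsets = [(-k, -k), (-k, 0), (-k, k), (0, k),
               (k, k), (k, 0), (k, -k), (0, -k)]  # clockwise from top-left
    out = list(feat[i][j])
    for di, dj in offsets:
        ni, nj = i + di, j + dj
        out.extend(feat[ni][nj] if 0 <= ni < h and 0 <= nj < w else [0.0] * d)
    return out
```

With d-dimensional per-pixel features, the combined vector is 9d-dimensional, matching the eight-neighbor scheme described above.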
DENSENET FEATURE PYRAMIDS
We use DenseNet for CNN feature extraction because of its efficiency, flexibility, and availability.
DenseNet is an open source system that computes dense and multiscale features from the convolutional layers of a Caffe CNN-based object classifier. The process of feature extraction proceeds as follows. Given an input image, DenseNet computes its multiscale versions and stitches them onto a large plane. After processing the whole plane with the CNN, DenseNet unstitches the descriptor planes to obtain multiresolution CNN descriptors.
The dimensions of the convolutional feature maps are fixed fractions of the image size, e.g., one-fourth for Conv1 and one-eighth for Conv2. We rescale the feature maps of all the convolutional layers to the image size, so that there is a feature vector at every pixel. As illustrated in Figure 1, the resulting feature vector has dimension 1376 × 1, formed by concatenating Conv1 (96 × 1), Conv2 (256 × 1), Conv3 (384 × 1), Conv4 (384 × 1), and Conv5 (256 × 1).
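The rescale-and-stack construction can be sketched as follows. The channel counts match the text; nearest-neighbor upsampling and random feature maps are illustrative stand-ins (the interpolation method actually used is not stated here).

```python
import numpy as np

def upsample(fmap, H, W):
    """Nearest-neighbor upsampling of an (h, w, c) feature map to (H, W, c)."""
    h, w, _ = fmap.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return fmap[rows][:, cols]

# Channel counts of the five convolutional layers, as in the paper.
channels = {"conv1": 96, "conv2": 256, "conv3": 384, "conv4": 384, "conv5": 256}

H = W = 32  # toy image size
rng = np.random.default_rng(1)
maps = {name: rng.normal(size=(H // 4, W // 4, c)) for name, c in channels.items()}

# Upsample every layer's map to the image size and stack along channels.
perpixel = np.concatenate([upsample(m, H, W) for m in maps.values()], axis=-1)
print(perpixel.shape)  # (32, 32, 1376)
```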
For classification, we first concatenate features from the surrounding eight pixels to incorporate information about the local contour structure, and then use the combined per-pixel feature vectors to train a binary linear SVM. Specifically, in our multiscale setting, we train the SVM on only the original resolution. At test time, we classify test images at both the original and double resolutions, and average the two resulting edge maps for the final output of contour detection.
PER-PIXEL FINE-TUNING
To fine-tune parameters for per-pixel contour detection, we exclude the two fully-connected layers of the ImageNet pre-trained CNN model, because these layers restrict the input image size and, consequently, the overall architecture. We keep only the five convolutional layers and, on top of Conv5, add a new 2-way softmax layer for edge classification.
Specifically, the input image size of the ImageNet pre-trained CNN model is 227 × 227, which is not suitable for our per-pixel design, as each map in the Conv5 layer would still be 13 × 13. In addition, we need to remove padding from the CNN to be consistent with DenseNet, which does not use padding (except on the input plane). To carry out per-pixel fine-tuning, we first generate a set of edge and non-edge patches. The image (patch) size is set to 163 × 163, which reduces to 1 × 1 in Conv5, at which point the 2-way softmax layer can properly compute the per-pixel probability of being a contour point. Note that the loss for back-propagation is computed from the label prediction and the ground truth of the center pixel of the 163 × 163 input patch.
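The claimed size reduction can be checked by propagating the patch size through the padding-free layers. The kernel and stride values below are the standard AlexNet ones; treating the layer sequence as conv1-pool1-conv2-pool2-conv3-conv4-conv5 is an assumption consistent with the description.

```python
# Output size of a padding-free conv/pool layer: floor((i - k) / s) + 1.
def out_size(i, k, s):
    return (i - k) // s + 1

layers = [("conv1", 11, 4), ("pool1", 3, 2),
          ("conv2", 5, 1), ("pool2", 3, 2),
          ("conv3", 3, 1), ("conv4", 3, 1), ("conv5", 3, 1)]

size = 163
for name, k, s in layers:
    size = out_size(size, k, s)
    print(name, size)
# 163 -> 39 -> 19 -> 15 -> 7 -> 5 -> 3 -> 1: Conv5 ends at exactly 1x1,
# so the 2-way softmax sees a single spatial cell.
```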
COST-SENSITIVE FINE-TUNING
Compared with the number of parameters in DenseNet, the size of the training set of edge and non-edge patches is relatively small. Using the aforementioned per-pixel fine-tuning alone is usually insufficient to achieve good performance. Still, when addressing edges, it is evident that certain underlying features are especially crucial for distinguishing edges from non-edges. To further learn these subtle features from a small database, we adopt the concept of cost-sensitive learning. The original 2-way softmax training cost is the negative log-likelihood cost

$$C(\theta) = -\sum_i \log p(y_i \mid x_i; \theta), \qquad (1)$$

where $x_i$ is the input image patch, $\theta$ denotes the parameters of the CNN, and $y_i$ is the binary (0 or 1) edge label. This cost is computed above the 2-way softmax layer, and is back-propagated to train all convolutional layers. To apply cost-sensitive fine-tuning, we consider a biased negative log-likelihood cost

$$C(\theta) = -\sum_i \left[\, \alpha\, y_i \log p(y_i \mid x_i; \theta) + \beta\, (1 - y_i) \log p(y_i \mid x_i; \theta) \,\right], \qquad (2)$$

where α and β are respectively the biases for positive (edge) and negative (non-edge) training data. If α = 1 and β = 1, (2) reduces to the original negative log-likelihood cost in (1). In our approach, we set α = 2β for positive cost-sensitive fine-tuning, and 2α = β for negative cost-sensitive fine-tuning. Notice that, rather than directly back-propagating with (2), a convenient alternative strategy is to create biased sampling for fine-tuning with (1). That is, for positive cost-sensitive fine-tuning, we sample twice as many edge patches as non-edge ones, and vice versa for negative cost-sensitive fine-tuning.
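The cost-sensitive bias can be sketched numerically. The functional form below is our reconstruction of a biased negative log-likelihood of the kind described (the extracted text omits the equations); `p` denotes the softmax probability of the edge class.

```python
import numpy as np

def biased_nll(p, y, alpha=1.0, beta=1.0):
    """Biased negative log-likelihood over 2-way softmax outputs.
    p: predicted probability of the edge class; y: 0/1 edge labels.
    alpha weights edge (positive) terms, beta non-edge (negative) terms."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(alpha * y * np.log(p) + beta * (1 - y) * np.log(1 - p))

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=100)
p = rng.uniform(0.01, 0.99, size=100)

plain = biased_nll(p, y)                            # alpha = beta = 1
positive = biased_nll(p, y, alpha=2.0, beta=1.0)    # alpha = 2*beta
negative = biased_nll(p, y, alpha=1.0, beta=2.0)    # 2*alpha = beta

# With alpha = beta = 1 the biased cost reduces to the plain NLL.
assert np.isclose(plain, biased_nll(p, y, 1.0, 1.0))
```

Doubling either bias only increases the corresponding class's (nonnegative) contribution, which is why biased sampling, drawing twice as many patches of the favored class, approximates the same effect in expectation.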
FINAL FUSION MODEL
The overall framework is an ensemble model. We combine an ImageNet pre-trained model, a per-pixel fine-tuned model, a positive cost-sensitive fine-tuned model, and a negative cost-sensitive fine-tuned model. We use a heuristic branch-and-bound scheme to decide the fusion coefficients. The idea of fusing different training models is to capture different aspects of features. It is worth mentioning that the improvement owing to the model fusion indicates that the various fine-tunings have their own merits in feature learning and are all useful in this respect.
EXPERIMENT RESULTS
We test our method on the Berkeley Segmentation Dataset and Benchmark (BSDS500) (Martin et al., 2001;Arbelaez et al., 2011). To better assess the effects of the various fine-tuning techniques, we report their respective performance of contour detection. Comparisons with other competitive methods are also included to demonstrate the effectiveness of the proposed model.
The BSDS500 dataset is the current de facto standard image collection for contour detection. The dataset contains 200 training, 100 validation, and 200 testing images. Boundaries in each image are labeled by several workers and are averaged to form the ground truth. The accuracy of contour detection is evaluated by three measures: the best F-measure on the dataset for a fixed threshold (ODS), the aggregate F-measure on the dataset for the best threshold in each image (OIS), and the average precision (AP) on the full recall range (Arbelaez et al., 2011). Prior to evaluation, we apply a standard non-maximal suppression technique to edge maps to obtain thinned edges (Canny, 1986).
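The distinction between the ODS and OIS measures can be made concrete with toy precision/recall values; the numbers below are illustrative only.

```python
import numpy as np

def f_measure(p, r):
    """Harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# Per-image (precision, recall) pairs at three thresholds (toy numbers).
curves = [
    [(0.9, 0.3), (0.7, 0.6), (0.5, 0.8)],   # image 1
    [(0.8, 0.4), (0.6, 0.7), (0.4, 0.9)],   # image 2
]

# ODS: one threshold fixed for the whole dataset (best average F).
ods = max(np.mean([f_measure(*img[t]) for img in curves])
          for t in range(len(curves[0])))
# OIS: pick the best threshold per image, then aggregate.
ois = np.mean([max(f_measure(p, r) for p, r in img) for img in curves])

assert ois >= ods  # per-image tuning can never do worse than a fixed threshold
```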
ON FINE-TUNING
The parameter fine-tuning is done on a server with a GeForce GTX Titan Black GPU card. We set the overall learning rate to one-tenth of the original ImageNet pre-trained learning rate, and the softmax learning rate to ten times the overall learning rate. The modification to the proposed per-pixel fine-tuning speeds up the parameter fine-tuning process. It takes 3 days to finish 100,000 iterations of per-pixel fine-tuning, while traditional fine-tuning requires more than 10 days. For both traditional fine-tuning and per-pixel fine-tuning, we sample 500 boundary (edge) and 500 non-boundary (non-edge) patches per training image. For positive cost-sensitive fine-tuning, we sample 1000 boundary patches and 500 non-boundary patches per training image, while for negative cost-sensitive fine-tuning we sample 500 boundary patches and 1000 non-boundary patches per training image.
We report the results of the various fine-tuning techniques in Table 1(a). The experiments use only Conv5 features, and are carried out with SVM classification. Since this setting is most similar to a softmax fine-tuning architecture, we can directly observe the effectiveness of fine-tuning. The experimental results show that, compared with the baseline (the pre-trained model), traditional fine-tuning, which is the original fine-tuning architecture with padding in every layer, degrades overall performance by 0.2 to 0.4. This implies that traditional per-image fine-tuning is not appropriate for learning per-pixel features for per-pixel applications. On the other hand, per-pixel fine-tuning improves the performance by about 0.15 in all measurements. Pertaining to cost-sensitive fine-tuning, when compared with per-pixel fine-tuning, positive fine-tuning slightly degrades performance and negative fine-tuning slightly improves it. One possible explanation is that there are relatively more non-boundary regions than boundary points, so features learned for non-boundary regions improve the overall performance. However, if we combine features of positive and negative fine-tuning, the performance is significantly boosted again by 0.2. The performance gain signifies the complementary property of positive and negative fine-tunings, as expected.
[Table 2 fragment (ODS/OIS/AP): gPb (Arbelaez et al., 2011) .73/.76/.73; Sketch Tokens (Lim et al., 2013) .73/.75/.78; SCG (Xiaofeng & Bo, 2012) .74/.76/.77; DeepNet (Kivinen et al., 2014) .74/.76/.76; PMI+sPb, MS (Isola et al., 2014) .74/.77/.78; N4-fields (Ganin & Lempitsky, 2014): values missing.]
In conclusion, per-pixel fine-tuning raises the performance of per-pixel applications. Also, the combination of positive and negative cost-sensitive fine-tunings improves the classification performance the most, which supports the advantage of using an ensemble fine-tuning model.
ON FEATURES IN DIFFERENT LAYERS
We next conduct experiments to show how features from different convolutional layers contribute to the performance. In Table 1(b), we see that features in the second convolutional layer contribute the most, followed by the third and the fourth layers. This suggests that low- to mid-level features are most useful for contour detection, while the lowest- and higher-level features provide an additional boost. Although features in the first and the fifth convolutional layers are less effective when employed alone, we achieve the best results by combining all five streams. This indicates that the local edge information in low-level features and the object contour information in higher-level features are both necessary for achieving high performance in contour detection tasks.
CONTOUR DETECTION RESULTS AND COMPARISONS
Finally, we show the experimental results of our pre-trained model and final fusion model. In Table 2, we report the contour detection performances on BSDS500 by our methods and seven competitive techniques, including gPb (Arbelaez et al., 2011), Sketch Tokens (Lim et al., 2013), Sparse Code Gradients (Xiaofeng & Bo, 2012), DeepNet (Kivinen et al., 2014), Pointwise Mutual Information (Isola et al., 2014), N4-fields (Ganin & Lempitsky, 2014), and Structured Edges. The experiments show that our 5-stream ImageNet pre-trained model (using features from all five convolutional layers) already achieves impressive results for contour detection on the ODS and OIS measurements. The proposed fine-tuning techniques further improve the performance. In particular, the final ensemble model improves from 0.75 to 0.76 on the ODS measurement, and from 0.77 to 0.78 on the OIS measurement. It also achieves state-of-the-art performance on the AP measurement.
In Figure 2, we include a number of contour detection examples for qualitative visualization.
DISCUSSION
In this work, we describe how to tailor the DenseNet architecture to per-pixel computer vision problems, such as contour detection. We propose fine-tuning techniques to more effectively carry out parameter learning with a per-pixel based cost function and to overcome the limitation of using a small training set. The resulting cost-sensitive model appears to be promising for generating useful per-pixel feature vectors and should be useful for computer vision applications that require analyzing local image properties. An interesting future research direction is to establish a proper dimensionality reduction framework for the resulting high-dimensional per-pixel feature vectors and to examine its effects on the performance of contour detection.
"year": 2014,
"sha1": "200193dc8cb2ab4742a4dd20ffda43c8f7adb97d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "200193dc8cb2ab4742a4dd20ffda43c8f7adb97d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Natural Landscape, Infrastructure, and Health: The Physical Activity Implications of Urban Green Space Composition among the Elderly
Urban green spaces (UGS) have been linked with a series of benefits for the environment and for the physical health and well-being of urban residents. This is of great importance in the context of the aging of modern societies. However, UGS have different forms and characteristics that can determine their utilization. Common elements of UGS such as the type of vegetation and the type of surface are surprisingly understudied with regard to their relationship with the type of activity undertaken in UGS. This paper aims to explore the relationship of landscape diversity and surface type with the time spent in UGS and the physical activity intensity performed by seniors. To do so, this study uses GPS tracking data in combination with accelerometer data gathered from 63 seniors residing in Barcelona, Spain. Results showed that senior participants spent little time inside the analyzed UGS and that sedentary behaviors (SBs) were more common than physical activities (PAs). The presence of pavement surfaces positively influenced the total time spent in UGS, while gravel surfaces were negatively associated with time spent in active behaviors. The provision of well-defined and maintained paved areas and paths is a key infrastructure consideration when designing UGS for urban residents overall and, especially, when aiming to improve access for senior visitors.
Introduction
The provision of Urban Green Spaces (UGS) has been noted as a key strategy for policy-makers towards the achievement of sustainable urban development and the improvement of health and well-being for urban residents [1,2]. Being in contact with nature in UGS has been shown, among many health benefits, to lessen physiological stress [3,4]; to boost social interactions [5]; and to mitigate air pollution, heat, and noise levels [6,7]. In addition, UGS have also been associated with increased physical activity (PA) levels [8], as natural environments act as a facilitator of healthy habits since they provide easy access to safe and engaging settings for practices such as walking or sports [9][10][11]. Both physical and mental health benefits linked to UGS are also fundamental for tackling the challenges of aging in modern societies [12]. The possibility to perform PAs in UGS has been associated with decreases in the incidence of cardiovascular diseases, morbidity, and chronic diseases and also with the improvement of functional capacity and cognition among the senior population [13,14].
Public parks and gardens are the most common and most researched types of UGS around the world [8,15]. Nevertheless, the characteristics and composition of these types of UGS can vary widely depending on landscape diversity and the availability of facilities, which can influence the type of activities that can be undertaken and the health benefits derived from a particular UGS. Facilities such as playgrounds or sports fields have been widely linked with increased levels of PA by UGS visitors [8,16,17]. However, less evidence exists regarding other characteristics of UGS such as the type of surface, which is of high importance for senior visitors when walking and being physically active [18]. Additionally, considering that the main function of UGS is offering visitors the possibility to experience vegetation and natural elements [19,20], few studies have focused on the link between the type of vegetation and the type of activities performed [21][22][23].
The presence of different vegetation and surface types can make UGS more or less attractive to visit or to perform certain activities in, which can influence the health benefits associated with UGS. Nowadays, a large number of tools are at hand for the multi-dimensional quality appraisal of UGS [8]. Vegetated open areas such as grasslands or meadows act as recreational places where social interactions or sports activities may occur [24][25][26]. Likewise, the provision of shaded areas by way of trees has been shown to positively influence walking through or sitting in these areas [27][28][29][30][31]. Further, not all types of shrubs are positive for safety; only low-lying shrubs tend to be acceptable [32].
The quality and quantity of facilities and amenities are also important characteristics that allow the performing of different types of activities [11]. However, a crucial and usually understudied characteristic especially relating to seniors involves the types of walkable surfaces in UGS. Well-defined and long pathways are basic facilities of UGS that are associated with increased PA levels among seniors compared to other areas such as grasslands [33]. Paved paths as well as walkable surfaces have proven to encourage activities such as walking, cycling, or dog-walking [11,34,35]. This is especially important among senior visitors, since they are sensitive to uneven surfaces, preferring to use paved paths due to their greater steadiness and sense of security [36].
The accessibility and size of UGS are other characteristics that may encourage or discourage visiting, improve the experience, and influence the type of activities being practiced. The distance from visitors' homes to UGS is an important predictor for how often they use certain UGS, and closer distances have been associated with improved physical and mental health indicators among urban residents [37][38][39][40]. On the other hand, larger UGS have been linked with a higher availability of amenities, a better provision of facilities, a higher availability of space in which to be physically active, and increased PA levels [11,41].
In order to fill these research gaps, this study aimed to analyze the influence of vegetation and surface type on the PA of senior UGS visitors. Specifically, the analysis explores the effect of the type of vegetation and the different composition of walkable surfaces on the time spent in different physical intensity levels. To do so, GPS tracking data in combination with accelerometer data were used in this study to obtain the geocoded locations and PA intensity performed within a set of UGS in the city of Barcelona, Spain.
Study Area
Barcelona is a Spanish Mediterranean coastal city located in the northeast of the Iberian Peninsula (41°23′02″ N, 02°07′59″ E). With an area of 102.16 km², Barcelona lies between the confluence of the Llobregat and Besòs rivers to the southwest and northeast, respectively, and to the east of Collserola Natural Park [42,43].
The city contains 1135 ha of UGS in the form of parks and gardens, which equals 7.1 m² of green space per inhabitant [44]. This is a small area per person compared to other European cities, with areas reaching 300 m² per inhabitant in some cases, especially in the north of Europe [45]. These low levels of green space provision contrast with relatively high levels of greenness in squares and streets. In 2011, Barcelona had a ratio of 98.36 street trees per 1000 inhabitants [46], surpassing many European cities, where ratios vary between 50 and 80 trees per 1000 inhabitants [47]. Of the vegetation present in UGS, 22.6% is indigenous, with a predominance of Quercus ilex, Pinus halepensis, and Platanus × acerifolia, which account for 49% of urban trees in the city [42].
Participants and Study Design
A total of 269 participants were recruited from different senior centers between June 2016 and June 2017. All of Barcelona's senior centers, both public (n = 21) and charity-run (n = 15), were contacted, of which nearly half agreed to participate (n = 14). The sample of senior centers was balanced between those located in high- and low-income neighborhoods. After explaining the conditions of participation in each of the centers, equitable participation in terms of gender was sought (153 females vs. 116 males). Participants were given an informative document about the project and were asked to sign an informed consent form before being provided with an accelerometer, to be worn on the wrist, together with a GPS logger to carry for 7 consecutive days. The devices recorded their daily routes, locations, and PA levels, and were returned after 7 days. Participants were also asked to fill out a questionnaire about their sociodemographic profile, daily mobility, and PA habits, together with their perceptions of their neighborhood. From the 269 initial participants, we excluded 147 who lived outside the city boundaries and only retained the participants who had visited at least one UGS during the study (n = 63). For the purpose of this study, tracking and PA data provided by GPS devices and accelerometers were combined with publicly available GIS data from Barcelona's City Council to extract the use of UGS in Barcelona by senior residents. The study received Autonomous University of Barcelona (UAB) institutional review board approval (CEEAH-3656).
GIS Data
We used the Land Use Map of Barcelona [48] to extract all the polygons categorized as Parks and Gardens (see Figure 1). Those polygons that were less than 10 meters away from each other were merged. Only areas that were accessible within the city's urban continuum and were over one hectare in size were selected for this study. While there is no general agreement on a critical size threshold of UGS for specific health benefits [2], the selected threshold of one hectare fits within the thresholds of international and national health recommendations (between 0.5 and 2 ha) [49,50]. All UGS meeting these criteria were visited to account for any differences between urban planning and the built reality. After the merging process, 122 UGS were obtained. The space inside these 122 UGS was divided and classified according to vegetation and surface type (see Figure 2). Vegetation diversity was identified by classifying areas within UGS as mostly provided with forest, shrubland, or grassland, while the different surface types were divided into pavement, mix surfaces, or gravel soils. Water elements were also identified and classified. The field validations of the GIS data were performed by 8 fieldwork technicians between April and May 2017. Finally, we filtered out all the UGS having no GPS tracking points from participants and kept the UGS that had attracted at least one visit. As can be seen in Table 1, within the final sample of 122 visited UGS, forest was the most represented vegetation type with 47.2% of the total area, followed by pavement with 16.4% and mix surfaces with 14%. Shrubland and grassland accounted for 6.7% and 6.8%, respectively, with water (4.4%) and gravel (4.3%) being less abundant.
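The merging and size-filtering rules can be sketched with simplified geometries. The function names and the reduction of polygons to axis-aligned rectangles are ours, purely for illustration; a real pipeline would operate on GIS polygon geometries.

```python
# Merge green-space rectangles lying less than 10 m apart (union-find),
# then keep merged areas of at least one hectare (10,000 m^2).
def rect_gap(a, b):
    """Shortest distance between two (xmin, ymin, xmax, ymax) rectangles."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx ** 2 + dy ** 2) ** 0.5

def merge_ugs(rects, max_gap=10.0, min_area=10_000.0):
    parent = list(range(len(rects)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            if rect_gap(rects[i], rects[j]) < max_gap:
                parent[find(i)] = find(j)
    groups = {}
    for i, r in enumerate(rects):
        area = (r[2] - r[0]) * (r[3] - r[1])
        groups[find(i)] = groups.get(find(i), 0.0) + area
    return [a for a in groups.values() if a >= min_area]

# Two 0.6 ha patches 8 m apart merge into one 1.2 ha UGS;
# an isolated 0.3 ha patch is dropped by the one-hectare filter.
rects = [(0, 0, 100, 60), (108, 0, 208, 60), (500, 500, 560, 550)]
print(merge_ugs(rects))  # [12000.0]
```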
GPS Tracking and Accelerometer Data
The collected GPS tracking data reflected the position of participants in 15-second intervals with a dynamic median accuracy of 2.9 m [52], thanks to the use of Qstarz Q-1000XT GPS loggers (Qstarz International Co., Ltd., Taiwan, R.O.C.). Accelerometer data were collected using ActiGraph GT3X devices (ActiGraph LLC, Pensacola, Florida, USA) and, for the present study, were classified into two PA intensity categories: sedentary and active. Following Esliger et al. [53], a threshold of <216 vector magnitude (VM) counts per minute was used to define sedentary time, while 216 VM counts per minute or more were classified as 'active'. GPS and accelerometry data were merged using PALMS software [54] to couple spatial, temporal, and PA-related information.
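The cut-point classification described here can be sketched directly; the function name is illustrative.

```python
import numpy as np

# Cut-point used in the study, after Esliger et al.:
# < 216 vector-magnitude (VM) counts per minute = sedentary, else active.
SEDENTARY_CUTPOINT = 216

def classify(vm_counts_per_min):
    """Label each accelerometer epoch as sedentary or active."""
    vm = np.asarray(vm_counts_per_min)
    return np.where(vm < SEDENTARY_CUTPOINT, "sedentary", "active")

labels = classify([0, 100, 215, 216, 500])
print(labels)  # ['sedentary' 'sedentary' 'sedentary' 'active' 'active']
```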
Measurement of the Use of UGS
An initial dataset with 5,423,327 GPS points was compiled by sampled participants during the data collection process. This number was reduced to 33,260 by eliminating all those points outside a UGS (see Figure 3). The dataset was further reduced to 14,323 points after selecting the points of those participants who visited a UGS for at least 3 consecutive minutes [55]. Finally, since the devices were not waterproof, points registered in water areas were removed from the database to avoid GPS accuracy errors. Thus, the final dataset contained 14,227 tracking points, which represented 290 visits by 63 users to 61 different UGS.
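Given 15-second GPS epochs, the three-consecutive-minute visit filter amounts to keeping runs of at least 12 consecutive in-UGS fixes. A minimal sketch (variable and function names are illustrative):

```python
from itertools import groupby

# A visit of >= 3 consecutive minutes at one fix every 15 s means a run
# of at least 12 consecutive in-UGS points.
MIN_RUN = 3 * 60 // 15  # 12 fixes

def keep_visits(in_ugs):
    """Return [start, end) index pairs of in-UGS runs long enough to count."""
    kept, i = [], 0
    for flag, run in groupby(in_ugs):
        n = len(list(run))
        if flag and n >= MIN_RUN:
            kept.append((i, i + n))
        i += n
    return kept

# 20-point run (5 min) kept; 6-point run (1.5 min) discarded.
track = [False] * 5 + [True] * 20 + [False] * 3 + [True] * 6
print(keep_visits(track))  # [(5, 25)]
```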
Sample
There were more male participants (55.6%) than female participants, and the share of participants between 65 and 75 years of age (54%) was higher than that of participants older than 75 years of age (mean age of 81.1 years). Regarding health-related indicators, the number of participants perceiving their health as good (76.2%) prevailed, and regarding Body Mass Index (BMI), obese participants (76.2%) outnumbered those with a BMI < 30. BMI was calculated using self-reported height and weight, considering BMIs over 30 as obese [56]. All participants were senior center (i.e., old-age community center) members.
Data Analysis
In order to analyze the use of the different UGS, the chosen outcome variables, for each participant, were total time spent in each UGS, time spent in sedentary behavior, and time spent in active behavior. Descriptive statistics display the different time expenditures by individual characteristics of participants and by characteristics of the analyzed UGS, as well as the use of different areas, by vegetation and surface type within UGS, by time and intensity. Due to the non-normal distribution of the sampled outcomes, the median was used in the descriptive statistics. Finally, a mixed-effects multilevel regression analysis for the 290 visits was used to test the association between the log-transformed outcome variables and the explanatory variables, which were grouped into socioeconomic personal variables and variables on the characteristics of UGS. The fixed-effect part of these two models measures differences in each outcome variable between individuals after controlling for the independent explanatory variables at the individual level. The random-effect part explores the variation of the outcomes corresponding to the different profile and UGS-related factors at the level of the UGS visit.
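The intraclass coefficients reported in the Results, the share of outcome variance attributable to between-participant differences, can be illustrated with a simple one-way variance decomposition on synthetic data. This ANOVA-style estimator is a simplification of the variance components a fitted mixed model would actually produce.

```python
import numpy as np

def icc(values, groups):
    """One-way variance decomposition: between-group share of total variance."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    grand = values.mean()
    between = within = 0.0
    for g in np.unique(groups):
        v = values[groups == g]
        between += len(v) * (v.mean() - grand) ** 2
        within += ((v - v.mean()) ** 2).sum()
    return between / (between + within)

rng = np.random.default_rng(3)
participant = np.repeat(np.arange(10), 5)        # 10 seniors, 5 visits each
effect = rng.normal(0.0, 1.0, 10)                # participant-level effect
log_time = effect[participant] + rng.normal(0.0, 1.0, 50)  # toy log-time outcome
rho = icc(log_time, participant)
print(round(rho, 2))
```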
Results
Participants spent a median of 8.5 minutes within the perimeter of the analyzed UGS (Table 2); they spent a median of 6.5 minutes on sedentary behaviors (SBs) and only a median of 3.5 minutes on physical activities (PAs). Significant differences were found in the total time spent in the analyzed UGS between individuals with different BMI thresholds. Obese participants spent significantly less total time in UGS (8.0 median min; p = 0.020) and less time on sedentary behaviors (5.3 median min; p = 0.022) than those with BMI < 30 (10.5 and 7.8 median min, respectively). Regarding time on PAs, participants under 75 years of age registered significantly more active time than older participants (3.9 and 2.0 median min, respectively; p = 0.003). Finally, significant differences were also detected regarding the distance from home when conducting physical activity in UGS: participants residing 301-600 meters away from the analyzed UGS stood out (7.5 median min; p = 0.02). Table 3 summarizes the time spent in each type of vegetated area and type of surface for all types of activities and by intensity. The results show a relationship between the area type and the time spent in the area. Forest was the most frequently found area type (29,046 m², representing 47.2% of the total UGS surface) and was in turn the area where participants spent most of their total time (41.4%), their sedentary time (38%), and their active time (41%). A remarkable difference was also detected in the case of pavement and mix surfaces. Participants spent higher shares of their total time (22.9% for pavement and 13.6% for mix surfaces), their sedentary time (24.5% and 14.2%, respectively), and their active time (22.2% and 14.9%, respectively) than the shares these categories represent (16.4% for pavement and 4.3% for mix surfaces).
This fact also occurs for shrubland areas, although in a less pronounced way (6.8%, 9.4%, 9.8%, and 8.5%, respectively). On the other hand, the total time and both the sedentary and active time registered in forest, grassland, and gravel were lower than the share of land devoted to these areas. Table 4 shows the multilevel regression analysis that explores the effect of both individual and UGS characteristics on total time, sedentary time, and physically active time. This regression analysis shows the share of variation in the different outcome variables (total, sedentary, and active time) attributable to differences between participants (intraclass coefficients of 6.03%, 18.73%, and 25.31%, respectively) and the proportion corresponding to the different explanatory factors explored at the individual level (19.16%, 8.34%, and 52.21%, respectively). Overall, the chosen UGS factors explain a small share of the total variance of total time (1.36%) and sedentary time (1.70%), but a higher share in the case of active time (7.35%). Variation in UGS factors among participants was only found to be significant for sedentary behavior. In relation to the variables affecting total time in UGS, only the proportion of pavement (B = 0.003) showed a significant effect, indicating that the higher the proportion of this surface, the more time participants spent there. Second, regarding sedentary time, age (B = 0.012) was the only significant factor explaining time spent at this intensity within UGS, showing that sedentary time increased with age. Third, age (B = -0.017), BMI (B = -0.042), and gravel (B = -0.019) showed significant negative associations with active time, while distance from home showed a positive association (B = 0.000). As age and BMI increased, less time was dedicated to physical activities.
Finally, the perceived health of participants, the total area of UGS, the proportion of forest, grassland, mix surfaces, and water were characteristics that did not show a significant influence on any of the outcomes.
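The intraclass coefficients reported above quantify the share of outcome variance attributable to differences between participants. As a hedged illustration (not the authors' code; this computes ICC(1) from a one-way ANOVA decomposition on synthetic, balanced visit data, with all values hypothetical):

```python
from statistics import mean

def icc1(groups):
    """One-way random-effects intraclass correlation, ICC(1),
    for balanced groups (e.g., repeated visit times per participant)."""
    k = len(groups[0])                       # observations per group
    n = len(groups)                          # number of groups
    grand = mean(x for g in groups for x in g)
    group_means = [mean(g) for g in groups]
    # between-group and within-group mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in group_means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, group_means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# synthetic example: 3 participants, 4 visit durations (minutes) each
visits = [[10, 11, 9, 10], [6, 7, 6, 5], [14, 13, 15, 14]]
print(round(icc1(visits), 3))
```

A value near 1 means most variation lies between participants; a value near 0 means visits vary as much within a participant as between participants.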
Discussion
This study explored the relationship between the provision of different vegetation types and walkable surfaces and the type of activities performed and time spent in UGS by a group of seniors residing in Barcelona, Spain. Overall, this study has shown that senior participants spent short periods of time (a median of 8.5 minutes per visit) inside the analyzed UGS and predominantly engaged in sedentary behavior during their visits. Moreover, forest was the area type where participants spent the majority of their total, sedentary, and active time, while pavement was the most used surface type in terms of total time spent.
Focusing on the characteristics of the analyzed UGS, the importance of walkable surfaces can be highlighted. On the one hand, pavement was associated with spending more time in the park; on the other hand, a higher proportion of gravel was associated with less active time among senior participants. Consistently, paths that are easy to walk along and free of obstacles encourage outdoor activities among seniors and register the highest physical activity levels within UGS [25,33]. This suggests that, with respect to total time, senior users favor hard surfaces over other types of soil or environment. Moreover, the significant effect of distance from home, also supported by previous authors [35,57,58], shows that shorter distances were associated with higher PA levels.
The influence of different types of vegetation proved to be a non-significant factor for total, sedentary, and active time spent, as has also been found in other studies [20,59]. In the descriptive statistics, areas with higher proportions of forest, shrubland, and grassland corresponded with similar shares of the time registered in these UGS. The effects of the remaining analyzed explanatory factors overshadowed the influence of landscape diversity on the behavior of participants in UGS. The fact that the type of vegetation had no significant influence on the different outcomes does not mean it is not relevant: the type of vegetation could have been a decisive factor in the decision to visit specific UGS, even if it did not encourage senior visitors to stay longer or increase their PA levels. Moreover, the size of the UGS was not significant when analyzed together with other explanatory factors, as has also been seen in other studies [38].
Finally, the small amount of time registered in UGS might be explained by the age of the participants. Previous studies reported that seniors spend less time in UGS than other population groups, as most parks are geared towards serving youths or do not provide proximate, accessible, and safe spaces with well-maintained walking infrastructure [60]. Besides promenading or practicing sports, one of the main reasons for using UGS is spending time with friends [61,62]; the possibly non-collective nature of the walks registered by participants could be explained by the fact that 45.9% of the older population in Barcelona live alone [63]. Second, the predominance of sedentary behavior among senior participants has been confirmed by similar studies, which corroborate seniors' preference for sitting on benches as the main reason for their sedentary behavior [55,64,65]. The link between age and sedentary behavior has also been confirmed [66], with age being a significant factor for both sedentary behavior and physical activity. In the present study, increasing age was associated with more sedentary time, while younger participants registered significantly more time being physically active, as noted in similar previous studies [65,67]. Finally, another relevant observation in this study was that participants with a higher BMI spent less time being active, which is also in line with previous studies relating BMI to park use [68][69][70].
Limitations
This study is not without limitations. First, participants may have walked differently than usual as a result of being involved in this study. Second, participants wore the accelerometer devices on their wrists, which means that certain activities might not have been registered because of a lack of arm movement. The analysis in this study might also be biased, as participants engaging in the study may present a better health condition than the average senior population. In addition, the participating senior centers were selected randomly rather than following a pre-established sampling scheme. Additionally, other UGS infrastructure that is important for seniors, such as benches, drinking fountains, or toilets, was not considered in this study. Future research should consider the influence of these elements, as well as the effect of the quality of the available infrastructure and the diversity of natural species on seniors' behavior. The motivations and preferences of seniors for using UGS should also be included in future studies.
Conclusions
This study aimed to investigate the impact of different types of vegetation and walkable surfaces in urban green spaces (UGS) on the utilization and intensity of activities performed by seniors. The importance of paved surfaces within UGS among seniors, as determined in this research, adds to the debate around the unclear definition of green spaces, which might vary depending on the ecological conditions, the inherited built environment, or the political priorities, among others, of a particular urban context.
Based on the findings of this study, the surface type acquires paramount importance, especially when it comes to encouraging walking among seniors. Paved surfaces have proven to be crucial for spending more time in UGS, while soft surfaces have been shown to discourage physical activity. This study also showed that, when designing and planning UGS, not only are the quality and quantity of facilities important to improve the experience of visitors, but so is being cognizant of the different profiles of potential users. The predominance of sedentary behavior in this specific population group highlights the importance of considering the age of potential users when designing environments that aim to be inclusive and accessible for everyone. Therefore, the provision of infrastructure in UGS should be oriented towards encouraging both sedentary behavior and physical activity in order to improve both the physical and the mental health of visitors.
Moreover, the important role of paved surfaces determined in this study can be useful for city planners when it comes to designing UGS adapted to the senior population (of high relevance in a worldwide demographic context of population aging), both to attract more users and to incentivize them to stay for longer periods of time.
Optimization of Hydraulic Parameters of Iranshahr Alluvial Aquifer
Problem statement: The Iranshahr aquifer consists of an unconfined layer. We simulated the groundwater flow of the Iranshahr aquifer in a conceptual model. This model is a suitable tool for the management of groundwater systems and would also be effective when applied in other countries. Approach: In this study, we constructed the conceptual model of the Iranshahr aquifer, which is important and applicable in environmental studies. We used the automatic parameter estimation method and hydraulic head observations through calibration in order to identify the best parameter values. During the calibration, the optimized values of the parameters were obtained: 24 parameters were estimated by means of regression, and the remaining unestimated parameters were optimized using the trial-and-error method. Results: The results of the model show a good fit between observed and simulated values. The automated calibration procedure estimated the hydraulic parameters of the aquifer. Conclusion: The optimized values and the zonation of the hydraulic parameters of the aquifer showed the best areas for developing and extracting groundwater, taking the optimized hydraulic values into account.
INTRODUCTION
Mathematical models have been considered suitable tools for the management of groundwater systems (Donaldson and Schnabel, 1987; Tiedeman et al., 2004; Alipour and Derakhshani, 2010). Groundwater models have been applied for different purposes (Cooley, 2004; D'Agnese et al., 2002; Anderson and Woessner, 1992; Al-Rababa, 2005; Naeser, 2005; Rahmat et al., 2010; Abbasnejad and Derakhshani, 2010; Opafunso et al., 2009; Jolgaf et al., 2008; Suhail et al., 2010). Construction and development of a groundwater model is based on the modeling protocol (Anderson and Woessner, 1992). This protocol consists of several steps, of which preparing the conceptual model, calibration, and verification are the most important. During calibration, model inputs such as system geometry, properties, initial and boundary conditions, and stresses are changed so that the model output matches the corresponding measured values (Hill and Tiedeman, 2007).
Two methods are commonly used for identification of model parameters through calibration: the trial-and-error method and automatic parameter estimation (Solomatine et al., 1999; Madsen, 2003). In the trial-and-error method, parameter values are assigned to each node of the model, and during calibration these values are adjusted until the simulated values (head, discharge, etc.) are close to the observed ones.
In automatic parameter estimation, the comparison between simulated and observed values (the objective function) is made quantitatively, and the best parameter values, i.e., those producing the smallest value of the objective function, are identified (Hill and Tiedeman, 2007; Abbaspour et al., 2001).
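A minimal sketch of how such an objective function can be minimized by nonlinear regression. This is not MODFLOW, UCODE, or PEST: it is a toy one-parameter Gauss-Newton fit of a hypothetical linear head model, and the synthetic heads, the parameter K, and all function names are invented for illustration:

```python
def simulated_head(K, x, h0=100.0, q=5.0):
    """Toy one-dimensional head model: linear drawdown with distance x,
    slope q / K. (Illustrative stand-in for a full groundwater-model run.)"""
    return h0 - q * x / K

def calibrate_K(xs, observed, K=1.0, iters=25, q=5.0):
    """One-parameter Gauss-Newton: minimize the sum of squared head residuals."""
    for _ in range(iters):
        residuals = [h - simulated_head(K, x) for x, h in zip(xs, observed)]
        J = [q * x / K ** 2 for x in xs]              # sensitivity d(head)/dK
        # Gauss-Newton update: delta = (J'r) / (J'J) for a single parameter
        K += sum(j * r for j, r in zip(J, residuals)) / sum(j * j for j in J)
    sse = sum((h - simulated_head(K, x)) ** 2 for x, h in zip(xs, observed))
    return K, sse

# synthetic "observations" generated with a true K of 25
xs = [100.0, 200.0, 300.0]
obs = [simulated_head(25.0, x) for x in xs]
K_opt, sse = calibrate_K(xs, obs)
print(round(K_opt, 3), sse)
```

Each iteration linearizes the model around the current K and solves a small least-squares problem, which is the same principle the cited codes apply to many parameters at once.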
Automatic parameter estimation offers many benefits and capabilities that help modelers handle complex systems (Hill and Tiedeman, 2007; Abbaspour et al., 2001; Poeter and Hill, 1997). Some of these benefits, according to Hill and Tiedeman (2007) and Abbaspour et al. (2001), are: • Clear determination of the parameter values that produce the best possible fit to the available observations • Graphical analyses and diagnostics that quantify the quality of calibration • Inferential statistics that quantify the reliability of parameter estimates and predictions • Other evaluations of uncertainty
MODFLOW-2000 (Hill et al., 2000), UCODE-2005, and PEST are the most commonly used computer programs that simulate three-dimensional groundwater flow using the finite difference method (Gardner, 2009). All of these codes perform inverse modeling, posed as a parameter-estimation problem, by calculating parameter values that minimize a weighted least-squares objective function using nonlinear regression (Hill and Tiedeman, 2007).
MATERIALS AND METHODS
A modeling protocol, adapted from Anderson and Woessner (1992), was followed more or less to construct the model for this study. The main steps used to identify the hydraulic parameters are: • Development of a conceptual model of the system: hydrostratigraphic units are identified in this step, and field data are assembled, including information on the water balance and the data needed to assign aquifer parameters and hydrological stresses • Model design: the conceptual model is put into a form suitable for modeling; this step includes design of the grid, selecting the time step, setting boundary and initial conditions, and preliminary selection of values for aquifer parameters and hydrologic stresses • Calibration: the purpose of calibration is to put the model in a position in which it can reproduce the observed values; calibration has been carried out using the automated parameter estimation code MODFLOW-2000, and only hydraulic head observations have been used • Sensitivity analysis: to establish the effect of parameter uncertainty on the calibrated model
RESULTS
The study area, the Iranshahr watershed, is located in Sistan and Baluchistan province, in the southeast of Iran and southeast of the Jazmurian depression (Fig. 1). The area lies north of Makran (Farhoudi and Karig, 1977) and east of the Zagros mountain range (Rahnama-Rad et al., 2008; Stocklin et al., 1972). Iranshahr city is the main population center in the area. The watershed covers 8018 km², of which 6882 km² are sharp reliefs against 1136 km² of alluvial plains. The highest point of the area has an elevation of 2720 m in the northeast, and the lowest point has an elevation of 500 m in the west, near the Bampour River. The geological formations consist of sedimentary (Hadavi, 2002) and igneous rocks (Emami, 2000) in the north and south of the plain, respectively (McCall, 1985; Romanko, 2006; Hill, 1998). In the southeast, the lithology consists of the roughly 8 km-thick Bazman granite. Most of the geologic formations around the Iranshahr plain are impermeable.
The climate of the area is arid, and the average precipitation, based on the three weather stations in the watershed, is 103.5 mm (recording period 1989-2007). High temperatures in the watershed cause high evaporation: the average annual evaporation of the plain is 3295 mm (recording period 1982-2007). Most of the rainfall occurs during January and February.
The main surface water features in the watershed are the seasonal rivers (Daman and Saradan), which drain the runoff from the north, east, and southeast. The perennial Bampour River drains the aquifer (Fig. 1). The Iranshahr watershed has eight subbasins that drain surface water to the plain and the aquifer surface. The main subbasins are Daman (number 2) and Saradan (number 4) (Fig. 2). Table 1 shows the properties of these subbasins.
The Iranshahr aquifer lies within the area bounded by latitude 60° 25′-61° 25′ N and longitude 52° 53′-53° 8′ E. The groundwater flow direction is the same as that of surface water, i.e., from the north, east, and southeast towards the west, discharging to the Bampour River (Fig. 3). The most important sources of the aquifer's recharge are direct recharge from precipitation and, especially, subsurface flow from the seasonal Daman and Saradan rivers (Fig. 3). Return flow from wastewater and irrigation also recharges the groundwater. Groundwater is extracted from 260 shallow and deep wells and 12 qanat strings (underground artificial channels) in the aquifer area, mostly for agricultural and drinking use. Well depths vary between 10 and 120 m.
Fig. 3: Surface map of the Iranshahr Aquifer
There are several exploration points in the aquifer area at which transmissivity (T) and storage coefficient (S) were obtained using pumping tests (Fig. 3). Alluvial thickness varies from 50 m in the west to approximately 250 m in the central part of the aquifer near Sarkahoran village (Fig. 3); in the north and east of the aquifer it is about 120-150 m. Groundwater depth is about 85 m in the east and decreases to about 5 m and less towards the west, where groundwater begins to drain into the Bampour River outside of the model area. Water-table fluctuations were measured monthly using 21 observation wells (Fig. 3) and were later used for model calibration.
The groundwater flow of the Iranshahr aquifer was simulated using MODFLOW-2000. The Iranshahr aquifer consists of an unconfined layer. To prepare the conceptual model, geological information, drilling logs, exploration logs, piezometric data, and other information were used. After construction of the conceptual model, head observations over one year (12 months) were used for calibration as 12 stress periods. Hydraulic conductivity and specific yield zones were designed using pumping tests and exploration well drilling logs. One recharge zone was considered at the surface of the aquifer for direct recharge. The input and output boundaries were simulated using the General Head Boundary (GHB) package.
The selected simulation time is 12 successive months (12 stress periods), with 21 head observations at the end of each stress period. In total, 252 head observations were used in the calibration stage. The simulation was run for both transient and steady-state conditions simultaneously.
DISCUSSION
During calibration, many variables of the aquifer were considered as parameters and estimated using automated calibration methods. Hydraulic conductivity and specific yield were parameterized using a simple zonation method. In total, 31 parameters entered the regression by means of the 228 hydraulic head observations: 5 hydraulic conductivity and 5 specific yield parameters for five different zones, 3 recharge parameters for 3 precipitation stress periods, one hydraulic conductance parameter for the river bed at the outlet of the aquifer, 12 parameters of water withdrawal from the exploitation wells, 4 hydraulic conductance parameters of groundwater inflow, and 1 conductance parameter of groundwater outflow. Calibration was accomplished iteratively, using nonlinear regression to estimate the values of the different parameters. The iterative parameter estimation procedure was initialized with an initial estimate of the parameters of interest based on geological and hydrogeological information of the aquifer; these values were improved in each iteration to reproduce the observed values. Useful guidelines for effective model calibration with the nonlinear regression method were followed to obtain the optimal parameter values. After calibration, the optimized values of the parameters were obtained: 24 parameters were estimated by means of the regression, and the remaining unestimated parameters were optimized using the trial-and-error method. The reasonableness of the estimated parameters, composite scaled sensitivities, correlation coefficients, and all statistics mentioned and recommended by Hill (1998) were used for the optimization of the aquifer's parameters. The resulting objective function value (sum of squared differences between simulated and observed values) is 30.35 m². The normality of the residuals is important when making use of parameter uncertainty. The normal probability graph (Q-Q plot) of the residuals is shown in Fig. 3.
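The fit statistics described here can be reproduced on toy data. A hedged sketch (synthetic residuals, not the study's data) that computes the sum-of-squared-residuals objective and the correlation of the normal probability (Q-Q) pairs used to judge the normality of residuals:

```python
from statistics import NormalDist, fmean, stdev

def qq_points(residuals):
    """Pair each sorted residual with the standard-normal quantile of its
    plotting position (i + 0.5)/n, as in a normal probability (Q-Q) plot."""
    n = len(residuals)
    return [(NormalDist().inv_cdf((i + 0.5) / n), r)
            for i, r in enumerate(sorted(residuals))]

# hypothetical head residuals (observed minus simulated), in meters
residuals = [-0.6, 0.2, -0.1, 0.5, -0.3, 0.4, 0.0, -0.2, 0.3, 0.1]
sse = sum(r * r for r in residuals)      # the least-squares objective value
pts = qq_points(residuals)

# Pearson correlation between theoretical and observed quantiles;
# values near 1 suggest approximately normal residuals
qx = [p[0] for p in pts]
qy = [p[1] for p in pts]
mx, my = fmean(qx), fmean(qy)
corr = (sum((a - mx) * (b - my) for a, b in pts)
        / ((len(pts) - 1) * stdev(qx) * stdev(qy)))
print(round(sse, 2), round(corr, 3))
```

When the Q-Q points fall along a straight line, this correlation is close to 1, which is the visual check the paper applies to its residuals.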
The points would be expected to fall along a straight line if the residuals were both independent and normally distributed. A graph of simulated versus observed values is shown in Fig. 4; the points are expected to scatter around a line with slope one. The correlation coefficient of the obtained line is 0.99, which is suitable. During the calibration stage, the zonation of hydraulic conductivity and specific yield was adjusted within reasonable values to obtain the best fit between observed and simulated heads. Figure 5 shows the final zonation and values of hydraulic conductivity and specific yield. As shown in Fig. 6, specific yield values range between 0.1 and 0.23, and hydraulic conductivity ranges between 4.3 and 110 m day⁻¹.
CONCLUSION
Groundwater models, especially those with automated calibration capability, are good tools for identifying hydraulic parameters such as hydraulic conductivity and specific yield. To prepare the conceptual model of the Iranshahr aquifer, geological information, drilling logs, exploration logs, piezometric data, and other information were used, and the groundwater flow was simulated using MODFLOW-2000. The results of the model show a good fit between observed and simulated values. The hydraulic parameters of the aquifer were estimated using the automated calibration procedure. The optimized values and zonation of the hydraulic parameters indicate that the best areas for development and extraction of groundwater from the aquifer are zones 3 and 4, respectively, taking the optimized hydraulic values into account (Fig. 6).
An Efficient Framework for Remote Sensing Parallel Processing: Integrating the Artificial Bee Colony Algorithm and Multiagent Technology
Remote sensing (RS) image processing can be converted to an optimization problem, which can then be solved by swarm intelligence algorithms, such as the artificial bee colony (ABC) algorithm, to improve the accuracy of the results. However, such optimization algorithms often result in a heavy computational burden. To realize the intrinsic parallel computing ability of ABC to address the computational challenges of RS optimization, an improved multiagent (MA)-based ABC framework with a reduced communication cost among agents is proposed by utilizing MA technology. Two types of agents, massive bee agents and one administration agent, located in multiple computing nodes are designed. Based on the communication and cooperation among agents, RS optimization computing is realized in a distributed and concurrent manner. Using hyperspectral RS clustering and endmember extraction as case studies, experimental results indicate that the proposed MA-based ABC approach can effectively improve the computing efficiency while maintaining optimization accuracy.
Introduction
Image processing is of great importance for remote sensing (RS) applications [1], such as classification [2], clustering [3,4,5], and endmember extraction [6,7]. Recently, many RS image processing problems have been converted to optimization problems to improve the results' accuracy [8,9]. For example, an RS clustering problem can be converted to an optimization problem that minimizes the distance between the pixel and the cluster center [10], and an RS endmember extraction problem to one that minimizes the remixed error [11,12]. Because these RS optimization problems are nonlinear and are difficult to solve using traditional linear approaches, the artificial bee colony (ABC) algorithm, an outstanding swarm intelligence (SI) algorithm, has been widely used for its ability to address nonlinear problems [5,13,14,15,16]. Experiments have demonstrated the improved results achieved by utilizing this intelligent algorithm.
However, using ABC to solve RS optimization problems is a computationally expensive task [17,18] because ABC is an iterative stochastic search algorithm that is usually executed sequentially on a central processing unit (CPU). In each iteration, each bee in the population must execute time-consuming operations, such as the fitness evaluation of the RS optimization, to obtain new solutions [18]. Therefore, as the complexity of these operations and the RS image volume increase, the computational burden increases substantially, resulting in poor performance.
To contend with the aforementioned computational challenges, efforts have been made to establish parallel computing approaches. The technique of employing graphics processing units (GPUs) is a widely used approach [6,19,20,21]. A GPU has a massively parallel architecture consisting of thousands of small arithmetic logic units (ALUs), which are efficient for handling computing-intensive tasks simultaneously [22]. With GPU-based RS optimization, the data processing behaviors of each individual in SI algorithms that contain large volumes of calculations, especially the most computationally intensive fitness evaluation, are offloaded onto the GPU's threads for parallel computation [6,18]. During such parallel computation, an RS image is usually divided into many subimages, and multiple threads execute the same computation on different RS subimages in parallel. However, since each individual in a bee swarm operates on an entire RS image, the number of subimages multiplied by the number of individuals is often greater than the number of threads that the GPU hardware can provide; as a result, it is hard to implement the calculation behavior of multiple individuals in parallel.
To efficiently achieve parallel execution of individuals' behavior, a multiagent (MA)-based ABC approach for RS optimization was proposed that utilizes distributed parallel computing based on the CPU [17]. This approach treats the food sources and bees in ABC as different agents, which are distributed over, and behave concurrently in, multiple processor units (computers or hosts). By communicating through the network, the different agents interact with each other to obtain an optimal solution, thus significantly increasing the computational efficiency. However, the agents' behaviors designed in [17] are relatively redundant, resulting in increased communication costs for the dispersed agents' behaviors, as further analyzed in Section 3.
To further increase the computational efficiency of RS optimization using the ABC algorithm, this paper proposes an improved MA-based ABC approach that appropriately integrates agents' behaviors to reduce communication among agents. The effectiveness and efficiency of the new method are demonstrated on RS image clustering [5] and endmember extraction [16]. The remainder of this paper is organized as follows. Section 2 presents relevant theory pertaining to remote sensing optimization, the ABC algorithm, and multiagent system technology. The basic concept of the improved MA-based ABC approach and the framework design are then described in detail in Section 3. Section 4 introduces two RS optimization tasks as case studies, clustering and endmember extraction. The corresponding experiments and results are presented in Sections 5 and 6. Finally, a discussion and a conclusion are provided in Sections 7 and 8.
Remote Sensing Optimization
Many RS-related problems, such as clustering, endmember extraction, and target detection, essentially involve maximizing or minimizing certain indexes by computation. For example, for clustering, researchers usually try to minimize the distance among points within a cluster or maximize the distance among multiple classes. In addition, endmember extraction requires maximizing the spectral angle, maximizing the internal maximum volume, or minimizing the external minimum volume of points in spectral space. By treating these indexes as objective functions, these problems can be abstracted as optimization problems and expressed as follows:

min/max f(x)  s.t.  x ∈ Ω

where f(x) is the objective function of an optimization problem, x is a solution, and Ω represents the constraints that the solution must satisfy.
The ABC Algorithm
The ABC algorithm [23] is a method for finding an optimal solution to an optimization problem by simulating the foraging behavior of a bee colony in nature. In the ABC algorithm, each scout bee initially generates a random feasible solution (food source). Then, employed bees search around their corresponding food sources (feasible solutions) to generate new solutions with the participation of randomly selected neighborhood solutions, and the food sources are updated with the solutions that have better fitness values. Next, each onlooker bee pseudorandomly selects a food source (a feasible solution), searches around it to generate a new solution, and updates the food source with the better solution. If a food source is not updated for a long time, it is abandoned, and a new food source is obtained by a scout bee's random selection. The bees' behaviors are iterated until an optimal solution is found. The entire procedure is depicted in Figure 1.
Figure 1. The procedure of the artificial bee colony (ABC) algorithm.
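The procedure in Figure 1 can be sketched compactly. The following toy (minimizing a simple sphere function rather than an RS fitness) is an illustrative ABC implementation with employed, onlooker, and scout phases; all names and parameter values are assumptions for demonstration, not the paper's code:

```python
import random

def abc_minimize(f, dim=2, bounds=(-5.0, 5.0), n_food=10, limit=20, iters=200, seed=1):
    """Minimal artificial bee colony sketch following the phases in Figure 1."""
    rnd = random.Random(seed)
    lo, hi = bounds
    rand_solution = lambda: [rnd.uniform(lo, hi) for _ in range(dim)]
    foods = [rand_solution() for _ in range(n_food)]   # food sources = solutions
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best_x, best_f = None, float("inf")

    def try_update(i):
        # neighborhood search around food i with a random partner k != i
        k = rnd.randrange(n_food - 1)
        k = k + 1 if k >= i else k
        j = rnd.randrange(dim)
        x = foods[i][:]
        x[j] = min(hi, max(lo, x[j] + rnd.uniform(-1, 1) * (x[j] - foods[k][j])))
        fx = f(x)
        if fx < fits[i]:
            foods[i], fits[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                        # employed bee phase
            try_update(i)
        weights = [1.0 / (1.0 + ft) for ft in fits]    # assumes f >= 0
        for _ in range(n_food):                        # onlooker bee phase
            try_update(rnd.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):                        # scout bee phase
            if trials[i] > limit:
                foods[i] = rand_solution()
                fits[i] = f(foods[i])
                trials[i] = 0
        for i in range(n_food):                        # memorize the best-so-far
            if fits[i] < best_f:
                best_x, best_f = foods[i][:], fits[i]
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)               # toy objective
x_best, f_best = abc_minimize(sphere)
print(round(f_best, 6))
```

In the RS setting of this paper, `f` would be a costly fitness evaluation over an image, which is exactly the step the proposed framework distributes across agents.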
Multiagent System
An agent is a software component that has autonomy in providing an interoperable interface for a system [24]. The use of a multiagent system (MAS) is a technique for modeling complex problems. An MAS is constructed from multiple autonomous agents that interact with each other directly, by communication and negotiation, or indirectly, by influencing the environment, to fulfill local and global tasks [24,25,26]. Combining an MAS with swarm intelligence algorithms, such as ABC, in a distributed and parallel manner can be effective in shortening the computational time of a complex optimization problem [27]. Usually, the individuals in swarm intelligence algorithms can be treated as a series of heterogeneous agents in an MAS involving different computing processors with diverse goals, constraints, and behaviors. By collaborating among these agents, the optimal solution can be achieved in a distributed manner. For example, in [17], each artificial bee and food source are implemented as independent software agents that run separately and simultaneously in an MAS, with an administration agent controlling the workflow of the RS clustering algorithm. One major advantage of such an MAS is a reduction in computational time, because the computational burdens are offloaded onto different processors. Furthermore, the failure of one agent will not disturb the entire algorithm's calculation, which is helpful for ensuring the robustness of the optimization framework.
Framework Design
The design of the improved MA-based ABC framework mainly consists of three parts: agents' role design, communication design, and behavior design. This section first elaborates the design of each part and then compares this improved framework with the former framework proposed in [17].
Agents' Role Design
Two types of agent roles are designed in this framework: massive bee agents and one administration agent. These agents are located in different computing nodes within the same network, through which they communicate with each other via messages. The agents' role design is depicted in Figure 2.
In [17], bee agents are only responsible for a neighborhood search to generate new solutions in the employed and onlooker bee phases, which introduces an extra communication cost, as indicated in Section 2. In this paper, we redesign the bee agent to decrease the frequency of communication. In addition to a neighborhood search, each bee agent has more tasks to execute to maintain its corresponding solution, which include (1) generating a random solution in the initial phase and the scout bee phase, (2) evaluating the solutions' fitness, (3) updating the maintained solution, and (4) recording the number N_limit, which indicates how long a solution has not been updated, to control the initiation and termination of the scout bee phase. These four behaviors are assigned to food source agents in [17].
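As a rough illustration, the redesigned bee agent's four maintenance behaviors can be sketched as follows. This is a minimal Python sketch, not the paper's Java/JADE implementation; the fitness form 1/(1 + U) for a minimization objective U, the class name, and all parameters are illustrative assumptions.

```python
import random

class BeeAgent:
    """Illustrative sketch of the redesigned bee agent, which maintains
    its own solution in addition to performing neighborhood searches."""

    def __init__(self, dim, bounds, objective, n_limit=20):
        self.dim, self.bounds, self.objective = dim, bounds, objective
        self.n_limit = n_limit          # threshold that triggers the scout phase
        self.trials = 0                 # the N_limit counter from the text
        self.solution = self.random_solution()       # behavior (1)
        self.fitness = self.evaluate(self.solution)  # behavior (2)

    def random_solution(self):
        lo, hi = self.bounds
        return [random.uniform(lo, hi) for _ in range(self.dim)]

    def evaluate(self, x):
        # Assumed fitness form: 1 / (1 + U) for a minimization objective U >= 0.
        return 1.0 / (1.0 + self.objective(x))

    def maybe_update(self, candidate):
        """Behaviors (3) and (4): greedy update and trial counting."""
        fit = self.evaluate(candidate)
        if fit > self.fitness:
            self.solution, self.fitness, self.trials = candidate, fit, 0
        else:
            self.trials += 1
        return self.fitness

    def scout_if_exhausted(self):
        # Scout bee phase: abandon a stagnant solution and restart randomly.
        if self.trials >= self.n_limit:
            self.solution = self.random_solution()
            self.fitness = self.evaluate(self.solution)
            self.trials = 0
```

In the paper's design, one such agent runs per computing node, so the four behaviors execute locally without extra network round trips.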
Similarly to [17], the administration agent is responsible for the overall control of the algorithm, which includes the following functions: (1) exerting control over the agents' lifecycle, namely, generating new bee agents in different computing nodes during the initial stage and killing them at the end of the algorithm; (2) executing data initialization; (3) determining the solutions participating in the neighborhood search; (4) performing iteration and convergence control; and (5) recording and outputting the optimal solution.
Agents' Communication Design
In this paper, a message-passing mechanism is adopted for the smooth implementation of the algorithm. All agents communicate with each other through the network by messages. According to the standard of agent communication language (ACL), each message contains at least five fields: the sender, the receivers, the contents, the language, and a communicative act [24].
For example, in ABC's employed bee phase, the administrator agent will pass a neighborhood solution to each bee agent before executing the neighborhood search. Therefore, the sender of the message is the administrator agent, and the receiver is a bee agent. The message content contains the neighborhood solution, which is coded in the language of Java by serialization in our design. Under these circumstances, the sender (the administrator agent) wants the receiver (a bee agent) to perform an action (begin its neighborhood search); thus, the communicative act should be set as REQUEST. However, in certain other situations, the sender only wants the receiver to be aware of a fact, such as a bee agent notifying the administrator agent upon completing a scout bee behavior; thus, the communicative act should be set as INFORM.
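The five ACL fields and the two communicative acts described above can be sketched with a minimal message structure. This is a hypothetical Python stand-in (the paper uses JADE's ACLMessage class in Java); all field names and example values are illustrative.

```python
from dataclasses import dataclass

# Hypothetical minimal stand-in for an ACL message; the field names
# follow the five fields listed in the text.
@dataclass
class AclMessage:
    sender: str
    receivers: list
    content: object          # e.g., a serialized neighborhood solution
    language: str = "java-serialization"
    act: str = "INFORM"      # communicative act: REQUEST or INFORM

# Administrator asks a bee agent to start its neighborhood search:
request = AclMessage(sender="admin", receivers=["bee-3"],
                     content=[0.1, 0.7, 0.2], act="REQUEST")

# A bee agent notifies the administrator that its scout behavior finished:
notify = AclMessage(sender="bee-3", receivers=["admin"],
                    content="scout-done", act="INFORM")
```

The REQUEST/INFORM distinction matters for flow control: a REQUEST obliges the receiver to act, while an INFORM only updates the receiver's knowledge.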
Agents' Behavior Design
The agents' behavior in the MA framework is tightly coupled with the procedure of the ABC algorithm. There are five phases in ABC: the initial phase, the employed bee phase, the onlooker bee phase, the scout bee phase, and the convergence judgment phase (Figure 3).
(1) Initialization phase. First, we launch an administration agent and set the initial parameters, including MA-related data, such as the number of bee agents and the network address list of computing nodes that can participate in the parallel computation, and RS-optimization-related initial data, such as the number of clustering centers in the problem of hyperspectral image clustering.
Then, the administration agent will generate multiple bee agents in different computing nodes according to the parameters of the network address list and pass the RS-optimization-related initial parameters to each bee agent.
After receiving the initial parameters, each bee agent will generate a random solution.
(2) Employed bee phase. First, the administration agent will pass to each bee agent a random neighborhood solution through the network. Then, the k-th bee agent, which maintains a solution X_k of dimension m × L with fitness fit_k, will receive the solution X_s as a neighborhood solution, where k ≠ s. The k-th bee agent then executes a neighborhood search to generate a new candidate solution X_k' according to Equation (2), X'_k,r = X_k,r + φ(X_k,r − X_s,r), where r is a random dimension index selected from the set {1, 2, ..., m × L} and φ is a random number uniformly distributed in [−1, 1]. For a minimal optimization problem, when a new solution X_k' is generated, its fitness fit_k' will be calculated via Equation (3) after its objective function value U(k') is obtained. Then, a greedy selection will be used to improve the fitness of the k-th bee agent's solution. If fit_k' is better than the original solution's fitness fit_k, the solution will be replaced by the new one; otherwise, the counter is incremented: N_limit = N_limit + 1. Later, the updated solution will be passed to the administrator agent through the network.
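The neighborhood move and greedy selection of the employed bee phase can be sketched in a few lines. This follows the canonical ABC formulation; the explicit fitness form 1/(1 + U) and the function names are assumptions made for illustration.

```python
import random

def neighborhood_search(x_k, x_s):
    """Canonical ABC neighborhood move (the text's Equation (2)): perturb one
    randomly chosen dimension r of x_k toward/away from the neighbor x_s."""
    r = random.randrange(len(x_k))       # random dimension index
    phi = random.uniform(-1.0, 1.0)      # random step factor in [-1, 1]
    candidate = list(x_k)
    candidate[r] = x_k[r] + phi * (x_k[r] - x_s[r])
    return candidate

def fitness(u):
    """Assumed Equation (3)-style fitness for a minimization objective u >= 0."""
    return 1.0 / (1.0 + u)

def greedy_select(x_k, x_new, objective):
    """Keep whichever solution has better fitness; also report whether the
    trial counter N_limit should be incremented (no improvement)."""
    if fitness(objective(x_new)) > fitness(objective(x_k)):
        return x_new, False   # improved: counter resets
    return x_k, True          # no improvement: increment N_limit
```

Because only one dimension changes per move, the search stays local to the maintained solution, which is what allows the improved framework to skip transferring a second solution over the network.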
It should be noted that the objective function calculation U(k) is a problem-specific process. How the solution's objective function value is calculated is irrelevant to the MA framework, since only the function value is needed to evaluate the fitness. However, the objective function calculation can be loosely coupled with the MA-based approach by providing each agent with the calculation interface.
Figure 3. The overall workflow of agents' behaviors for remote sensing (RS) clustering.

(3) Onlooker bee phase. Once the administrator agent receives all bee agents' fitness values, a random selection probability for each bee agent will be calculated according to Equation (4).
where p_k is the selection probability of the k-th bee agent, fit_k is the fitness value, and BN is the number of bee agents. The probability gives a solution with better fitness a greater chance of being selected by an onlooker bee than solutions with worse fitness. Then, the administrator agent will pass to each bee agent two solutions, X_k and X_r, where X_k is obtained by roulette wheel selection according to the selection probabilities and X_r is selected randomly. Later, each bee agent will execute a neighborhood search according to Equation (2) and calculate the newly generated solution's fitness via Equation (3). If the new solution's fitness is worse than that of solution X_k, then N_limit = N_limit + 1; otherwise, the newly generated solution will be transferred to the k-th bee agent to replace the original solution. Finally, all bee agents' solutions will be transferred to the administrator agent to help it update each bee's best-so-far solution.
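The fitness-proportional selection of the onlooker bee phase can be sketched as follows. Equation (4) is assumed here to be the usual fitness-proportional form p_k = fit_k / Σ fit, which matches the description that better fitness gives a greater selection chance.

```python
import random

def selection_probabilities(fits):
    """Assumed Equation (4)-style probabilities: fitness-proportional,
    so better fitness yields a higher chance of selection."""
    total = sum(fits)
    return [f / total for f in fits]

def roulette_select(fits):
    """Roulette wheel selection: pick a bee index with probability
    proportional to its fitness."""
    probs = selection_probabilities(fits)
    u, acc = random.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if u <= acc:
            return k
    return len(fits) - 1   # guard against floating-point rounding
```

In the framework, the administrator agent runs this selection and then dispatches X_k and X_r to the bee agents, so the wheel itself never crosses the network.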
To further improve the parallel computation of the entire framework, the employed and onlooker bee phases could be carried out simultaneously.
(4) Scout bee phase. After the onlooker bee phase, each bee agent will judge whether its counter N_limit exceeds a predefined threshold. If it does, the original solution is abandoned, and a new solution will be generated randomly.
(5) Convergence judgment phase. If the iteration meets the convergence condition, the administrator agent will kill all bee agents and export the best-so-far solution in its memory as the optimal solution. Otherwise, the employed bee, onlooker bee, and scout bee operations will be executed repeatedly.
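Tying the five phases together, a serial (single-process) sketch of the flow the agents execute in parallel is given below. This is an illustrative Python version of the standard ABC loop, not the paper's distributed Java implementation; the fitness form, parameter defaults, and iteration-count convergence criterion are assumptions.

```python
import random

def abc_minimize(objective, dim, bounds, bn=10, n_limit=20, max_iter=100):
    """Serial sketch of the five-phase ABC flow: initialization, employed,
    onlooker, scout, and convergence judgment (here, a fixed iteration cap)."""
    lo, hi = bounds
    rand = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    fit = lambda x: 1.0 / (1.0 + objective(x))   # assumed minimization fitness
    sols = [rand() for _ in range(bn)]           # (1) initialization phase
    trials = [0] * bn
    best = min(sols, key=objective)

    def search(k, s):
        # Equation (2)-style move on one random dimension, then greedy selection.
        r = random.randrange(dim)
        phi = random.uniform(-1, 1)
        cand = list(sols[k])
        cand[r] += phi * (sols[k][r] - sols[s][r])
        if fit(cand) > fit(sols[k]):
            sols[k], trials[k] = cand, 0
        else:
            trials[k] += 1

    for _ in range(max_iter):                    # (5) convergence control
        for k in range(bn):                      # (2) employed bee phase
            search(k, random.choice([j for j in range(bn) if j != k]))
        fits = [fit(x) for x in sols]
        for _ in range(bn):                      # (3) onlooker bee phase
            k = random.choices(range(bn), weights=fits)[0]
            search(k, random.choice([j for j in range(bn) if j != k]))
        for k in range(bn):                      # (4) scout bee phase
            if trials[k] >= n_limit:
                sols[k], trials[k] = rand(), 0
        best = min(sols + [best], key=objective)
    return best
```

In the MA framework, the two inner bee loops are exactly what gets distributed across nodes, with the administrator agent handling the selection, dispatch, and convergence judgment.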
Computational Complexity
If the numbers of employed and onlooker bees are both BN, the maximum iteration number is T, the parameter related to a scout bee's behavior of abandoning a solution is N_limit, a solution's dimension (for example, the number of endmembers and clustering centers in the problems of endmember extraction and clustering) is M, and the number of parallel computing nodes is C (C ≤ BN), the time complexity of the framework can be represented as in Table 1, where g(*) is the complexity of the RS optimization objective function value calculation.
Table 1. Time complexity of the framework.

Phase | Description
Initialization phase | Generate BN random solutions in parallel.
Employed bee phase | Generate BN new solutions by neighborhood search, and calculate the objective function values in T iterations in parallel.
Onlooker bee phase | Calculate BN solutions' fitness.
Scout bee phase | In the worst case, BN bees abandon their original solutions every N_limit iterations and generate new solutions of M dimensions in parallel.
Comparison
In the MA-based ABC proposed in [17], a food source agent is only responsible for a solution's maintenance, and a bee agent is only responsible for the neighborhood search (shown in Figure 4a). Because a bee agent does not store a solution, whenever it executes a neighborhood search in the employed and onlooker bee phases, it has to solicit two solutions from two different food source agents through the network (shown as step 1 in Figure 4a). Subsequently, the newly generated solution must also be passed to its corresponding food source agent to update the solution (shown as step 3 in Figure 4a). The frequent communications in the MA-based ABC reduce the computational performance.
In the improved MA-based ABC framework proposed in this paper, each agent exhibits both behaviors (solution maintenance and neighborhood search), and its neighborhood search can be executed directly on its maintained solution, which means that only one neighbor solution has to be passed to an agent in the employed bee phase and in parts of the onlooker bee phase (if one of the two randomly selected solutions for a bee happens to be maintained by that bee), as shown in Figure 4b. Thus, the frequency of transferring solutions among agents is effectively reduced, which is helpful for improving the efficiency of parallel computation.
To quantitatively analyze the improvement, the number of solutions transferred among agents in one iteration is listed in Table 2, which indicates that the improved framework proposed in this paper spends less time on communication than the former framework [17] does, thus achieving higher efficiency.
Table 2. The number of solutions transferred among agents in one iteration.

Algorithm Phase | The Behaviors of Agents | The Former Framework in [17] | The Improved Framework in This Paper
Employed bee | The administrator agent passing solutions to each employed bee agent | 2BN | BN (1)
Employed bee | Each employed bee agent passing a solution to a food source agent | BN | 0
Employed bee | Passing solutions to the administrator agent | BN | BN
Onlooker bee | The administrator agent passing solutions to onlooker bee agents | 2BN | BN + BN × p1 (2)
Onlooker bee | Each onlooker bee agent passing a solution to a food source agent | BN | 0
Onlooker bee | Onlooker bee agents passing solutions to other bee agents | 0 | BN × p2 (3)
Onlooker bee | Passing solutions to the administrator agent | BN | BN

Note: (1) In the employed bee phase, because each bee agent maintains a solution, only one neighborhood solution has to be transferred from the administrator agent to each bee agent; thus, the number of transferred solutions among agents is BN, where BN is the number of bees. (2) In the onlooker bee phase, two solutions (X_k and X_r) must be transferred to each bee agent. If one of the two solutions happens to be the solution maintained by the bee agent, there is no need to transfer that solution. A fraction p1 (0 ≤ p1 ≤ 1) can be introduced to describe the probability of this possibility: p1 = N/BN, where N is the number of bees whose maintained solution happens to be one of the two solutions in their onlooker neighborhood search. Thus, the total number of transferred solutions is BN + BN × p1. (3) For a bee agent, if its newly generated solution X is better than X_k, and X_k is another bee agent's maintained solution, X must be transferred. The fraction p2 (0 ≤ p2 ≤ 1) is defined to describe the probability of this possibility. Thus, the number of transferred solutions under such circumstances is BN × p2.
Case Studies
To validate the effectiveness and efficiency of the proposed MA-based ABC approach in solving the computational challenges of RS optimization, an image clustering problem considering Markov random fields (MRFs) [5] and an endmember extraction problem [16] are taken as case studies.
RS Optimization for Clustering
The model aims to minimize the total MRF classification discriminant function value, which can be summarized as the following optimization problem, where U_ij is the MRF classification discriminant function, a combination of spectral and spatial similarity (shown in (5)). The Euclidean distance d_ij between the pixel r_i and the cluster center c_j, shown in (8), reflects the degree of spectral similarity between r_i and class X_j. b_ij represents the spatial similarity between pixel r_i and class X_j, which can be obtained by Function (9). ∂i represents a pixel in the neighborhood of r_i, ω_∂i represents the class of that pixel, and δ(·, ·) represents the Kronecker function. The parameter β is used to control the influence of the spatial information during classification, i indexes the pixels in the RS image, and j indexes the clusters. The objective function calculation of this model is detailed in [5].
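The two ingredients of the discriminant, spectral distance and neighborhood agreement, can be sketched as follows. The exact combined form of U_ij is detailed in [5]; the combination used here (spectral distance reduced by β times the neighborhood agreement) and all function names are assumptions for illustration only.

```python
import math

def euclidean(r_i, c_j):
    """d_ij as in Equation (8): spectral distance between pixel r_i
    and cluster center c_j."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r_i, c_j)))

def spatial_similarity(neighbor_labels, j):
    """b_ij-style term: a sum of Kronecker deltas over the neighborhood,
    i.e., the count of neighbors of pixel i already labeled class j
    (assumed form of Equation (9))."""
    return sum(1 for w in neighbor_labels if w == j)

def discriminant(r_i, c_j, neighbor_labels, j, beta):
    """Assumed combination for U_ij: the spectral distance is penalized
    less when the neighborhood agrees with class j."""
    return euclidean(r_i, c_j) - beta * spatial_similarity(neighbor_labels, j)
```

The key behavior to note is that, for the same spectral distance, a class supported by the pixel's neighbors yields a smaller U_ij, which is what β controls.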
RS Optimization for Endmember Extraction
One RS optimization for endmember extraction can be modeled to minimize the volume of endmembers and the root-mean-square error (RMSE) value of the extracted results.The model can be expressed as follows.
Comparison Experiments
All experiments were carried out on multiple computing nodes with the same hardware configuration, shown in Table 3. The MA framework was developed in the language of Java by using the Java Agent Development Framework (JADE) platform, and all objective function calculations were coded in MATLAB and imported into the MA framework as a JAR package. For each dataset, two comparison experiments were designed to validate the MA-based ABC approach in two respects: optimal solution accuracy and calculation efficiency. The accuracy of the optimal solutions is evaluated by comparing with the original algorithms implemented on a single computer. The calculation efficiency is evaluated by comparing with the MA framework proposed in [17].
Notably, weight values are involved in both case studies, for example, β in formulation (7) and µ in formulation (1), which affect the solution quality and the convergence speed. However, the goal of the experiments designed here is to prove that the solution accuracy will not decrease under different frameworks with the same parameter settings and that the calculation efficiency will be improved. Thus, the weight values in the two case studies are both set to 1000 according to [5]. It should be noted that this weight value setting may not be the best choice for all RS datasets and case studies.
Evaluation Criteria
Two types of criteria are used to evaluate the accuracy and efficiency.
(1) Accuracy criteria. Purity is a simple and transparent measure for evaluating how well the clustering matches the ground truth data. It is computed from the cluster and class partitions, where c_j is the set of image pixels in cluster j and g_k is the set of pixels in the k-th class of the ground truth. The closer the value of purity is to 1, the better the cluster result is. High purity is easy to achieve when the number of clusters is large, but purity cannot be used to trade off the quality of clustering against the number of clusters.
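The purity computation described above can be sketched directly: for each cluster, count the most frequent ground-truth class, then divide the total of those majority counts by the number of pixels. This is a generic Python sketch, not the paper's MATLAB code.

```python
from collections import Counter

def purity(cluster_labels, true_labels):
    """Purity: the fraction of pixels belonging to the majority ground-truth
    class of their assigned cluster."""
    clusters = {}
    for c, g in zip(cluster_labels, true_labels):
        clusters.setdefault(c, []).append(g)
    # Sum the size of the dominant ground-truth class within each cluster.
    majority_total = sum(Counter(gs).most_common(1)[0][1]
                         for gs in clusters.values())
    return majority_total / len(cluster_labels)
```

Note that assigning every pixel its own cluster trivially yields purity 1, which is why the text warns that purity alone cannot penalize over-clustering.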
To make this tradeoff, NMI can be introduced.
where P(c_j), P(g_k), and P(c_j ∩ g_k) are the probabilities of a pixel being in cluster c_j, class g_k, and the intersection of c_j and g_k, respectively. The value of NMI is normalized within the range [0, 1]. A large value of NMI corresponds to a high-quality result. The ARI is used to evaluate the degree of consistency between the classification results and the test samples. The clustering result and the ground truth data are two different partitions of the pixels. For an image with n pixels, a contingency table, such as that shown in Table 4, can be obtained by calculating the parameters a, b, c, and d according to [28]. The ARI is then calculated from these parameters. The value of the ARI lies between 0 and 1. The higher the ARI value is, the better the classification result is.
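An NMI computation in the spirit of the probabilities above can be sketched as follows. The normalization used here, dividing the mutual information by sqrt(H(C) · H(G)), is one common choice and is an assumption; the paper's exact normalization may differ.

```python
import math
from collections import Counter

def nmi(cluster_labels, true_labels):
    """NMI sketch: mutual information between the cluster partition C and
    the class partition G, normalized by sqrt(H(C) * H(G))."""
    n = len(cluster_labels)
    pc = Counter(cluster_labels)               # cluster marginals
    pg = Counter(true_labels)                  # class marginals
    joint = Counter(zip(cluster_labels, true_labels))
    mi = sum((nij / n) * math.log((nij / n) / ((pc[c] / n) * (pg[g] / n)))
             for (c, g), nij in joint.items())
    hc = -sum((m / n) * math.log(m / n) for m in pc.values())
    hg = -sum((m / n) * math.log(m / n) for m in pg.values())
    return mi / math.sqrt(hc * hg) if hc > 0 and hg > 0 else 1.0
```

Unlike purity, this score does penalize over-clustering, since splitting clusters inflates H(C) without increasing the mutual information.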
The criterion SA, an evaluation index for the classification accuracy, can be obtained from the power of spectral discrimination (PWSD), which is one way to measure the degree of difference between two different cluster centers for the same pixel. For pixel x_i and cluster centers c_j1 and c_j2, the PWSD Ω(c_j1, c_j2; x_i) is built from the spectral angle distance between x_i and each center c_j. For an RS image, SA can be formulated as in [29]. As the distinction between cluster centers and pixels increases, the corresponding values of PWSD and SA also increase. Therefore, large SA and PWSD values correspond to a high-quality clustering result.
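The spectral angle distance and a PWSD-style ratio can be sketched as below. The PWSD form used here, the larger of the two ratios of spectral angle distances (Chang's relative spectral discriminatory power), is an assumed reconstruction; the paper's exact Ω may differ.

```python
import math

def sad(x, c):
    """Spectral angle distance between spectra x and c (angle between the
    two vectors, in radians)."""
    dot = sum(a * b for a, b in zip(x, c))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in c))
    return math.acos(max(-1.0, min(1.0, dot / norm)))  # clamp rounding error

def pwsd(c1, c2, x):
    """Assumed PWSD form: the larger ratio of the two spectral angle
    distances from pixel x to centers c1 and c2 (requires both nonzero)."""
    d1, d2 = sad(x, c1), sad(x, c2)
    return max(d1 / d2, d2 / d1)
```

A PWSD near 1 means the pixel sits spectrally between the two centers, while a large PWSD means the centers are well discriminated for that pixel.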
• RS optimization for endmember extraction
For the RS optimization problem of endmember extraction, the RMSE is a commonly used accuracy criterion. The RMSE quantifies the error between the original hyperspectral image and the remixed image, which represents the generalized degree of image information provided by the extracted endmembers [30]. The RMSE is calculated by Equation (16).
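A per-pixel, per-band RMSE between the original and remixed images can be sketched as below; this generic form is assumed to match the intent of Equation (16).

```python
import math

def rmse(original, remixed):
    """RMSE between the original image and the image remixed from the
    extracted endmembers and abundances, averaged over all pixels and bands.
    Both inputs are lists of per-pixel spectra of equal shape."""
    n = sum(len(px) for px in original)   # total number of samples
    sq = sum((a - b) ** 2
             for po, pr in zip(original, remixed)
             for a, b in zip(po, pr))
    return math.sqrt(sq / n)
```

A smaller RMSE indicates that the extracted endmembers reconstruct the scene more faithfully.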
(2) Efficiency criteria. We use the classical notions of speedup and computational efficiency. The speedup S_C = T_1/T_C of a distributed application measures how much faster the algorithm runs when it is implemented on multiple computing nodes than on a single computing node.
The computational efficiency e_C = S_C/C measures the average speedup value in a computation clustering environment. Here, C is the number of computing nodes, T_1 is the execution time on a single computing node, and T_C is the algorithm's execution time on C computing nodes.
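These two definitions amount to a couple of ratios, which the following sketch makes concrete (the example timings are illustrative, not the paper's measurements):

```python
def speedup(t1, tc):
    """S_C = T_1 / T_C: how much faster the run is on C nodes than on one."""
    return t1 / tc

def efficiency(t1, tc, c):
    """e_C = S_C / C: average per-node share of the speedup."""
    return speedup(t1, tc) / c

# Illustrative example: 100 s on one node, 12.5 s on 10 nodes
# gives a speedup of 8.0 and an efficiency of 0.8, i.e., sub-linear
# scaling due to communication cost.
```

This is exactly the pattern observed in the experiments below: speedup grows with the node count while per-node efficiency gradually declines.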
Accuracy
According to the logic of the ABC algorithm, there is no difference in accuracy between the improved framework reported in this paper and the algorithm without MA technology. In other words, the different parallel design does not affect the algorithm's accuracy, which is validated in this section.
Accuracy of Clustering
The MA-based ABC framework coupled with the ABC-MRF-cluster algorithm in [5] and the ABC-MRF-cluster without using MA technology were run 10 times for comparison.
The median, mean, and standard deviation of the objective function and the t-test value are listed in Tables 5 and 6. The values of the ARI and PWSD are similar and remain stable, which means that the accuracy of ABC-MRF-cluster classification accelerated by MA technology is nearly the same as that of ABC-MRF-cluster classification executed on a single computer. Additionally, both standard deviations of the objective function, which are much smaller than the mean and median values, prove the algorithm's stability. Furthermore, the t-test values, which can be used for evaluating whether two sets of data are significantly different from each other, are calculated. The p-values of all criteria between the ABC-MRF-cluster and the MA-based ABC are much greater than the threshold value of 0.05, which proves that there is no notable difference between the optimization results of the MA-based ABC framework and the ABC algorithm without MA technology. Therefore, we can conclude that the MA-based approach will not affect the optimization accuracy.

The improved MA-based ABC framework coupled with the ABC-EE algorithm in [16] and the ABC-EE without using MA technology were run 10 times for comparison. The accuracy statistics are shown in Table 7. The p-value of the RMSE between the ABC-EE and the MA-based ABC is greater than the threshold value of 0.05, which also proves that there is no notable difference between the optimization results of the MA-based ABC framework and the ABC algorithm without MA technology.

(1) Comparison with the former framework in [17]

To validate the improvement in the enhanced MA-based ABC framework, a comparison with the former framework proposed in [17] was made to solve the same RS optimization in the same computing environment and under the same parameter settings. When performing all the experiments, all of the bee agents were uniformly distributed on each computation node and the administrator agent was randomly distributed on one node.
The comparison results are shown in Table 8. In the one-node computation environment, the computation time consumed per iteration of the improved framework is shorter than that of the former framework, with an average computing efficiency promotion gap of 5.83% for dataset 1 and 6.57% for dataset 2. With the increase in the number of computation nodes, the average efficiency gap between the improved and the former framework becomes increasingly large, exceeding 50% when there are 20 nodes for dataset 1 and dataset 2. The results indicate that the improved MA-based ABC framework is more efficient than the framework proposed in [17].
(2) Influence of the number of computation nodes
To better analyze the influence of the number of computation nodes participating in the MA-based ABC framework for RS optimization, a series of comparison experiments was performed by setting the population of ABC to 10, 20, and 40 in different computation environments with 1, 2, 5, 10, and 20 nodes. Each experiment was performed 10 times. The statistical results are shown in Figure 7 and Table 8.
As shown in Figure 7, regardless of how many bees are involved in the computation, the average computation time per iteration decreases dramatically as the number of computation nodes increases. Moreover, each bee's average computation time in one iteration was calculated (Table 8). This finding indicates that the speedup of the improved MA-based ABC algorithm increases significantly with the increase in the number of nodes in the parallel computation environment. However, the computational efficiency of each node nonlinearly decreases due to the increased communication cost among nodes within the network when adding more nodes. The statistical values of the improved MA-based ABC framework's efficiency criteria are presented in Table 9. When increasing the number of computing nodes, the speedup increases significantly since all calculations can be carried out at multiple nodes concurrently. In addition, the results also show that with the increase in the number of nodes, the computational efficiency of a node decreases, as a higher network communication cost to other nodes is generated. Let there be 60 bees in the experiments for both datasets; a comparison between this improved framework and the former framework in [17] can be made in different parallel environments with 1, 2, 5, 10, and 20 computing nodes.
The efficiency statistics of these two frameworks for endmember extraction are recorded in Table 10. Clearly, the computation time per iteration of the improved framework is shorter than that of the former framework for both datasets. In particular, as more nodes are added to the computing environment, the efficiency advantage of the improved framework becomes more prominent. The speedup and computational efficiency are calculated as depicted in Figure 8. By analyzing lines of the same color, it can be found that, with the increase in the number of computing nodes, the speedup increases dramatically, but the computational efficiency decreases because of the rising communication cost among different nodes. Comparing lines containing the same shapes (circles or rectangles) indicates that the improved framework outperforms the former framework with a higher speedup value and a lower efficiency descent rate. By observing lines of the same type, it can be observed that both the speedup and computational efficiency tendencies of dataset 1 are much better than those of dataset 2. The reason is that the amount of time used to calculate the objective function value for dataset 1 is less than that used for dataset 2 after sampling, and the communication cost for dataset 1 occupies a higher proportion of the total calculation cost, which results in a greater computational improvement by saving on the same communication cost.
Stability
In this paper, the failure of a single computing node does not lead to the failure of the overall computation, which provides the proposed computational framework with good computational stability. Such stability is achieved by predefining a time limit t for each node's calculation. When the management agent does not receive the results returned by bee agents within the time limit t after a calculation instruction is sent, it can be concluded that the node's calculation or the network communication failed. In this case, the administrator agent can resend calculation instructions to obtain the correct calculation results from bee agents.
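The timeout-and-resend mechanism can be sketched as follows. This is a simplified, synchronous Python sketch of the idea (the actual framework uses asynchronous JADE messaging); `send_request`, the retry count, and the exception type are illustrative assumptions.

```python
import time

def request_with_retry(send_request, time_limit, max_retries=3):
    """Sketch of the stability mechanism: the administrator resends a
    calculation instruction when no result arrives within the time limit.
    `send_request` is a hypothetical callable that returns a result, or
    None when the node or the network has failed."""
    for _ in range(max_retries):
        start = time.monotonic()
        result = send_request()
        # Accept the result only if it arrived within the time limit t.
        if result is not None and time.monotonic() - start <= time_limit:
            return result
    raise TimeoutError("bee agent failed to respond within the limit")
```

Because every bee agent's task is idempotent (a neighborhood search on its own solution), resending an instruction is safe and does not corrupt the optimization state.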
Scalability
The proposed computational framework maintains good scalability. Any newly added bee agent can perform optimization together with other previously deployed bee agents as long as the agent is deployed in the same communication network and registered at the management agent. Therefore, the number of computation nodes in the parallel computation is easily increased, and the computation scale is easily expanded. Similarly, if certain computing nodes are not needed for parallel computing, their network IP addresses can be deleted from the management agent.
Flexibility
Hyperspectral RS image clustering and endmember extraction were applied to validate the performance of the MA-based ABC approach. However, the parallel computing framework proposed in this paper can be easily applied to many other RS optimization problems. In this framework, all the behaviors of the managing agent, as well as the communication behaviors of the bee agents and the neighborhood search behaviors, are universal and can be used for most RS optimization problems. We can solve different RS optimization problems by modifying each bee agent's objective function calculation method and its required parameters. Therefore, the parallel computing framework proposed in this paper has good flexibility.
Conclusions
In this paper, an improved parallel processing approach involving the integration of an ABC optimization approach and multiagent technology is proposed. Taking hyperspectral RS image clustering and endmember extraction as examples, two types of agents are designed: an administrator agent and multiple bee agents. By executing the behaviors of each agent and the communication among agents, an optimal result can be obtained without sacrificing accuracy, while the parallel computation dramatically increases efficiency. Moreover, a series of experiments proves that the improved MA-based ABC framework achieves a greater enhancement in parallel computational efficiency than the framework proposed in [17] does. Finally, the integration of MA and GPU technology by offloading each individual's behaviors to the GPU's arithmetic logic units under this MA-based ABC framework could be an even more efficient approach, which should be further studied.
Figure 3. The overall workflow of agents' behaviors for remote sensing (RS) clustering.
Figure 4. The major steps in generating a new solution. (a) The approach reported in [17] and (b) an improved version of (a). The steps labeled with * in (a) are redundant with (b).
The M endmembers {ẽ_j}_{j=1}^M are extracted from the dimension-reduced hyperspectral image {r_i}_{i=1}^N with N pixels; α_ij is the abundance, which represents the proportion of the j-th endmember in the i-th pixel; and ε_i is the random error. V({ẽ_j}_{j=1}^M) is the volume of the simplex whose vertices are the ẽ_j.

The Pavia dataset with a spatial resolution of 1.3 m was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) over the University of Pavia, Italy, in 2001. The dataset contains 103 bands (after the removal of the water vapor absorption bands and bands with a low signal-to-noise ratio (SNR)) with a wavelength range of 430-860 nm and covers an area of 610 × 340 pixels. A false color composite image (bands 80, 45, and 10) is shown in Figure 5a. The ground truth dataset, which contains nine classes, is shown in Figure 5b.
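The mixing model and simplex-volume criterion described here can be written out explicitly. The following is a reconstruction from the surrounding definitions; the volume expression is the standard N-FINDR-style formula and is an assumption, not a quotation from the paper.

```latex
% Linear mixing model for pixel r_i with M endmembers (reconstruction):
r_i = \sum_{j=1}^{M} \alpha_{ij}\,\tilde{e}_j + \varepsilon_i ,
\qquad \alpha_{ij} \ge 0 , \quad \sum_{j=1}^{M} \alpha_{ij} = 1 ,
\qquad i = 1, \dots, N .

% Endmember extraction maximizes the volume of the simplex spanned by
% the candidate endmembers (N-FINDR-style formula; assumed here):
V\left(\{\tilde{e}_j\}_{j=1}^{M}\right)
  = \frac{1}{(M-1)!}
    \left| \det
      \begin{pmatrix}
        1 & 1 & \cdots & 1 \\
        \tilde{e}_1 & \tilde{e}_2 & \cdots & \tilde{e}_M
      \end{pmatrix}
    \right| .
```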
Figure 5. The Pavia dataset: (a) a false color composite image and (b) the ground truth.

5.1.2. Dataset 2
The Indian Pine dataset with a spatial resolution of 20 m was obtained by the airborne visible/infrared imaging spectrometer (AVIRIS) in 1992. The dataset contains 169 bands (after the removal of the water vapor absorption bands and low-SNR bands) with a wavelength range of 400-2500 nm and covers an area of 145 × 145 pixels. A false color composite image (bands 54, 33, and 19) is shown in Figure 6a. The ground truth dataset, which contains 16 classes, is shown in Figure 6b.

Figure 6. The Indian Pine dataset: (a) a false color composite image and (b) the ground truth.
(2) Influence of the quantity of computation nodes
To better analyze the influence of the quantity of computation nodes participating in the MA-based ABC framework for RS optimization, a series of comparison experiments was performed by setting the population of the ABC to 10, 20, and 40 in different computation environments with 1, 2, 5, 10, and 20 nodes. Each experiment was performed 10 times. The statistical results are shown in Figure 7 and Table
Figure 7. The average computation time per iteration for different numbers of bees and computing nodes. (a) Dataset 1 and (b) Dataset 2.
Figure 8. The speedup and computational efficiency of the improved MA-based ABC framework.
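The speedup and computational efficiency plotted in Figure 8 follow the standard definitions S_p = T_1 / T_p and E_p = S_p / p. A minimal sketch with hypothetical per-iteration times; the measured values are those reported in Figure 7 and the efficiency tables, not these.

```python
def speedup(t_serial, t_parallel):
    """S_p = T_1 / T_p: how many times faster p nodes complete one iteration."""
    return t_serial / t_parallel


def efficiency(t_serial, t_parallel, p):
    """E_p = S_p / p: the fraction of ideal linear scaling actually achieved."""
    return speedup(t_serial, t_parallel) / p


# Hypothetical per-iteration times (seconds) for 1, 2, 5, 10, and 20 nodes.
times = {1: 40.0, 2: 21.0, 5: 9.5, 10: 5.5, 20: 3.5}
for p, t in sorted(times.items()):
    print(p, round(speedup(times[1], t), 2), round(efficiency(times[1], t, p), 2))
```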
Table 1. The Time Complexity of the Framework.
Table 2. The Number of Transferred Solutions among Agents in one Iteration.
Table 3. The Hardware Configuration Used in the Experiments.
Table 4. Contingency Table of the Clustering Result and Ground Truth Data.
Table 6. Accuracy Statistics of the MA-based ABC Framework and the ABC-MRF-cluster in [5] for Dataset 2.
Table 8. Efficiency Statistics of the Improved MA-based ABC Framework and the Framework Proposed in [17] for Clustering.
Table 9. Statistical Values of the Improved MA-based ABC Framework's Efficiency Criteria.
Table 10. Efficiency Statistics of the Improved MA-based ABC Framework and the Former Framework Proposed in [17] for Endmember Extraction.
Dyslexic individuals orient but do not sustain visual attention: Electrophysiological support from the lower and upper alpha bands
Individuals with developmental dyslexia have been characterized by problems with attentional orienting. In the current study, we specifically focused on possible changes in endogenous visual orienting that may be reflected in the electroencephalogram. A variant of the Posner cuing paradigm was employed with valid or invalid central cues that preceded target stimuli presented in the left or right visual field. The target stimuli consisted of vertical or horizontal stripes with low (two thick lines) or high (six thin lines) spatial frequencies. We examined lateralized alpha power in the cue-target interval, as recent studies revealed that a contra- vs. ipsilateral reduction in alpha power relates to the orienting of attention. An initial orienting effect in the lower alpha band was more pronounced for dyslexic individuals than for controls, suggesting that they oriented at an earlier moment in time. However, in contrast with controls, at the end of the cue-target interval no clear contralateral reduction in the upper alpha band was observed for dyslexic individuals. Dyslexic individuals additionally displayed slower responses, especially for invalidly cued high spatial frequency targets in the left visual field. The current data support the view that dyslexic individuals orient well to the cued location but have a problem with sustaining their attention.
Introduction
Developmental dyslexia has been defined as "an unexpected difficulty in reading in individuals who otherwise possess the intelligence and motivation considered necessary for fluent reading, and who also have had reasonable reading instruction" (Ferrer et al., 2010, p. 93). About 7% of the population is affected by developmental dyslexia (Banfi et al., 2017), which points to the importance of a better understanding of this condition. In studies on developmental dyslexia, it has frequently been observed that dyslexic individuals have problems with visuospatial attentional orienting (e.g., see Banfi et al., 2017; Collis et al., 2013; Facoetti and Turatto, 2000; Facoetti et al., 2001; Facoetti et al., 2003; Goswami et al., 2014; Klimesch et al., 2001; Ruffino et al., 2014; Vidyasagar and Pammer, 2010; Vidyasagar, 2013). This view has also been linked with structural changes in the brains of dyslexic individuals. Płoński et al. (2016) reported exclusive changes in the left hemisphere including the superior and middle temporal gyri, the subparietal sulcus, and the prefrontal areas (see also Roux et al., 2012). Other researchers noted the relation with processing along the magnocellular pathway (Livingstone et al., 1991; for reviews, see Stein, 2001, 2014), which has also been linked to the dorsal pathway (Gori et al., 2016), and the posterior parietal cortex (Jaśkowski and Rusiak, 2005; Lobier et al., 2014; Valdois et al., 2018). Recently, Giraldo-Chica et al. (2015) observed that only the left lateral geniculate nucleus of the thalamus was smaller in individuals with dyslexia, which points to reduced processing along the magnocellular pathway in the left hemisphere. Nevertheless, the idea that dyslexic individuals experience problems with visuospatial attentional orienting (i.e., the attentional deficit hypothesis) has also been criticized. Ramus et al. (2018) recently argued that the apparent consensus on this hypothesis is based on a rather selective review of the evidence.
Furthermore, Lukov et al. (2015) presented results indicating that visuospatial attention deficits do not underlie different types of dyslexia (see also Ziegler et al., 2010; Collis et al., 2013). Given this rather mixed state of affairs, we decided to further explore attentional orienting in developmental dyslexia.
Several different although related aspects of attentional orienting need to be distinguished to better characterize the attentional deficit hypothesis for dyslexic individuals. The first crucial distinction is between endogenous and exogenous spatial orienting, which are often examined with different variants of the Posner cuing paradigm (e.g., see Posner et al., 1980). The second important aspect is the time course of orienting as there are some differences between dyslexic individuals and controls with regard to the moment at which attention is directed after a relevant cue (Hari and Renvall, 2001).
Endogenous and exogenous orienting in dyslexic individuals
Endogenous and exogenous orienting are triggered by different processes although they may be considered to be related (e.g., see Chica et al., 2013). Endogenous orienting refers to the voluntary direction (e.g., set by instruction or internal goals) of attention to a location (e.g., see Van der Lubbe et al., 2006). In most studies, centrally presented cues are used that point to the to-be-attended location. At a behavioral level, the orienting effect is often estimated as the difference in performance between more frequent (e.g., 80%) validly cued, and less frequent (e.g., 20%) invalidly cued targets. As processing of the cue requires some time, cuing effects usually emerge after a short stimulus onset asynchrony (SOA) of about 300 ms. Endogenous orienting has been related to a bilateral dorsal frontal-parietal network (Corbetta and Shulman, 2002).
Exogenous orienting refers to the automatic and involuntary attraction of attention by specific stimulus features, e.g., the onset or offset of a stimulus (e.g., see Van der Lubbe et al., 2005), or the presence of a deviant (pop-out) stimulus that contrasts with its surroundings. Exogenous orienting can be examined by comparing performance between targets at cued and uncued positions that have equal probabilities of occurrence, and has been related to a right lateralized ventral frontal-parietal network (Corbetta and Shulman, 2002), especially the right temporo-parietal junction. Exogenous orienting effects vanish with increasing SOAs and may even invert with SOAs longer than 250 ms, which has been denoted as inhibition of return (IOR) (e.g., see Klein, 2000). A recent study suggests that endogenous and exogenous orienting have slightly different effects on processing in visual brain areas (see Bekisz et al., 2016).
Importantly, some researchers have argued that their results with dyslexic children (e.g., see Facoetti and Turatto, 2000; Facoetti et al., 2001) point to a selective impairment of right parietal functions related to exogenous orienting (for a recent overview see Liu et al., 2018), as no exogenous orienting effects were observed for these children, although this impairment may be restricted to children who have low pseudoword reading accuracy (see Facoetti et al., 2006). Most relevant for the current study, Facoetti et al. (2001) used an endogenous orienting task (Exp. 2) and observed that especially responses to invalidly cued left targets were delayed in dyslexic children relative to controls. Invalidly cued targets are thought to require exogenous orienting triggered by their onsets; therefore, these findings were related to a deficit in the right parietal cortex. This view on dyslexia has been denoted as the left-side mini-neglect hypothesis, assuming a diminished ability to orient to stimuli in the left visual field, comparable to neglect patients (e.g., see Blumenfeld, 2010). Thus, these studies point to differences in dyslexic participants in exogenous orienting that may be related to the right parietal cortex (see also Lobier et al., 2014). Other studies, however, point to anatomical changes in the left hemisphere (e.g., see Giraldo-Chica et al., 2015). Furthermore, Kermani et al. (2018) recently argued that reading experience actually biases visual search and therefore may modulate observed asymmetries in search efficiency between dyslexic individuals and controls.
Results of several other studies suggest that dyslexic individuals have deficits in endogenous orienting (Buchholz and McKone, 2004; Buchholz and Aimola Davies, 2005, 2008). Buchholz and McKone (2004) revealed that high-functioning adults with developmental dyslexia (i.e., university students who met the criteria of dyslexia) performed comparably to a control group on a visual search task with pop-outs that trigger exogenous orienting, while they performed worse on a visual search task that required endogenous orienting. Interestingly, higher search slopes were observed in the latter task for individuals with reduced phonological abilities. Buchholz and Aimola Davies (2008) used the Attentional Network Task developed by Fan et al. (2002), which makes it possible to separate effects of alertness, orienting, and executive control. No differences between dyslexic adults and controls were observed with regard to alerting effects and executive control. However, attentional orienting appeared to be more difficult for the dyslexic participants with cues at 6.5°, while results were comparable for cues at 3°. These results suggest that dyslexic individuals have problems with directing their attention in the peripheral visual field. In a very recent study, Liu et al. (2018) compared Chinese children with poor reading abilities with controls, and observed no endogenous orienting effect for the poor-reading children, whereas a clear orienting effect was observed for controls. The absence of an orienting effect for the poor-reading children might be related to slower processing of the cue, and/or a slower recruitment of attentional processes. Slower processing of the cue in dyslexic individuals (e.g., see Judge et al., 2013), or the slower subsequent recruitment of attention, likely has an impact on exogenous and endogenous orienting effects.
The time course of orienting effects in dyslexic individuals
Several studies indicate that the time course of orienting in dyslexic individuals differs from that of controls. It has been argued that sluggish attentional shifting (SAS) can account for the impaired processing of rapid stimulus sequences in dyslexia (see Hari and Renvall, 2001). Additionally, there is support that the SAS hypothesis concerns both the visual and the auditory modalities (Lallier et al., 2009, 2010b). The SAS hypothesis may also explain the commonly observed response delays in dyslexic participants (e.g., see Facoetti et al., 2000a, 2000b, 2003; Jonkman et al., 1992; Wijers et al., 2005). Furthermore, Facoetti et al. (2000b) observed that dyslexic children were not able to orient quickly to a peripheral cue, as no cuing effects were observed with short stimulus onset asynchronies (SOAs) between the cue and the to-be-detected targets (see also Facoetti et al., 2005). By employing different cue sizes and different SOAs in their third experiment, Facoetti et al. (2000b) were also able to show that cue size was no longer effective in dyslexics with long SOAs, which suggested that, unlike controls, dyslexic children were not able to keep their attention focused over time. Facoetti et al. (2009) additionally showed no exogenous orienting effects in visual and auditory tasks at short SOAs of 100 ms in dyslexic children who had problems with reading non-words, while these cues were effective for controls. For longer SOAs of 250 ms, exogenous orienting effects were present for these children while they were absent for the controls. These results were corroborated by Ruffino et al. (2014). Altogether, most of the results of these studies suggest that exogenous orienting effects may be delayed in dyslexic individuals, in line with the SAS hypothesis, although there is also some support that dyslexics have problems with sustaining their attention.
Interestingly, although spatial attention is often linked to the right parietal cortex, there is quite some evidence that participants are not only able to attend to a specific location, but also to a specific moment in time (i.e., temporal orienting; for a review, see Nobre, 2001). Coull et al. (2000) revealed that temporal orienting involves a left-lateralized frontal-parietal network. Given the reported changes in the left hemisphere (see above), and the aforementioned differences in orienting over time, one could argue that the observed attentional differences between dyslexic individuals and controls are actually more related to changes in temporal orienting than in spatial orienting. Support for this idea comes from studies examining attentional masking (Ruffino et al., 2010), changes in the attentional blink (Facoetti et al., 2008; Lallier et al., 2010a), and temporal order judgements (Jaśkowski and Rusiak, 2008; Ortiz et al., 2014). A more direct way to examine the time course of spatial orienting is to use direction-dependent measures derived from the electroencephalogram (EEG). Wijers et al. (2005) used an endogenous orienting variant of the Posner paradigm to examine whether high-functioning dyslexic individuals differed from age- and gender-matched controls. A target letter was presented with an SOA of 750 ms after the cue and required only a response on 25% of the trials. Dyslexic individuals responded slower than controls and also tended to miss more targets. Event-related potentials (ERPs) were derived from the EEG and showed three direction-dependent effects during the orienting phase. First, between 200 and 300 ms, increased contralateral (relative to the cued side) negativity was observed above occipital sites.
This negativity, known as the early directing attention negativity (EDAN; e.g., see Harter et al., 1989; Hopf and Mangun, 2000; Van der Lubbe et al., 2006), has been related to selection of the relevant side of the arrows (Van Velzen and Eimer, 2003), but recently it has also been linked to microsaccades (Meyberg et al., 2017). Secondly, the EDAN was followed by a posterior contralateral positivity that remained until stimulus onset. This positivity, referred to as the late directing attention positivity (LDAP; e.g., see Harter et al., 1989; Hopf and Mangun, 2000; Van der Lubbe et al., 2006), is thought to reflect the influence of spatial attention on visual processing. Importantly, Wijers et al. observed no differences between dyslexic individuals and controls for the EDAN and the LDAP. Thirdly, between 350 and 750 ms, a frontal negativity was observed that may be related to the anterior directing attention negativity (ADAN; Eimer et al., 2002; Nobre et al., 2000). Although this component has been frequently linked to attentional orienting, there is still considerable discussion about its functional role (see Green et al., 2005; Hopf and Mangun, 2000; Meyberg et al., 2017; Seiss et al., 2007; Van der Lubbe et al., 2006; Van Velzen et al., 2006). For control participants, differences between left and right cues were present above the right hemisphere, while for dyslexic participants an additional effect was present above the left hemisphere. Wijers et al. concluded that their findings support the hypothesis about a dysregulated interhemispheric asymmetry in dyslexia (Eckert and Leonard, 2003a; Eckert et al., 2003b). Importantly, the findings reported by Wijers et al. do not really support the SAS hypothesis, as one would expect to observe differences between dyslexic participants and controls above posterior sites, where effects of visuospatial orienting are largest (e.g., see Van der Lubbe et al., 2006).
For example, the LDAP might have been delayed or less pronounced for dyslexic individuals as compared to controls, but this is not what they observed.
The time course of orienting in dyslexic individuals examined with EEG
Importantly, the aforementioned components (EDAN, LDAP, and ADAN) are all lateralized components derived from ERPs, also referred to as ERL (event-related lateralized) components, which implies that only lateralized activity that is time-locked to cue onset remains. Recent studies have indicated that the moment of attentional orienting likely varies over trials (and participants), and this variability may imply that certain effects cancel out due to the averaging procedure used to compute ERPs (e.g., see Van der Lubbe and Utzerath, 2013). The lack of a posterior group difference in the study of Wijers et al. could be due to this variability. Interestingly, several studies revealed that effects of endogenous orienting in the cue-stimulus interval are also clearly visible in lateralized activity in the posterior alpha (α) band (8-13 Hz; e.g., see Worden et al., 2000; Thut et al., 2006), which is based on the analysis of the raw EEG (i.e., before averaging across trials). The common observation is that alpha power is reduced above sites contralateral to the side to which attention was directed as compared to ipsilateral sites. Van der Lubbe and Utzerath (2013) followed the ideas of Thut et al. (2006) by computing ipsi-contralateral differences in alpha power weighted by the sum of their powers, which reduces overall inter-individual power differences. Additionally, in the method employed by Van der Lubbe and Utzerath an average is computed across left and right cued sides, which results in the so-called lateralized power spectra (LPS) index. The advantage of the latter procedure is that overall hemispheric differences in alpha power unrelated to the attended side cancel out (for a comparable procedure with the lateralized readiness potential, see Coles, 1989; De Jong et al., 1988). Van der Lubbe and Utzerath revealed that after correcting for overall hemispheric differences, a clear decrease in contralateral vs. ipsilateral alpha power was present.
In line with several other studies (e.g., see Klimesch et al., 2007, 2011; Klimesch, 2012; Thut et al., 2006), this contralateral reduction was interpreted as a release of inhibition that facilitates selection of the forthcoming lateral stimulus. Since this method reduces the impact of trial-to-trial as well as between-participant variability, it is an excellent tool to investigate attention-related differences between dyslexic individuals and controls during the orienting phase.
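The LPS index described here reduces to a simple normalized difference averaged across cue sides. A minimal sketch, assuming scalar alpha-band power estimates per hemisphere and condition (function and variable names are illustrative, not those of the original analysis scripts):

```python
def lps(power_left_hemi, power_right_hemi, cue_side):
    """Lateralized power for one cue condition.

    For a left cue the right hemisphere is contralateral; for a right cue
    the left hemisphere is. LPS = (ipsi - contra) / (ipsi + contra), so a
    positive value indicates reduced contralateral alpha power, i.e., a
    release of inhibition at the attended side.
    """
    if cue_side == 'left':
        ipsi, contra = power_left_hemi, power_right_hemi
    else:
        ipsi, contra = power_right_hemi, power_left_hemi
    return (ipsi - contra) / (ipsi + contra)


def lps_index(power_left_cue, power_right_cue):
    """Average the LPS across left- and right-cue trials so that hemispheric
    power differences unrelated to the attended side cancel out.
    Each argument is a (left_hemisphere, right_hemisphere) power pair."""
    return 0.5 * (lps(*power_left_cue, 'left') + lps(*power_right_cue, 'right'))
```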
The current study
Our interest was especially directed at endogenous orienting. This type of orienting seems most relevant for reading, as reading is commonly driven by internal rather than by external goals. A version of the Posner paradigm with central cues was used that validly indicated the side where the target appeared on 80% of the trials after an SOA of 1000 ms. As targets, non-alphanumeric stimuli were employed, as we intended to demonstrate that observed effects are not limited to letter-like stimuli (in contrast with the suggestion by Collis et al., 2013). We chose to use horizontal or vertical gratings with only two thick lines (low spatial frequency: LSF) or six thin lines (high spatial frequency: HSF). Magnocellular cells are more sensitive to LSF stimuli (see Stein, 2001, 2014); therefore, suboptimal performance for these stimuli might be observed for dyslexic individuals. Changes in the time course of endogenous orienting will be examined by using the aforementioned ERL components, and especially by computing the LPS for the lower (α1) and higher (α2) alpha bands (Van der Lubbe and Utzerath, 2013), as the latter method may be more sensitive in assessing changes in orienting. The main aim of the current study was thus to further test whether developmental dyslexia is related to a delay or changes in endogenous attentional orienting, which accords with the SAS hypothesis (e.g., Hari and Renvall, 2001). The mini-neglect hypothesis (e.g., Facoetti et al., 2001) implies that dyslexic individuals may especially have problems with responding to invalidly cued targets on the left, while orienting effects may be small for targets on the right.
To further examine the mini-neglect hypothesis and also control for possible group differences in executive functions we compared the performance of dyslexic individuals and controls on a few neuropsychological tasks (Trail Making, Bourdon-Wiersma, and Balloons) that have been employed to assess problems with executive functions, visual perception, and visual attention (see also Vieira et al., 2013).
Participants
Twenty-nine participants were tested in this study. Due to measurement problems, the data of three participants could not be used, which left 26 participants. Twelve of the participants (the experimental group) had been diagnosed as dyslexic (6 male, 6 female, 11 right-handed, 1 left-handed), while 14 participants served as controls (9 male, 5 female, all right-handed). Handedness was assessed with Annett's Handedness Inventory (Annett, 1970). All participants were recruited among the local student population at the University of Twente, the Netherlands, and had normal or corrected-to-normal vision. None of the participants were colorblind or had a history of neurological or psychiatric disease. Participants of the experimental group were also asked to bring their dyslexia statement to the experiment. Before the experiment, participants signed an informed consent form. The employed experimental procedures were approved by the ethical committee of the Faculty of Behavioral Sciences at the University of Twente.
Tasks and stimuli
A small dyslexia test battery was used, the DST NL (a Dutch dyslexia screening test) to examine whether the experimental and the control group differed on the relevant characteristics. A large number of the participants of the experimental group were already familiar with some of the subtests (seven participants). Therefore, we only reported results on these tests for the remaining five participants of the experimental and all participants of the control group. All participants additionally had to complete three neuropsychological tasks (Trail Making, Bourdon-Wiersma, and Balloons) that are clinically used to assess problems with executive functions, visual perception, and visual attention. The Trail Making task (Reitan, 1958) consists of two parts. Part A is a relatively simple task in which the participant has to connect consecutive numbers (1-2-…), while in part B, they have to alternate between numbers and letters (1-A-2-B-…). The idea is that part B tests more complex executive functions related to the frontal lobes (e.g., see Miskin et al., 2016). The Bourdon-Wiersma test (Lezak, 1995) requires participants to select a group of dots (e.g., 4) on a sheet of 50 lines of dot patterns consisting of 3, 4, or 5 dots. This test checks for problems with visual perception and vigilance. The Balloons task (Edgeworth et al., 1998) is also a paper and pencil test that consists of two versions. Version A is a control test with pop-out stimuli (select balloons with a lower vertical line among balloons without a line) that checks for visual impairments and exogenous orienting, while version B is a form of absence search (select balloons without a line among balloons with a line) that is thought to require inspection of each individual stimulus. In all neuropsychological tasks that we examined, the time taken in seconds to complete the tests was used as the dependent variable.
The main task during which EEG was measured was an endogenous version of the Posner (1980) cuing paradigm (e.g., see Van der Lubbe et al., 2006). The task consisted of 672 experimental trials in total, separated in four blocks of 168 trials each. The experimental trials were preceded by 20 practice trials. The total duration of the task was approximately 70 min. An overview of the relevant events on a trial is displayed in Fig. 1.
On every trial a default display was presented consisting of a central white fixation dot (0.16 × 0.16°) on a black background. The fixation dot was flanked on the left and right by two open light grey circles (at 12.06°, with r = 0.61°). The start of a trial was indicated by an auditory warning signal ("BEEP!!") and an enlargement of the fixation dot for 400 ms. Participants had to keep their eyes directed at the fixation dot. Six hundred milliseconds after offset of the warning signal, a rhomb replaced the fixation dot. The rhomb (height 1.31°, width 2.62°) consisted of two colored triangles (red and green), pointing to the left and the right. Either the green or the red triangle was defined as task relevant. The relevant triangle pointed to the side of a subsequently presented to-be-discriminated target with a validity of 80%. Task relevance of the red or green triangle changed halfway through the experiment. The rhomb was presented for 400 ms. After an SOA of 1000 ms relative to the rhomb, the target was presented on 95% of the trials. On 80% of the trials the target appeared on the cued side, on 15% of the trials the target appeared on the uncued (invalidly cued) side, whereas on 5% of the trials no target occurred (catch trials).
We employed different types of targets that could appear in the left or the right open circle, which were presented for a duration of 300 ms. They either had a high or a low spatial frequency (HSF: six thin lines of 0.08°; LSF: two thick lines of 0.25°), and were presented in either a vertical or a horizontal orientation. Participants were instructed to press a left button with their left index finger (CTRL left) when a target with a horizontal orientation was presented and a right button with their right index finger (CTRL right) when a target with a vertical orientation was presented. Participants were instructed to respond as fast and accurately as possible.
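The trial probabilities above (80% valid, 15% invalid, 5% catch) and the crossed target features can be sketched as a simple trial-list generator. This mirrors the design only; it is not the authors' actual stimulus-presentation script, and the field names are assumptions.

```python
import random


def make_trials(n=672, seed=1):
    """Illustrative trial list: 80% validly cued, 15% invalidly cued,
    5% catch (no target). Target side follows the cue on valid trials;
    spatial frequency and orientation are drawn uniformly."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        cue = rng.choice(['left', 'right'])
        r = rng.random()
        if r < 0.05:
            # Catch trial: no target appears.
            trials.append({'cue': cue, 'target': None})
            continue
        valid = r < 0.05 + 0.80
        side = cue if valid else ('right' if cue == 'left' else 'left')
        trials.append({'cue': cue,
                       'target': {'side': side,
                                  'sf': rng.choice(['LSF', 'HSF']),
                                  'orientation': rng.choice(['horizontal',
                                                             'vertical'])}})
    return trials
```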
Apparatus and EEG recording
Participants were seated on a comfortable chair in a darkened room at approximately 70 cm from a 17-in. monitor. Presentation software (Neurobehavioral Systems, Inc., 2012) installed on one computer was used to control stimulus presentation and send relevant markers to the EEG amplifier to code the onset of relevant events. The left and right CTRL buttons, which had to be pressed with the index fingers of the left and right hand, were located on a standard QWERTY keyboard.
EEG was recorded using passive Ag/AgCl ring electrodes placed on standard scalp sites according to the extended 10-20 system at 61 locations mounted in an elastic cap (Braincap, Brain Products GmbH). A ground electrode was affixed at the forehead. The horizontal and vertical electro-oculogram (hEOG and vEOG) were measured by using electrodes located above and below the left eye and by using electrodes located at the outer canthi of the left and right eye. Electrode gel and standard procedures were used to reduce resistance (< 5 kΩ). EEG and EOG data were amplified using a 72-channel QuickAmp (Brain Products GmbH). This amplifier has a built-in average reference. Data sampling with a frequency of 500 Hz and digital filters (TC = 5.0 s, low-pass filter 100 Hz, notch filter 50 Hz) was carried out with BrainVision Recorder software (Brain Products GmbH), which was installed on a separate acquisition computer.
Processing of the behavioral data
Standard procedures were used to score performance on the neuropsychological tasks. The behavioral data of the endogenous cuing task were analyzed by employing Matlab-scripts on the marker data for those trials that contained no detectable eye movements (|hEOG| < 60 µV, from 0 to 1200 ms after cue onset).
Reaction times (RT) faster than 100 ms were considered premature, while RTs slower than 2000 ms were categorized as misses. Only trials with correct responses were used to compute the average RT for each combination of Spatial Frequency of the target (LSF or HSF), Cue Validity (valid or invalid), Stimulus Side (left or right) and Response Side (left or right). The effects of the factors Spatial Frequency, Cue Validity, Stimulus Side, Response Side, and the between-subjects factor Group (dyslexic individual or control) on RT were analyzed with a repeated measures ANOVA. Proportions of correct responses (PC) for each category were transformed (arcsin) to meet normality assumptions, and were subsequently analyzed in the same way as RT. Proportions of misses for each category underwent the same transformation and analysis as the PCs. All statistical analyses were performed with IBM SPSS 20 (IBM Corporation).

Fig. 1. Setup of trial. A rhomb, consisting of a green and a red triangle, was presented 1000 ms after an auditory warning signal. Either the red or the green part of the rhomb signaled the most probable side at which a target would occur. The target occurred 1000 ms after cue onset. The target was presented for 300 ms and consisted of vertical or horizontal lines with either a low (two thick lines) or a high (six thin lines) spatial frequency. Participants were instructed to respond with a left or right button press depending on the orientation (vertical or horizontal) of the target lines.
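The trial-scoring rules above can be sketched as follows; the thresholds come from the text, while the function name and return values are ours, and the arcsine transform is applied in its usual arcsin(√p) variance-stabilizing form, which the text abbreviates as "arcsin":

```python
import numpy as np

def score_trials(rts_ms, correct):
    """Classify trials and summarize RT/accuracy per the rules above.

    rts_ms: reaction times in ms; correct: boolean array of response accuracy.
    Returns the mean RT over correct, non-premature, non-missed trials,
    the proportion correct among valid trials, and its arcsine transform.
    """
    rts_ms = np.asarray(rts_ms, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    premature = rts_ms < 100.0           # faster than 100 ms
    missed = rts_ms > 2000.0             # slower than 2000 ms
    valid = ~premature & ~missed
    mean_rt = rts_ms[valid & correct].mean()
    pc = correct[valid].mean()           # proportion correct among valid trials
    pc_arcsin = np.arcsin(np.sqrt(pc))   # variance-stabilizing transform
    return mean_rt, pc, pc_arcsin
```

In the actual analysis this would be computed per cell of the Spatial Frequency × Cue Validity × Stimulus Side × Response Side design before entering the ANOVA.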
Processing of the EEG data
The raw EEG was analyzed with BrainVision Analyzer software (version 2.1.0.327) by first selecting a −1000 to 3000 ms time window relative to cue onset. Low cutoff (TC = 2.5 s) and high cutoff (25 Hz) filters together with a notch filter of 50 Hz were applied, and a baseline was set from −100 to 0 ms. Trials with detectable eye movements (see above) were removed. EEG data were first checked for large artifacts (min/max: ± 250 µV, low activity for 50 ms > 0.1 µV). Next, Independent Component Analysis (ICA) was carried out to remove components with a non-cortical origin. After resetting the baseline, the EEG data were checked for residual artifacts (min/max: ± 150 µV, low activity for 50 ms > 0.1 µV). Finally, two different analyses were performed to examine the cue-target interval.
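The two amplitude-based artifact criteria can be sketched as a simple per-channel check; this is a minimal illustration with our own function name and defaults (the actual screening was done in BrainVision Analyzer):

```python
import numpy as np

def is_artifact(epoch_uv, fs=500, amp_limit=250.0, flat_win_ms=50, flat_thresh=0.1):
    """Flag an epoch using the amplitude and low-activity criteria above.

    epoch_uv: 1-D array of one channel in microvolts. The epoch is rejected
    if any sample exceeds +/- amp_limit, or if the peak-to-peak amplitude
    within any flat_win_ms window stays below flat_thresh (a flat signal
    usually indicates a bad electrode contact).
    """
    x = np.asarray(epoch_uv, dtype=float)
    if np.any(np.abs(x) > amp_limit):
        return True
    win = max(1, int(fs * flat_win_ms / 1000))
    for start in range(0, len(x) - win + 1):
        seg = x[start:start + win]
        if seg.max() - seg.min() < flat_thresh:
            return True
    return False
```

For the second pass described above one would simply call the same check with `amp_limit=150.0`.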
Dyslexia screening and neuropsychological test outcomes
A control analysis revealed that after removal of the data of three participants with measurement problems, the control group (M age = 20.4 yrs, SD = 2.3) was slightly younger than the experimental group of high-functioning dyslexic individuals (M age = 23.3 yrs, SD = 4.3), F(1,24) = 4.5, p = .043. This difference seems due to the involvement of more students from the first and second years in the control group. To exclude the possibility of a confound in subsequent analyses, Age was initially always included as a covariate. A control analysis on the obtained handedness scores revealed no group differences, F(1,24) = 0.5.
The scores on the subtests of the DST NL were first statistically evaluated with a MANOVA (Wilk's Lambda), while using Age as a covariate. This analysis revealed no effect of Age, F(6,10) = 1.6, and no group difference, F(6,10) = 1.0. However, separate ANOVAs per subtest revealed group differences on the nonsense sentence reading and absence search subtests. The latter result suggests that the dyslexic individuals were actually better in absence search than the controls.
Behavioral results on the endogenous cuing task
Results on correct RTs (see Fig. 3) revealed no main effect or any relevant interaction involving the factor Age; this factor was therefore excluded from further analyses. The dyslexic individuals responded on average slower than the control group.
Event-related lateralizations (ERLs)
ERLs for both groups and topographical maps for relevant time windows are displayed in Fig. 4.
Analyses of lateralized amplitudes determined for the PO8/7 electrode pair including the covariate Age did not reveal relevant effects involving this factor; it was therefore excluded from further analyses. ANOVAs for each 20 ms time window revealed a negative deviation from zero from 280 to 320 ms, Fs(1,24)
Lateralized power spectra (LPS)
LPS estimates for the lower α1 band and the upper α2 band and topographical maps for relevant time windows for both groups are displayed in Figs. 5 and 6.
These positive estimates imply that contralateral power was reduced as compared to ipsilateral power. These analyses also revealed a group difference from 400 to 460 ms, F(1,24) > 8.1, p < .010, ηp² > 0.25.
However, the analysis including the covariate Age indicates that this effect may be due to small age differences between the groups. Further inspection of the covariate revealed that for the time window in which its influence was most significant (460-480 ms), a highly significant positive correlation between Age and LPS amplitude was observed (r = 0.602, p = .001). Inspection of the data suggested that this effect could be due to an outlier in the experimental group. After removal of this outlier, the effect of Age was no longer significant. Positive deviations from zero were still observed from 400 to 740 and from 840 to … ms.

Fig. 4. ERLs are based on a double subtraction technique applied on event-related potentials (ERPs) that estimates the voltage difference between contralateral and ipsilateral electrodes that depends on attentional orienting while correcting for overall hemispherical differences. The displayed contra-ipsilateral electrode pair PO8/7 is located above occipital cortex. Topographies of the ERLs for the most relevant time windows (280-300 ms and 620-640 ms) indicate that effects were most pronounced above occipital cortex. The earliest lateralized effect is often denoted as the EDAN (early directing attention negativity), while the later opposite effect is mostly denoted as the LDAP (late directing attention positivity).

Given the observed group differences on the LPS estimates for the upper alpha band from 940 to 1000 ms, we explored whether there was any relation between individual response speed in the endogenous orienting task and the observed reduction in contralateral alpha power. Analyses for the dyslexic individuals showed a trend to a negative correlation between the LPS and RT (r = -.49, p = .055 [1-tailed]), suggesting that faster responses may be related to a stronger contralateral alpha reduction (lower panel Fig. 7). No such effect was present for the controls (r = .02, p = .480 [1-tailed]; upper panel Fig. 7).
We also explored whether there was a relation between the LPS estimates for the upper alpha band from 940 to 1000 ms and performance on any of the neuropsychological tasks per group. After Bonferroni correction, no effects were observed for the controls (p > 0.23), but for the dyslexic individuals we observed a significant relation between performance on the Bourdon-Wiersma test and the LPS (r = -.70, p = .006 [1-tailed]; see lower panel Fig. 8).
Finally, since performance on the Posner task and performance on the Balloons B task is thought to index endogenous orienting, we examined whether there was a relation with performance on these tasks. A trend to a significant correlation was observed between the response times (including both groups) in the Balloons B task and the validity effect (invalidly -validly cued RT) in the Posner task (r = .29, p = .077 [1-tailed]).
Fig. 5. Grand averages of lateralized power spectra (LPS) estimates for the lower alpha band (α1) for controls and dyslexic individuals. The LPS is based on a double subtraction technique that estimates the difference in power for a specific frequency band between ipsilateral and contralateral electrodes that depends on attentional orienting. This method is based on power estimates on a single trial basis, which has the advantage that trial-to-trial changes are not cancelled out. Like ERLs, this method corrects for overall hemispherical differences. The displayed ipsi-contralateral electrode pair PO8/7 is located above occipital cortex. Topographies of the LPS for the most relevant time windows (420-440 ms and 540-560 ms) indicate that effects were most pronounced above occipital cortex.

Fig. 6. Grand averages of lateralized power spectra (LPS) estimates for the upper alpha band (α2) for controls and dyslexic individuals. The displayed ipsi-contralateral electrode pair PO8/7 is located above occipital cortex. Topographies of the LPS for the most relevant time windows (420-440 ms and 960-980 ms) indicate that effects for controls were most pronounced above occipital cortex.
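The double subtraction described in these captions can be written out explicitly. Below is a minimal sketch for one electrode pair and one time window, using the sign convention implied in the text (ipsilateral minus contralateral, so positive values indicate reduced contralateral power); the function name is ours:

```python
def lateralized_power(p_po7_cue_left, p_po8_cue_left,
                      p_po7_cue_right, p_po8_cue_right):
    """Double-subtraction lateralization index for one time window.

    Inputs are band-power estimates (averaged over trials) at PO7 and PO8
    for left-cue and right-cue trials. For a left cue, PO8 (right
    hemisphere) is contralateral and PO7 ipsilateral; for a right cue the
    roles swap. Averaging the two (ipsi - contra) differences cancels
    overall hemispheric power asymmetries. Positive values indicate
    reduced contralateral power.
    """
    left_cue = p_po7_cue_left - p_po8_cue_left     # ipsi - contra
    right_cue = p_po8_cue_right - p_po7_cue_right  # ipsi - contra
    return 0.5 * (left_cue + right_cue)
```

Note how a constant power offset at one hemisphere drops out of the average, which is exactly why the double subtraction is used.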
Discussion
Several researchers (e.g., see Banfi et al., 2017) have postulated that developmental dyslexia is characterized by problems with attentional orienting, which may be denoted as the attentional deficit hypothesis. Other researchers, however, have argued that this apparent consensus is due to a rather selective view on the evidence (see Ramus et al., 2018). Part of the confusion in the literature may be related to the different attentional deficits that have been reported (see also Valdois et al., 2018). Several researchers indicated that dyslexic individuals have problems with exogenous orienting (e.g., Facoetti et al., 2001), which can be related to the mini-neglect hypothesis, while other researchers pointed to specific problems with endogenous orienting (e.g., see Buchholz and McKone, 2004; Liu et al., 2018). Furthermore, there is also support for changes in temporal orienting, either by a delayed recruitment of attention (the SAS hypothesis; Hari and Renvall, 2001), or by problems with sustaining attention (Facoetti et al., 2000b). The goal of the current paper was to further examine the evidence related to endogenous orienting, as the ability to voluntarily direct attention towards a specific location seems crucial for reading. We focused not only on behavioral differences between dyslexic individuals and controls, both in neuropsychological tests and in an endogenous orienting task, but were especially interested in examining an electrophysiological index of attentional orienting that is based on local EEG changes in the lower and higher alpha band, assessed with LPS.
Before focusing on possible group differences related to attentional orienting, it is relevant to establish whether the dyslexic individuals scored worse on the Dutch dyslexia screening test (DST NL ) than the controls. Several members of the experimental group were already familiar with this test, however, the results revealed that the remaining five members of the experimental group scored worse when compared to the control group, especially on the nonsense word reading subtest. The latter result is a common observation in research on dyslexia, which suggests that the tested dyslexic individuals can be characterized as having a phonological decoding problem (e.g., see Facoetti et al., 2006).
The data of the neuropsychological tests did not support the presence of an attentional deficit in our dyslexic participants. First, results on the Trail-making tests were comparable for dyslexic individuals and controls, suggesting no differences in executive functions. Secondly, results on the Bourdon-Wiersma and Balloons-A tests revealed no group differences, suggesting that visual perception, exogenous orienting, and overall vigilance (Bourdon-Wiersma) were unaffected. Unexpectedly, analysis of the Balloons-B test showed faster overall search times for dyslexic individuals as compared to controls (see Fig. 2), which suggests that our dyslexic participants had no difficulty with endogenous orienting (but see Vieira et al., 2013; Siéroff, 2017). This improved performance might be related to increased development of the parvocellular system in dyslexic participants (e.g., see Stein, 2001), but could also be linked to effective interventions to deal with dyslexia, as we examined high-functioning dyslexic individuals.

Figs. 7 and 8. The thick lines in the plots describe the overall trends for both groups. Note that different scales were used for the two groups along the ordinate, as LPS data were much smaller for the dyslexic individuals than for the controls.
In stark contrast to the results of our neuropsychological tests, results of our endogenous version of the Posner cuing task provide support for the presence of attentional deficits and give more insight into the underlying neurophysiological processes. First, major group differences were observed as dyslexic individuals responded slower (see Fig. 3) and missed more targets than controls, which replicates and extends the related findings of Wijers et al. (2005). The question is whether this performance difference can be ascribed to attentional differences or whether other processes (e.g., slower decision making or even motoric processes) are responsible for the observed group difference. Overall, responses were clearly faster for validly cued than for invalidly cued targets for both groups. However, a complex interaction was observed involving the factor Group. Follow-up analyses suggested that the effect of Cue Validity was quite constant for controls, while for dyslexic individuals the effect of Cue validity was dependent on the spatial frequency of the presented targets, and the visual field in which they appeared. The effect of Cue Validity was largest for HSF targets in the left visual field (109 ms) and smallest for HSF targets in the right visual field (23 ms). These observations resemble the findings from Facoetti et al. (2001), and support the mini-neglect hypothesis. Thus, dyslexic individuals seem to have problems with orienting towards the invalidly cued left visual field. No comparable effect was found with LSF stimuli, which may imply that this effect only becomes visible when detailed visual processing is required. However, the mini-neglect hypothesis does not explain the overall observed delay for dyslexic individuals. This observation leaves open the possibility that the overall group difference is related to differences in endogenous orienting.
Our LPS estimates for the lower alpha band (see Fig. 5) indicate that attention-related effects (i.e., a contralateral vs. ipsilateral reduction) in alpha power are present in both groups at around 550 ms. Interestingly, this contralateral reduction started earlier for dyslexic individuals (~350 ms) than for the controls (~430 ms), which suggests that our dyslexic individuals oriented at an earlier moment in time than the controls, which obviously does not support the SAS hypothesis. Support for the presence of an attentional deficit on the basis of the EEG data comes from the LPS estimates for the upper alpha band (see Fig. 6). Although here attention-related changes in alpha power are also present in both groups at around 500 ms, shortly before target onset (940-1000 ms after cue onset) the attention-related effect is only clearly present for the controls but not for the dyslexic individuals. These results again do not confirm the SAS hypothesis of Hari and Renvall (2001; see also the ERL results below). However, the results suggest that dyslexic individuals had problems with sustaining their attention to the relevant side. With the current number of participants it may be difficult to demonstrate that the latter effect on the LPS is actually related to the delay observed on RT. Nevertheless, correlational analyses for the dyslexic individuals revealed that larger LPS estimates, which are indicative of normal orienting, may be related to faster responses (see Fig. 7). Thus, increased response delays for dyslexic participants may be related to a reduced ability to sustain attention. An additional comparison with performance on the neuropsychological tests revealed that for dyslexic individuals, faster responses on the Bourdon-Wiersma test were also related to larger LPS values (see Fig. 8). 
In other words, a larger score on our neurophysiological index for attention in the endogenous orienting task was related to faster selection of the relevant group of dots in the Bourdon-Wiersma task. These correlations suggest that individual differences in response speed for the dyslexic participants are at least partly related to attentional orienting. No such effects were observed for the controls, which suggests that the individual differences on RT in the control group are related to other processes, like decision making and motoric processes. Although the current evidence is preliminary as it is based on a low number of participants, it definitely points to a need for further research that examines the link between lateralized alpha power and behavior in dyslexia.
One could argue that the discrepancy in results between the lower and higher alpha bands in our study is related to a distinction that was already pointed out by Klimesch (1999). Klimesch noticed that the lower alpha band is related to general task demands while the upper alpha band is more related to task-specific semantic information. Nevertheless, the observed effects in the current study are both task-specific, as our method implies that unspecific effects are subtracted out. One possibility that we did not explore is that there are differences in the individual alpha frequencies (IAF) between the groups, which is certainly an issue that should be incorporated in future studies (e.g., see González et al., 2018). For example, a difference in IAF between groups might explain why lateralized effects on power in the lower alpha band actually emerged earlier for dyslexic individuals than for controls in our study.
The observed group difference on the LPS data for the upper alpha band could be related to temporal orienting and a left-lateralized fronto-parietal network (Nobre, 2001). This finding could be investigated in future research by, e.g., employing a paradigm used by Rohenkohl and Nobre (2011), in which the relation between alpha power and temporal orienting was examined. They specifically observed a reduction in alpha power on trials in which targets were expected, but not presented. If such an effect was observed for controls but not for dyslexic individuals, it would provide further support for the hypothesis that dyslexics have specific problems with temporal orienting.
Participants in our study had more problems with identifying HSF than LSF targets, as slower responses and more errors were observed for HSF targets. It may be argued that our HSF targets are more related to parvocellular processing while the LSF targets may be more related to magnocellular processing. The observed performance differences between dyslexics and controls, however, do not suggest that dyslexic individuals encountered more problems with LSF stimuli, thus, our results do not support the idea of a selective impairment of magnocellular pathways. Nevertheless, the spatial frequency of our LSF stimuli may not have been sufficiently low to selectively recruit magnocellular pathways. Thus, our results should not be considered as evidence against the magnocellular hypothesis.
ERLs computed from ERPs related to the onset of the left and right cues demonstrated the presence of both the EDAN and the LDAP above occipital sites (see Fig. 4). Both lateralized components showed time courses and topographies that were quite comparable to those reported in earlier studies on participants without dyslexia (e.g., see Van der Lubbe et al., 2006). In line with the results of Wijers et al. (2005), we did not observe differences in these components between dyslexic participants and controls. Finally, we did not replicate the group difference on anterior negativity that was reported by Wijers et al. (2005), thus, our findings do not support the presence of an anterior interhemispheric asymmetry in dyslexic participants.
In a theoretical article, Vidyasagar (2013) pointed to possible differences in neurophysiological oscillations between dyslexic individuals and controls. There, however, it was argued that dyslexic individuals might show an impairment in low gamma frequencies, partly based on the idea that reading is related to the speed of visual search, which is estimated to be about 20-45 ms per item (≈ 20-50 Hz). Recently, Scheeringa et al. (2016) proposed that gamma oscillations are related to a bottom-up flow of information (feed-forward, or stimulus-induced), while alpha oscillations are related to top-down inhibitory modulations (feedback; see also Klimesch et al., 2007, 2011; Klimesch, 2012). Decreased contralateral alpha as observed at the end of the cue-target interval for controls might therefore imply increased contralateral gamma activity after onset of a subsequently presented stimulus and vice versa; for dyslexic individuals it might mean that the lack of this contralateral decrease is accompanied by less gamma activity after stimulus onset, in line with the idea of Vidyasagar (2013). The experimental setup employed in the current study is not optimal for properly analyzing gamma activity, but future studies employing MEG (magnetoencephalographic) measures may test the proposed changes in gamma in relation to alpha for dyslexic individuals.
Results as observed in the current study might motivate interventions that enhance attentional selection processes. Several recent studies revealed that a specific intervention with action video games (AVG) may improve attentional processes and reduce reading problems in dyslexic children (Franceschini et al., 2013, 2017; Franceschini and Bertoni, 2018; Gori et al., 2016). For example, AVG training for dyslexic children improved phonological decoding (Franceschini et al., 2013, 2017; Franceschini and Bertoni, 2018), visuo-spatial attention (Franceschini et al., 2013, 2017), and crossmodal attention shifting (Franceschini et al., 2017). A recent paper by Łuniewska et al. (2018), however, indicated that the effects of the AVG intervention observed with Italian- and English-speaking children were not found for Polish-speaking children, indicating that the effectiveness of the AVG intervention is still a point of discussion.
In sum, our data revealed the following results. First, no clear reduction in contralateral power in the upper alpha band at the end of the cue-target interval in the Posner task was observed for dyslexic participants, while this reduction was present for controls. These findings suggest that dyslexic participants were not able to sustain attention. Secondly, we observed a contralateral reduction in both groups in the lower alpha band halfway through the cue-target interval, which suggests a comparable and actually even faster initial orienting effect for dyslexic individuals, which does not confirm the SAS hypothesis. Third, dyslexic individuals had problems with identifying invalidly cued HSF targets in the left visual field, which partly supports the mini-neglect hypothesis. Fourth, the delay observed for dyslexic individuals seems related to a diminished orienting effect in the cue-target interval. Fifth, performance on the neuropsychological tests did not point to a problem at the level of executive functioning, while absence search was actually improved for dyslexic individuals, suggesting that these tests are sensitive to different aspects than the employed endogenous orienting task. Together, these observations support the view that developmental dyslexia can be characterized by different attentional deficits, and foremost an inability to sustain spatial attention.
Single snapshot imaging of optical properties
A novel acquisition and processing method that enables single snapshot wide-field imaging of optical properties in the Spatial Frequency Domain (SFD) is described. This method makes use of a Fourier transform performed on a single image and processing in the frequency space to extract two spatial frequency images at once. The performance of the method is compared to the standard six-image SFD acquisition method, assessed on tissue mimicking phantoms and in vivo. Overall, both methods perform similarly in extracting optical properties.
Introduction
There is a significant interest in providing real-time feedback during procedures using near-infrared (NIR) light. This is particularly well illustrated in the field of fluorescence image-guided surgery, with a continuously increasing number of studies and clinical trials [1][2][3]. In parallel to fluorescence image guidance, new techniques not relying on the use of exogenous contrast agents are also reaching the clinic. In particular, NIR endogenous contrast shows promise for providing important information on tissue oxygenation, metabolism and hydration to healthcare practitioners [4][5][6][7][8]. However, only a few methods are currently capable in practice of providing images of optical properties or endogenous chromophores, and none to date in real time.
One method, called Spatial Frequency Domain Imaging (SFDI), has received significant attention because it is capable of measuring optical properties over large fields of view at once [9]. This method relies on structured illumination to extract the tissue modulation transfer function (MTF; the Fourier equivalent of the point spread function) at every location in the image, and therefore to extract optical properties. However, this method currently requires multiple images to extract the MTF at several spatial frequencies and is therefore limited in its ability to provide data in real time. Given the importance of clinical workflow, the value of shortening procedure time and the significant amount of motion during clinical applications, it is of paramount importance to provide guidance in real time, at a rate of at least 1 frame per second (fps), and preferably faster than 10 fps.
In this work, we present a novel acquisition and processing method in the spatial frequency domain allowing the extraction of optical properties using a single image. This method relies on Fourier transform and processing in the frequency space of a single image to extract two spatial frequency images that are used to extract the optical properties using the fast 2-D lookup table SFDI approach. This method is tested on tissue mimicking phantoms and in vivo, and compared to the standard SFDI method employing six images.
Spatial frequency domain imaging
The theory of SFDI has been thoroughly described in the literature [9,10]. Briefly it consists of analyzing the modulation transfer function (MTF) of a turbid medium, the Fourier equivalent to the point spread function, at every location of an image. Note that the MTF analyzed in SFDI is the MTF of the turbid medium under study, and corresponds to the diffuse reflectance as a function of spatial frequency. The system is calibrated to take into account the effect of the MTF of the optics. SFDI makes measurements relative to a calibration phantom of known optical properties and uses a model-based approach, either diffusion theory or Monte Carlo simulations, to relate the measured diffuse reflectance and optical properties. In the case of sub-surface imaging, where optical properties are independently measured at every location in the image, a fast, pre-computed lookup table can be used to recover the optical properties from only two spatial frequency images, typically DC (i.e. 0 mm −1 ) and AC (e.g. 0.2 mm −1 ).
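The fast 2-D lookup-table inversion mentioned above can be sketched as a nearest-neighbour search over a pre-computed grid. The forward model below is a toy placeholder (in practice the table is generated with diffusion theory or Monte Carlo simulations, as the text states), and all names are ours:

```python
import numpy as np

def toy_forward(mu_a, mu_sp):
    """Placeholder forward model mapping optical properties (absorption
    mu_a, reduced scattering mu_s') to diffuse reflectance at the DC and
    AC spatial frequencies. A real table would come from diffusion theory
    or Monte Carlo simulations."""
    rd_dc = mu_sp / (mu_sp + 10.0 * mu_a)
    rd_ac = mu_sp / (mu_sp + 30.0 * mu_a + 0.5)
    return rd_dc, rd_ac

# Pre-compute the 2-D lookup table once, over a grid of optical properties.
mu_a_grid = np.linspace(0.005, 0.08, 60)   # mm^-1
mu_sp_grid = np.linspace(0.4, 1.4, 60)     # mm^-1
MA, MS = np.meshgrid(mu_a_grid, mu_sp_grid)
RD_DC, RD_AC = toy_forward(MA, MS)

def invert(rd_dc, rd_ac):
    """Nearest-neighbour lookup: find the (mu_a, mu_s') grid point whose
    predicted (DC, AC) reflectance pair best matches the measurement."""
    idx = np.argmin((RD_DC - rd_dc) ** 2 + (RD_AC - rd_ac) ** 2)
    return MA.ravel()[idx], MS.ravel()[idx]
```

Because the table is computed once, per-pixel inversion reduces to an array search, which is what makes the two-frequency approach fast.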
There are therefore two time-consuming processes when performing an SFDI measurement: first the acquisition, which consists of gathering images to obtain a DC and an AC image, and second, the processing toward the extraction of the optical properties. While the processing using the pre-computed lookup table can be very fast, the process of extracting the DC and AC images is currently comparatively slower. Indeed, the commonly used approach is to acquire, for each spatial frequency (0 mm −1 and 0.2 mm −1 ), 3 images phase shifted by 120 degrees (φ1, φ2, φ3), hence a total of six images (Fig. 1). An analytical expression can then be used to extract the AC and DC images and process the data [9].
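The analytical demodulation expressions referenced here, standard in the SFDI literature [9], are M_AC = (√2/3)·√[(I1 − I2)² + (I2 − I3)² + (I3 − I1)²] and M_DC = (I1 + I2 + I3)/3 for three images phase shifted by 120 degrees. A minimal numerical sketch:

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Recover the AC modulation amplitude and DC level from three images
    of the same scene illuminated with sinusoidal patterns phase-shifted
    by 0, 120 and 240 degrees (standard three-phase SFDI demodulation)."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic check: a flat scene with DC = 1.0 and AC = 0.3 modulation
# at 0.2 mm^-1, sampled along one line.
x = np.linspace(0.0, 10.0, 256)  # mm
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
imgs = [1.0 + 0.3 * np.sin(2 * np.pi * 0.2 * x + p) for p in phases]
ac, dc = demodulate(*imgs)
```

The demodulation is exact for an ideal sinusoid because the three phase-shifted samples cancel the carrier algebraically at every pixel.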
Single snapshot method
The single snapshot method we propose relies on a line-by-line Fourier transform of a single image and processing in the frequency space to extract the DC and AC images. As shown in Fig. 2, a single high spatial frequency image (Raw image, here illuminated at 0.2 mm −1 spatial frequency) is acquired and processed line by line. The line under processing is analyzed to find the first and last maxima or minima of the image and cropped to an integer number of periods. A Fast Fourier Transform (FFT) is performed on the cropped line, and the DC and AC components of the full spectrum are separated at a cutoff frequency (fc) determined automatically. To choose the cutoff frequency, the full spectrum is smoothed, the highest AC frequency detected (here 0.2 mm −1 ), and the closest local minimum located. The full spectrum is then split in two spectra, one DC and one AC. Finally, the DC and AC spectra are processed separately via Inverse Fourier Transform (IFFT), creating a line on the DC and AC images, respectively. This process is repeated for each line in the image.

Fig. 2. The spectrum is obtained via Fast Fourier Transform, then the DC and AC components are separated at a cutoff frequency fc to create two spectra. The final DC and AC images are formed via Inverse Fast Fourier Transform. On the spectra plots in the center, the y axis represents intensity, I (A.U.), and the x axis spatial frequency, fx (mm −1 ).
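A simplified sketch of the per-line processing follows. It assumes the pattern already spans an integer number of periods, and replaces the cropping and automatic cutoff search described above with a fixed cutoff at half the illumination frequency; function names and defaults are ours:

```python
import numpy as np

def snapshot_demodulate(img, fx, pixel_size, guard=0.5):
    """Line-by-line Fourier split of a single structured-light image.

    img: 2-D array illuminated at spatial frequency fx (mm^-1);
    pixel_size: mm per pixel along the modulation axis; guard: fraction
    of fx used as the cutoff between the DC lobe and the AC carrier.
    Returns the demodulated DC image and AC amplitude image.
    """
    n = img.shape[1]
    freqs = np.fft.fftfreq(n, d=pixel_size)  # mm^-1 per column bin
    fc = guard * fx
    dc_img = np.empty_like(img, dtype=float)
    ac_img = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        spec = np.fft.fft(img[r])
        # Low-pass part of the spectrum -> DC image line.
        dc_spec = np.where(np.abs(freqs) <= fc, spec, 0.0)
        dc_img[r] = np.real(np.fft.ifft(dc_spec))
        # One-sided high-pass part -> analytic signal whose magnitude
        # is the local AC modulation amplitude (Hilbert-style envelope).
        ac_spec = np.where(freqs > fc, spec, 0.0)
        ac_img[r] = 2.0 * np.abs(np.fft.ifft(ac_spec))
    return dc_img, ac_img
```

With both the DC and AC images recovered from one frame, the optical properties can then be obtained with the same two-frequency lookup used by standard SFDI.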
Experiments
Three different experiments were performed to compare the results obtained with the novel single snapshot method and the currently used six image standard method. For all experiments, optical properties were measured at two spatial frequencies (0 and 0.2 mm −1 ) and three phases (0, 120 and 240 degrees) at a wavelength of 670 nm using our SFDI system [11]. Images were processed using all six images (Standard) or a single high frequency image (Snapshot) and results compared. The following experiments were performed:

- Homogeneous phantom measurements: Seven silicone based tissue mimicking phantoms were made, each with a unique combination of optical properties (μa, μs') [12]. Optical properties were varied from 0.006 to 0.077 mm −1 in absorption and 0.5 to 1.3 mm −1 in reduced scattering. Titanium dioxide (TiO2) was used as a scattering agent and India ink as an absorbing agent. Each phantom measures 14 x 14 x 2.5 cm.
-Step function phantoms measurements: To investigate the potential degradation of the spatial information when using the snapshot method, two step function silicone based tissue mimicking phantoms were fabricated. The first phantom was made with a step in absorption from 0.0185 to 0.022 mm −1 with reduced scattering being constant at 0.91 mm −1 . The second phantom was made with a step in reduced scattering from 0.085 to 1.08 mm −1 with absorption being constant at 0.019 mm −1 . Each phantom measures 14 x 14 x 2.5 cm.
-In vivo measurements: A hand was imaged with our SFDI system.
Homogeneous phantoms
Shown in Fig. 3 are the results from the homogeneous phantom measurements. Both absorption and reduced scattering maps appear identical, with the heterogeneous features retained by the snapshot method. The plots compare the values recovered using one method against the other, with the dashed line having a slope of 1.
Step function phantoms
Shown in Fig. 4 are the results from the step function phantoms measurements. Both absorption variation and reduced scattering variation maps appear identical, with the step function being reconstructed by both methods. The plots represent the comparison of the values recovered using one method against the other across the phantoms. Both methods provide results that are similar in absorption variation. The step transition in reduced scattering is slower using the snapshot method compared with the standard method, which is expected from analyzing the image line by line and processing with a single phase.
In vivo experiments
Shown in Fig. 5 are the results from the in vivo measurements. Both absorption and reduced scattering maps appear identical, with some degradation of the image quality at the boundaries when using the snapshot method compared to the standard method. The plots represent the comparison of the values recovered using one method against the other across the sample (white lines). Both methods provide results that are similar in absorption and reduced scattering highlighting the fact that despite image quality reduction at the boundaries, the optical properties are recovered correctly over the image when using the snapshot method.
Discussion
In this work we described and validated a novel single snapshot optical properties imaging method working in the spatial frequency domain. It consists of the acquisition and processing of a single high frequency image to extract both DC and AC images. These images are then used to recover the optical properties, as is done in Spatial Frequency Domain Imaging (SFDI). One of the main advantages of this method is its capacity to image rapidly (one image only) over virtually any size field of view, which makes it particularly interesting in the context of image guidance. However, our method does not come without limitations. As evidenced in the results, the image quality suffers from the processing of a single-phase sine wave. This is caused mainly by the fast change in amplitude modulation of the sine wave, as occurs at the boundaries of the sample. We are currently working on improving the image quality by using a hybrid method that, depending on the rate of change of the amplitude modulation (i.e. the AC component), either processes in the Fourier domain or uses the maximum-to-minimum difference of the sine wave to extract it. Nonetheless, it is important to note that although the image quality may be degraded, the optical properties are correctly recovered over the sample between the boundaries.
Another way to improve image quality is to use a higher spatial frequency. Spatial frequencies of 0.3 to 0.5 mm −1 have been successfully tested and shown to significantly improve image quality. However, since the scope of this article is to compare against the standard method, which uses a frequency of 0.2 mm −1 , we restricted our study to this particular spatial frequency.
The capacity to make measurements in real time is of paramount importance for testing endogenous chromophore imaging in the clinic for image guidance during procedures. Not only is it necessary from a workflow perspective, with healthcare professionals having to integrate a novel device into their environment, but also to avoid any blurring in the acquired images from either voluntary or involuntary subject motion, and to guarantee co-registration, if necessary, between images [13]. Currently, the snapshot approach shortens the acquisition time by approximately 6-fold, i.e. into the range of 50 to 100 ms depending on the wavelength and the medium properties. Processing to obtain the DC and AC images takes approximately 2 seconds, making the snapshot method twice as slow as the standard method. However, the processing has not been optimized at this point and we anticipate a significant improvement in processing time.
To be fully integrated and tested in a clinically relevant system, this method must incorporate a profile correction method. This feature is of paramount importance since virtually any non-contact imaging method is dependent upon both sample distance and shape. Previous work has been performed to take into account the surface geometry [14], and a novel approach is being developed to extract the phase of the sine wave from a single snapshot image.
Conclusion
The method presented in this article allows for optical properties imaging using a single snapshot image. The method has been validated using homogeneous and step function tissue mimicking phantoms, as well as in vivo. Overall the single snapshot method performs similarly to the standard six image method. This novel snapshot method allows for single image acquisition and processing of optical properties and thereby represents a significant advance toward imaging optical properties and endogenous chromophores in real time. | 2018-04-03T05:08:41.732Z | 2013-12-01T00:00:00.000 | {
"year": 2013,
"sha1": "20580ffcc3fd739e5b56a0523b38effa24a6424e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.4.002938",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "68a90789d65ddfd5d6430a3bb4e1a3396c35b25f",
"s2fieldsofstudy": [
"Engineering",
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
204801241 | pes2o/s2orc | v3-fos-license | Gastroscopic Panoramic View: Application to Automatic Polyps Detection under Gastroscopy
Endoscopic diagnosis is an important means of gastric polyp detection. In this paper, a panoramic image of gastroscopy is developed, which can display the inner surface of the stomach intuitively and comprehensively. Moreover, the proposed automatic detection solution can help doctors locate polyps automatically and reduce missed diagnoses. The main contributions of this paper are as follows. First, a gastroscopic panorama reconstruction method is developed; the reconstruction does not require additional hardware devices and properly solves the problems of texture dislocation and illumination imbalance. Second, an end-to-end multi-object detector for the gastroscopic panorama is trained within a deep learning framework. Compared with traditional solutions, the automatic polyp detection system can locate all polyps on the inner wall of the stomach in real time and assist doctors in finding lesions. Third, the system was evaluated at the Affiliated Hospital of Zhejiang University. The results show that the average error of the panorama is less than 2 mm, the accuracy of polyp detection is 95%, and the recall rate is 99%. In addition, the research roadmap of this paper has guiding significance for endoscopy-assisted detection in other human soft cavities.
Introduction
Gastroscopy plays a major clinical role in the diagnosis of gastric diseases. The detection and diagnosis of gastric polyps by gastroscopic intervention is the most routine solution [1]. However, conventional endoscopic diagnosis of polyps is prone to misdiagnosis, for the following reasons. First of all, as a soft cavity, the stomach deforms easily. Additionally, the gastric inner wall has many folds, so gastric polyps are not visually obvious; during the examination, doctors need to move the endoscope camera lens back and forth to find polyps. What's more, doctors control the endoscope from outside the body, and because of the narrow entrance and narrow field of view, it is difficult to manipulate the lens flexibly enough to obtain a detailed and comprehensive view of the gastric inner wall [2]. Last but not least, follow-up examinations for polyp detection often rely on ink injected during the initial procedure (see Figure 1). In this case, the ink-marked area may fade over time or be dissolved by the gastric mucosa, causing the previously located polyps to be missed [3]. For these reasons, improving the detection and identification of polyp lesions during gastroscopic examination has clear clinical value for reducing the rate of misdiagnosis.
Many methods have been proposed to assist gastroenterologists in detecting gastric polyps and reducing misdiagnosis with modern technology [4]. For instance, some researchers have focused on combining computer vision with conventional endoscopic diagnosis to detect gastric polyps. A typical example is [5], which estimates the confidence distribution of polyps based on the polar matrix and covariance matrix, using the covariance matrix to pre-screen possible lesions in the endoscopic image and assist doctors in diagnosis. However, the experimental results show that the detection accuracy depends on the range of viewing angles as the camera moves, which limits its application. Besides, Gao et al. [6] designed a non-invasive biopsy marking system for gastroscopic examination. Their technique constructs a virtual static three-dimensional model of the gastric inner wall by means of CT; the model can assist in intraoperative navigation and biopsy re-targeting, but the navigation accuracy is imperfect because the static preoperative model cannot accurately capture the flexible deformation of the stomach. To this end, some researchers developed a real-time three-dimensional reconstruction method for soft cavities based on endoscopic RGB images, which solved the inability of preoperative models to dynamically calibrate the deformation of soft organs [7]. However, because texture features are not easily extracted from endoscopic images, how to build stable real-time models remains an open question. In addition to three-dimensional models for the auxiliary examination of lesions, direct diagnosis in two-dimensional images is also an important research direction. For instance, [8] proposed a new real-time method of pathological localization, in which they regard stomach peristalsis as
a regional affine transformation. However, in soft cavity organs the regional affine hypothesis generally does not hold. Furthermore, taking advantage of its non-invasive property, the probe-based confocal laser endoscope (PCLE) was developed for quick real-time detection and positioning of polyps. However, this technology relies on an extra hardware solution and cannot revisit and review the lesions [9]. Another research direction of lesion examination uses panorama technology to unfold the inner wall of soft organs. With the unfolded panorama, doctors can quickly and comprehensively diagnose the inner wall of the target organ without repeatedly re-checking and changing the viewing angle; this avoids misdiagnosis caused by occlusion and other issues while improving diagnostic effectiveness. However, panorama technology places high demands on image stitching and fusion: shadows, blur, or even dislocation can appear between spliced images, and regional distortion of images in the unfolded panorama can also affect the diagnosis [10][11]. Based on the above research, this paper proposes an automatic diagnosis system for gastric polyps based on a panoramic image of the gastric inner wall. Specifically, the main innovative work of this paper is as follows. First, we build a panorama model of the gastric inner wall. Compared with previous work, we do not rely on hardware to estimate the camera position, making the method easier to operate. Furthermore, we optimize the method by means of optical consistency, solving problems such as registration error and blur caused by image stitching. Second, we develop an end-to-end panoramic multi-target detection network for the gastroscopic procedure. Compared with conventional deep learning target detection frameworks, our network takes the panorama image as input, supports multi-target detection of polyps, and avoids the distortion caused by stitching,
which may mislead doctors' diagnosis. Third, we conducted clinical trials of the entire system at the Affiliated Hospital of Zhejiang University. The experimental results show that the model error of our system is less than 2 mm and the recall of polyp detection is close to 100%. Our developed system can assist doctors in diagnosis, helping to reduce the rate of misdiagnosis, improve diagnostic efficiency and relieve the storage pressure on server data. Finally, the research method of this paper has, in principle, guiding significance for the auxiliary diagnosis of other human soft organs. In [23], a promising CNN-based polyp detection method was also proposed. Compared with [23], the main significance of our method is that our CNN framework takes the constructed gastric panoramic data as input. Moreover, our framework is designed for multi-target detection, which means it can detect all of a patient's polyps from a single image.
The algorithm flow chart is shown in Figure 2. We take the original image sequence from the endoscope as the input of the algorithm. After image registration and optimized texture fusion based on optical consistency theory, we obtain a more comprehensive view of the gastric inner wall. A proposed deep learning framework then operates on the generated panorama data and achieves automatic detection of gastric polyps.
Compared with conventional computer vision problems, reconstructing a panoramic image of the gastric inner wall inside a human soft cavity poses many challenges. These can be summarized as follows. First, it is hard to extract and match features: an endoscope usually has a fisheye lens, so the captured images are severely distorted (see Figure 3). Furthermore, in soft cavities such as the stomach, the inner walls are almost entirely covered with mucosa, so the captured RGB images are prone to reflective spots of varying degree (see Figure 1). All these problems affect the stability of feature descriptors. Second, there are shadows and dislocations at the seams caused by panorama stitching, so texture fusion is necessary. Besides, conventional panorama technology generally relies on texture projection onto, and unfolding from, a cube or sphere model [12]. However, a soft cavity (e.g. the stomach) differs greatly from a standard cube or sphere; if re-projected directly, it would suffer huge distortion. Therefore, it is necessary to develop a new texture projection model.
Image Registration
In previous work, researchers used an electromagnetic tracking device to estimate the real-time coordinates of the endoscope in the stomach to solve the matching problem [7]. However, that method cannot be applied to traditional endoscopic examination without hardware modifications. In this paper, to suit the texture features of soft organs, we adopt the Homographic Patch Feature Transform (HPFT), based on the homography hypothesis [13]. Published data show that, compared with other patch feature transforms of traditional computer vision, HPFT is more effective in scenes of soft organs such as the stomach, whose main motion feature is regional peristalsis.
First of all, we give an overview of HPFT. HPFT uses a patch feature transform such as SIFT to detect initial feature points. Local image blocks used for registration are assumed to satisfy the homographic principle of computer vision, as

ρ m′ = H m, (1)

where m′ and m represent an image local feature point pair to be matched, H represents the homographic relationship expressed as a 3x3 matrix, and ρ represents the scaling scale. On the basis of the homographic principle, the image is divided into sub-blocks around the initial matching feature points, and the similarity of corresponding image sub-blocks is verified by means of KL similarity. Image sub-blocks that do not satisfy the homographic hypothesis are iteratively subdivided until the subdivided sub-blocks do satisfy it.
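To make the homography test of formula (1) concrete, here is a minimal numpy sketch. The function names and the pixel tolerance are our own illustrative choices, not part of the HPFT implementation; the sketch only shows how a candidate patch of correspondences is verified against a homography H.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography, i.e. rho * m' = H m."""
    m = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mp = m @ H.T
    return mp[:, :2] / mp[:, 2:3]                  # divide out the scale rho

def patch_is_homographic(H, src, dst, tol=1.0):
    """A matched patch is accepted if every correspondence obeys H to
    within tol pixels; in HPFT, failing patches are subdivided and
    re-tested on the smaller sub-blocks."""
    err = np.linalg.norm(apply_homography(H, src) - dst, axis=1)
    return bool(np.all(err < tol))
```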
Our experimental results show that HPFT yields better-distributed matching feature points than patch feature transforms of conventional computer vision such as SIFT and FAST (see Figure 4).
The other problem during image registration is excluding mismatched image feature point pairs. Conventional methods adopt epipolar-line constraints and similar techniques to filter the matching results. However, the accuracy of the epipolar line depends heavily on the camera view, so the filtering result is not satisfactory. In this paper, we regard the entire pass of the gastroscope, from entering the stomach to leaving it, as a closed chain of image registrations, and use closed-chain optimization to filter out the mismatched points. (Figure 4 caption: the registration methods were applied to original gastroscopic and noisy images, with the Gaussian noise scalar varying from 0.01 to 0.5. For each registration method, the initial detected feature number was 200, so the ideal number of matched features was also 200. The color curves represent the number of detected features whose FB error is smaller than the corresponding FB error threshold (unit: pixels). The figure is quoted from [13] and [24].)
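The closed-chain idea can be illustrated with a toy sketch: composing the estimated pairwise transforms around the loop (frame 1 to 2, ..., frame n back to 1) should return the identity, and a large residual flags loops that contain mismatched feature pairs. The 3x3 homogeneous matrices and the Frobenius residual below are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def chain_residual(transforms):
    """Compose the pairwise registrations around the closed loop.
    With perfect registration the product is the identity; the
    Frobenius residual measures how badly the chain fails to close."""
    acc = np.eye(3)
    for T in transforms:
        acc = T @ acc
    acc = acc / acc[2, 2]          # remove the projective scale
    return float(np.linalg.norm(acc - np.eye(3)))
```

A consistent loop of translations gives a near-zero residual; corrupting one transform makes the residual jump, so that chain can be flagged for re-filtering.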
Optical Consistency Texture Fusion.
After image registration, the major problem in building the panorama is handling the seams between the borders of stitched image blocks when the registration result is used directly. The stitching problem generally has two causes. First, the registration obtained by HPFT and closed-chain optimization inevitably contains some mismatched point pairs, which introduce errors into the computed inter-image transformation matrices and ultimately show up in the seaming results. Second, the gastric inner wall is covered with mucosa, so images captured from different views exhibit different degrees of glisten or even direct reflection; even images of the same physiological location may differ greatly in pixel values.
Based on the above facts, we develop an optimized method of texture characterization based on optical consistency. This method directly takes the minimization of pixel differences across stitched seams as the optimization target, so as to obtain a smooth panoramic image of the gastric inner wall. The method is elaborated below.
Assume the sequence of images to be matched is I = {I 1 , I 2 , I 3 , …, I n }. The stitching relationship obtained by image registration can be represented by the transformations T i−1,i between consecutive images. These relations can be organized into an optimized objective function (4), which can be seen as a function of the pixel values c and the transformation matrices T. In this paper, to simplify the process, we regard c as the gray-level average of the corresponding pixel in the image. Besides, ω represents the weight between the pixel differences, and β represents an optimization regularization term used to avoid over-fitting.
We use an iterative method to optimize formula (4). First, assuming that c is unchanged, each image transformation can be regarded as a four-degree-of-freedom vector T = (r 1 , r 2 , t 1 , t 2 ), where r 1 , r 2 carry the rotation information and t 1 , t 2 the translation information. Under this assumption, formula (4) can be seen as a linear optimization equation, which can then be solved with KL or Gauss-Newton methods. When T is fixed, formula (4) is equivalent to solving for the average of the pixel values spliced to each coordinate.
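When T is held fixed, the last step above reduces to averaging all pixel values mapped to the same panorama coordinate. A 1-D toy sketch of that averaging step (integer offsets and the function name are our simplifications):

```python
import numpy as np

def fuse_overlap(strips, offsets, width):
    """With the transforms T held fixed, minimising the seam term of the
    objective reduces to averaging every pixel value mapped to the same
    panorama coordinate.  Each 1-D strip is placed at an integer offset
    and values are averaged wherever strips overlap."""
    total = np.zeros(width)
    count = np.zeros(width)
    for strip, off in zip(strips, offsets):
        total[off:off + strip.size] += strip
        count[off:off + strip.size] += 1
    return total / np.maximum(count, 1)
```

Two overlapping strips of constant brightness 1 and 3 blend to brightness 2 in their overlap, which is exactly the smoothing the seam term asks for.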
For the texture projection model, we adopt a double-cube projection model to project and unfold the texture [18,19]. Compared with conventional cube or sphere projection methods, the double-cube projection model better handles the deformation that arises after unfolding the model.
Automatic Detection for polyps
After building the panoramic image of the gastric inner wall, we develop an automatic polyp detection technique to assist doctors in diagnosis. In the field of computer vision, this type of problem belongs to object detection. Currently, mainstream object detection technology adopts deep learning frameworks to regress and fit labeled target regions in images, such as Faster R-CNN [20] and SSD [21]. In the panorama setting, the problems we need to solve are: first, how to collect a large amount of manually labeled panorama data; and second, traditional deep-learning object detectors mostly take natural images as input, and to date there is no publicly published technical solution based on panoramic images.
Network Model
First of all, we need to develop the network model for gastroscopic panorama detection. In the deep learning literature, the classic object detection pipeline works as follows: it first predicts potential local bounding boxes on the image, then determines the category prediction confidence of each bounding box as in image classification, and finally fine-tunes the results through a series of post-processing optimizations. Representative techniques of this kind are R-CNN and Faster R-CNN [20], the most widely used methods in object detection. However, their overly complicated cascade structure and slow convergence are known problems. In this paper, we adopt the SSD [21] network framework to achieve target detection on the panorama. Compared with R-CNN and related methods, SSD converts bounding-box detection, category prediction and bounding-box optimization into parallel CNN convolutions. Compared with Faster R-CNN and related methods, SSD has the advantages of faster computation and more accurate target localization [22].
During panorama data processing, because polyps are relatively small in the panorama image, most of the bounding boxes predicted by SSD are negative samples. Compared with predicting directly on the original images, such a large number of negative samples unbalances the training set and makes training hard to converge. To address this, we propose an optimized SSD called selective SSD. In selective SSD, we sort the negative samples detected in each iteration by the original SSD loss function and only select the negative samples with high confidence for training, directly filtering out the low-confidence ones. This method solves the network convergence problem and improves the accuracy to a certain degree (see Table 4).
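The negative-sample filtering in selective SSD amounts to hard-negative selection. A minimal sketch follows; the 3:1 negative-to-positive ratio is a common SSD-style default that we assume here for illustration, not a value quoted from the paper.

```python
import numpy as np

def select_negatives(neg_conf, n_pos, ratio=3):
    """Sort negative boxes by their (wrongly) high confidence and keep
    only the hardest ratio * n_pos of them; easy low-confidence negatives
    that would swamp the loss are discarded."""
    order = np.argsort(neg_conf)[::-1]   # most confident negatives first
    return order[:ratio * n_pos]
```

Only the indices returned here contribute to the classification loss, so the positive/negative balance of each batch stays bounded regardless of how many background boxes the panorama produces.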
Sample Collection
In addition to the network model, obtaining panorama training data is also key to polyp detection. In this paper, based on the gastroscopic examination system of the Zhejiang University Affiliated Hospital, we recruited clinical experts to mark the polyps in the original gastric images. A multiple-check protocol was adopted: only polyps found by two or more experts are considered true positive samples. We then use the proposed panorama technology to construct the panoramic image, and the ground truth on the panorama is marked by clinical experts. During training, apart from the clinical experts' annotations, we adopt data augmentation to obtain more training samples. The augmentation method is as follows: for any originally captured panorama, we apply Gaussian smoothing locally at different scales.
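The Gaussian-smoothing augmentation can be sketched as follows. The sigma values are illustrative choices of ours (the paper only states that different scales are used), and a separable numpy blur is used to keep the sketch dependency-free.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 1-D Gaussian kernel truncated at about 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def augment(panorama, sigmas=(0.5, 1.0, 2.0)):
    """Create extra training panoramas by separable Gaussian smoothing
    at several scales (blur the rows first, then the columns)."""
    out = []
    for s in sigmas:
        k = gaussian_kernel(s)
        rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, panorama)
        both = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)
        out.append(both)
    return out
```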
Experimental evaluation and results
To verify the accuracy of the panorama-based polyp detection method proposed in this paper, we embedded the developed system into the gastroscopy system of the gastroenterology department at the Zhejiang University Affiliated Hospital. The endoscope device is a GIF-QX-420 from Olympus, Japan. The frame rate of the images collected by this endoscope is 30 frames per second, and the image resolution is 560x480. All 43 recruited volunteers have a history of moderate or severe gastrointestinal disease. Patients provided written informed consent, and the collected clinical data can be used for evaluation and follow-up visits. No adverse event occurred during the experiment. Volunteer information is described in Table 1.
Gastroenterologists can construct the panorama image without extra interoperations. During the examination, the system extracts HPFT feature descriptors in real time from the gastric inner wall texture collected by the doctor, then estimates the real-time position of the camera. The panorama image is gradually constructed and unfolded, and the polyps in the panorama are detected in real time, as shown in Figure 5.
Panorama Assessment
First we evaluate the performance of the panoramic result. Our method is simpler to deploy than those that construct the gastroscopic panorama image with an electromagnetically guided tracking device [19]. This paper adopts the proposed texture metric error [16] to estimate the accuracy of the panorama, and the results are demonstrated in Table 2. Generally speaking, the average error of the results is 0.33, better than the published results [19]. TABLE 2. Quantitative Evaluation of Panorama Results (43 volunteers). We evaluate the error score of Liu's method [19] and of ours. The overall texture error of ours is 0.33, which is much better than [19]. Moreover, we also evaluate the results on the angularis, antrum, and stomach body separately, and our method is better in each region.
Polyp Detection Evaluation
To evaluate the accuracy of polyp detection, we compare the regions marked by clinical doctors with those produced by automatic polyp detection (see Table 3). We consider detections with IoU > 0.5 as positive samples, where IoU denotes the intersection over union between the area the clinical doctors mark and the detection area our method generates.
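The IoU criterion used above is computed as follows for axis-aligned boxes given as (x1, y1, x2, y2). This is the standard formulation, not code from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as a true positive when its IoU with an expert-annotated box exceeds 0.5.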
Finally, the comparison shows that, with a recall close to 100%, the accuracy is 95%, which meets the requirements of clinical auxiliary diagnosis. Furthermore, since we developed the selective SSD object detection framework for dealing with panoramic images, we also evaluate the performance of selective SSD against other object detection methods. Table 4 shows that selective SSD outperforms the other published methods, with a significant improvement over the original SSD, which indicates that our method is effective.
Discussion and Conclusion
Gastroscopy is one of the most routine procedures for current gastric diseases, so improving diagnostic efficiency and reducing the risk of misdiagnosis is clinically important. Compared with published work, we first put forward an end-to-end full-view automatic detection technology. Specifically, we study a method that assists doctors in understanding the whole picture of the human stomach by means of a panorama obtained without extra devices. Then, on the basis of the panoramic image, we propose a polyp auxiliary diagnosis method based on a deep learning framework, which can improve the efficiency of doctors' diagnosis. The method is, in principle, a good reference for endoscopic intervention diagnosis in other human soft organs.
FIGURE 1: Polyp detection in the traditional procedure. The lesions are determined by ink injection, but the ink may fade away before the second examination. Reflective areas can be found on the captured images.
FIGURE 2: The pipeline of our method. Original endoscopic images are used to generate the panoramic result. Then polyps are detected with our deep learning framework.
FIGURE 3: (a) is originally captured by the endoscope; the chessboard is badly distorted (the red line can be taken as a reference). (b) is the calibrated result.
TABLE 1. Volunteer Information
TABLE 3. Polyp Detection Compared with Clinical Diagnosis. We evaluate the recall percentage and accuracy for different physiological locations.
TABLE 4 .
Different Deep Learning Framework for Polyp Detection. | 2019-10-19T04:07:00.000Z | 2019-10-19T00:00:00.000 | {
"year": 2019,
"sha1": "af35dcd6acefd438cb76ace7d43054fb678b2929",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2019/4393124",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "af35dcd6acefd438cb76ace7d43054fb678b2929",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science",
"Mathematics",
"Medicine"
]
} |
55161975 | pes2o/s2orc | v3-fos-license | Long-distance singularities in multi-leg scattering amplitudes
We report on the recent completion of the three-loop calculation of the soft anomalous dimension in massless gauge-theory scattering amplitudes. This brings the state-of-the-art knowledge of long-distance singularities in multi-leg QCD amplitudes with any number of massless particles to three loops. The result displays some novel features: this is the first time non-dipole corrections appear, which directly correlate the colour and kinematic degrees of freedom of four coloured partons. We find that non-dipole corrections appear at three loops also for three coloured partons, but these are independent of the kinematics. The final result is remarkably simple when expressed in terms of single-valued harmonic polylogarithms, and it satisfies several non-trivial constraints. In particular, it is consistent with the high-energy limit behaviour and it satisfies the expected factorization properties in two-particle collinear limits.
Introduction
Long-distance singularities are a central feature of gauge-theory scattering amplitudes, and a detailed understanding of their structure is key to precision collider physics. Owing to their factorization properties, the singularities are largely independent of the hard scattering process. Furthermore, they exponentiate and can therefore be compactly summarised by the so-called soft anomalous dimension.
Until recently, the soft anomalous dimension for the scattering of any number of massless coloured particles was known to two loops. To this order it admits a remarkably simple structure consisting of a sum over colour dipoles formed by any pair of external legs [8, 13, 15-17]. In this talk we report on the recent computation of the three-loop corrections to the soft anomalous dimension [1]. The calculation we performed confirmed the expectation [15-21] that three-loop corrections depart from the above dipole structure, and correlate the kinematic and colour degrees of freedom of up to four partons. We find that a non-vanishing correction appears already for three coloured partons, but it is a constant, involving no kinematic dependence. The new three-loop result also contributes to understanding the factorization properties of scattering amplitudes in the collinear and high-energy limits.
Factorization at fixed-angles and the soft anomalous dimension
We are interested in the infrared (IR) structure of a scattering amplitude for n massless partons. Given external legs with momenta p i , for i = 1..n, where p i 2 = 0, we consider the kinematic limit of fixed-angle scattering, where all Lorentz invariants p i · p j are taken large. Infrared singularities (both soft and collinear) can then be factorized as follows:

M n ({p i }, μ, α s ) = Z n ({p i }, μ, α s ) H n ({p i }, μ), (2.1)

where μ is a factorization scale, α s ≡ α s (μ 2 ) is the renormalised D-dimensional running coupling, H n is a finite hard scattering function, and Z n is an operator in colour space collecting all IR singularities as poles in the dimensional regularization parameter ε = (4 − D)/2. These singularities originate in loop momenta becoming either soft or collinear to any of the scattered partons (see e.g. Ref. [6]). Collinear singularities depend on the spin and momentum of that particle, and decouple from the rest of the process; their contribution is known to three loops [24,45], and will not be discussed here in detail. In contrast, soft singularities are independent of the spin, but they depend on the relative directions of motion and the colour degrees of freedom of all scattered particles. Hence, soft singularities are sensitive to the colour flow in the entire process. Nevertheless, they are significantly simpler than finite contributions to the amplitude, opening a unique possibility to explore multi-leg gauge-theory amplitudes at the multi-loop level. The simplification of the soft limit is apparent already at the level of the Feynman rules: emission of a soft gluon with momentum k off an energetic particle with momentum p i ≫ k, taken at leading order in the soft gluon momentum, amounts to a factor of g s T i a p i μ /(p i · k), where we replaced the momentum of the emitting particle by its four-velocity, emphasising the rescaling symmetry of this Feynman rule. This symmetry is responsible for the main features of soft singularities.
The soft approximation can be equivalently formulated in configuration space, as emission from a Wilson line following the classical trajectory of the particle with momentum p_i and carrying the same colour charge, Φ_{β_i} ≡ P exp( i g_s ∫_0^∞ dt β_i · A^a(t β_i) T_i^a ), where P orders the colour matrices along the path. To avoid collinear singularities we perform our calculation with non-lightlike velocities, β_i^2 ≠ 0. Considering fixed-angle scattering of n legs, soft singularities are fully captured by the following Wilson-line correlator, the so-called soft function, S(γ_ij, α_s, ε) ≡ ⟨0| Φ_{β_1} Φ_{β_2} ··· Φ_{β_n} |0⟩, where the kinematic dependence appears through cusp angles, γ_ij ≡ 2 β_i · β_j / √(β_i^2 β_j^2), which are invariant under velocity rescaling.
The factor Z_n containing all soft and collinear singularities in Eq. (2.1) can be written as a solution of a renormalization-group equation, Z_n = P exp{ −(1/2) ∫_0^{µ^2} (dλ^2/λ^2) Γ_n({p_i}, λ) }, where Γ_n is the so-called soft anomalous dimension matrix for multi-leg scattering, and P stands for path-ordering of the matrices according to the order of the scales λ. Γ_n itself is finite, and IR singularities are generated in Eq. (2.4) through the dependence of Γ_n on the D-dimensional coupling, which is integrated over the scale down to zero momentum. Factorization and the rescaling symmetry of the Wilson-line velocities [15-17] put stringent constraints on the functional form of Γ_n, which through three loops must take the form Γ_n({p_i}, λ) = Γ_n^{dip.}({p_i}, λ) + ∆_n({ρ_ijkl}), with the dipole part Γ_n^{dip.} = −(γ̂_K(α_s)/2) Σ_{(i,j)} log(−s_ij/λ^2) T_i · T_j + Σ_i γ_{J_i}(α_s), where −s_ij = 2 p_i · p_j e^{−iπλ_ij}, with λ_ij = 1 if partons i and j both belong to either the initial or the final state and λ_ij = 0 otherwise; T_i is the colour generator in the representation of parton i, acting on the colour indices of the amplitude as described in Ref. [7]; γ_K(α_s) is the universal cusp anomalous dimension [2,43,44], with the quadratic Casimir of the appropriate representation scaled out (footnote 1); γ_{J_i} are the anomalous dimensions of the fields associated with external particles, which govern hard collinear singularities, currently known to three loops [24,45]. Equation (2.6) is known as the dipole formula, and captures the entirety of the soft anomalous dimension up to two loops. Finally, ∆_n({ρ_ijkl}) represents the correction going beyond the dipole formula, which starts at three loops, and depends on the kinematics via conformally-invariant cross ratios (CICRs), ρ_ijkl ≡ (−s_ij)(−s_kl) / ((−s_ik)(−s_jl)), which are invariant under a rescaling of any of the momenta. In the following we report on the calculation of the three-loop function ∆_n^(3)({ρ_ijkl}). With the exception of hard collinear singularities (γ_{J_i}(α_s) in Eq. (2.6)), one may compute the soft anomalous dimension Γ_n({p_i}, λ) to any order through the renormalization of the soft function in Eq.
(2.3): in dimensional regularization, loop corrections to the soft function are scaleless integrals, which vanish in the absence of a cutoff. Hence, one may directly infer the infrared poles in ε from the ultraviolet ones. This calculation strategy has marked advantages over the alternative of extracting the infrared poles from an amplitude, since one never needs to evaluate finite corrections, and one may make direct use of the known iterative structure of renormalization along with the exponentiation properties of Wilson line correlators [35][36][37][38][39][40][41][42].
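As a quick numerical illustration (ours, not part of the original talk), the rescaling invariance of the cross ratios ρ_ijkl can be checked directly: each velocity enters the numerator and the denominator exactly once, so any rescaling cancels. The Minkowski signature convention and the specific random velocities below are illustrative assumptions.

```python
import random

def dot(a, b):
    # Minkowski dot product with signature (+,-,-,-)
    return a[0] * b[0] - sum(x * y for x, y in zip(a[1:], b[1:]))

def cross_ratio(b, i, j, k, l):
    # rho_ijkl = (b_i.b_j)(b_k.b_l) / ((b_i.b_k)(b_j.b_l))
    return (dot(b[i], b[j]) * dot(b[k], b[l])) / (dot(b[i], b[k]) * dot(b[j], b[l]))

random.seed(0)
# four random velocities with dominant time components (illustrative values only)
betas = [[2.0 + random.random()] + [random.uniform(-1, 1) for _ in range(3)]
         for _ in range(4)]

rho = cross_ratio(betas, 0, 1, 2, 3)
# rescale one velocity: beta_0 -> 7 * beta_0
betas[0] = [7.0 * c for c in betas[0]]
rho_rescaled = cross_ratio(betas, 0, 1, 2, 3)

print(abs(rho - rho_rescaled) < 1e-12)  # True: the cross ratio is unchanged
```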
We note that ∆_n^(3) is independent of the details of the underlying theory and completely determined by soft gluon interactions. In particular, this implies that ∆_n^(3) is the same in QCD and in N = 4 Super Yang-Mills, and it is therefore expected to be a pure polylogarithmic function of weight five. Its functional form has been constrained by considering collinear limits and the Regge limit [14-22], but despite progress in understanding these limits it remained unclear whether three-loop corrections to the dipole formula are in fact present. The situation changed with the completion of the direct computation of ∆_n^(3).
Computing connected graphs
We set up the calculation of the soft anomalous dimension through the renormalization of a product of semi-infinite Wilson lines with four-velocities β_k, with β_k^2 ≠ 0. By considering non-lightlike lines we avoid collinear singularities, and obtain the kinematic dependence via the cusp angles γ_ij. The result ∆_n^(3) for massless scattering is then recovered by considering the asymptotic lightlike limit β_k^2 → 0, where the kinematic dependence reduces to CICRs as in Eq. (2.8). Considering the set of contributing diagrams at three loops, it is clear at the outset that the diagrams that connect the maximal number of Wilson lines, that is four lines, shown in Fig. 1, have a special status: these are the only diagrams that depend on all six cusp angles γ_ij with 1 ≤ i < j ≤ 4. Hence these four diagrams are expected to involve non-trivial dependence on CICRs (defined in Eq. (2.8)). Importantly, this kinematic dependence remains in place upon taking the simultaneous lightlike limit, γ_ij → −∞. In contrast, all other webs reduce in this limit to a sum of products of logarithms of γ_ij. This applies in particular to the webs of Fig. 2: these webs connect all four lines, but they never involve any set of four angles that may form a cross ratio as in Eq. (2.8). It is clear that webs connecting three or two lines out of the four, as in Figs. 3 and 4, cannot give rise to cross ratios, and so they reduce to polynomials in logarithms of γ_ij for near-lightlike kinematics. Of course, cross ratios may be formed upon summing the webs of Figs. 2, 3 and 4, but these contributions are necessarily polynomial in logarithms of the CICRs.
It follows that the primary ingredient in deriving ∆_n^(3)({ρ_ijkl}) is the computation of the four-line connected diagrams in Fig. 1. Below we briefly describe the strategy of the calculation and the result we obtain for these diagrams, before presenting the complete result for the anomalous dimension. The computation of all diagrams will be discussed in a dedicated publication [46].
We set up the calculation in configuration space, with four non-lightlike Wilson lines with four-velocities β_k. The positions of the three- and four-gluon vertices off the Wilson lines are integrated over in D = 4 − 2ε dimensions. Following Refs. [35,41], we introduce an infrared regulator which exponentially suppresses contributions far along the Wilson lines. This is necessary to capture the ultraviolet singularity associated with the renormalization of the vertex where the Wilson lines meet. Upon performing the integral over the overall scale, we observe that each of the diagrams in Fig. 1 has a single 1/ε ultraviolet pole, without any subdivergences. The contribution of each diagram to the soft anomalous dimension is the coefficient of that pole, which is finite in D = 4 dimensions.
Next, considering the leftmost diagram in Fig. 1, we observe that for fixed gluon-emission vertices along the Wilson lines, the integral over the position of the four-gluon vertex gives rise to a four-mass one-loop box integral in four dimensions. Similarly, in each of the remaining three diagrams in Fig. 1, the integrals over the positions of the two three-gluon vertices yield a four-mass diagonal-box two-loop integral (footnote 2). We proceed by deriving multifold Mellin-Barnes (MB) representations for each of these off-shell four-point functions.
Next we integrate over the positions of the gluon-emission vertices along the Wilson lines, obtaining an MB representation of each of the connected graphs for the general non-lightlike case, depending on all six cusp angles γ_ij. We proceed by applying standard techniques [48] to perform a simultaneous asymptotic expansion near the lightlike limit γ_ij → −∞, where we neglect any term suppressed by powers of 1/γ_ij, obtaining a sum of lower-dimensional MB integrals. These are converted into parametric integrals using the methods of Ref. [49], which we then performed by means of modern analytic integration techniques [50]. The results for the leftmost diagram in Fig. 1, w_4g, and for the second diagram, w_(12)(34), are expressed in terms of two pure polylogarithmic functions g_0 and g_1 of uniform weight five in the variables z ≡ z_ijkl and z̄ ≡ z̄_ijkl, which are related to the CICRs of Eq. (2.8) via z z̄ = ρ_ijkl and (1 − z)(1 − z̄) = ρ_ilkj (Eq. (3.3)). The remaining two diagrams in Fig. 1 can be obtained from w_(12)(34) by appropriate permutations of the lines. The sum over all four connected graphs, w^con. = w_4g + w_(12)(34) + w_(13)(24) + w_(14)(23), displays a drastic simplification as compared to individual diagrams: individual graphs are not pure functions, but the sum is. Specifically, the function g_1, which appears in all of them, exactly cancels in the sum, and one is left with three permutations of the function g_0, which has no rational prefactor. This is in agreement with the expectation that (maximally helicity-violating) amplitudes in N = 4 Super Yang-Mills are pure and have a uniform maximal weight. The next simplification occurs upon applying the Jacobi identity to the sum of connected four-line webs, after which the result is expressed in terms of two functions G_{1,2}(z, z̄, γ_ij). Crucially, these functions separate as G_{1,2}(z, z̄, γ_ij) = P_{1,2}(z, z̄) + Q^{1,2}_log(γ_ij), where P_{1,2}(z, z̄) is a sum of harmonic polylogarithms (of weight 5) depending exclusively on the CICRs via z and z̄, while Q^{1,2}_log(γ_ij) is a polynomial in the logarithms of γ_ij.
This separation is required for the full result for ∆_n^(3) to be a function of CICRs: indeed, Q^{1,2}_log(γ_ij) cancels against contributions of the remaining diagrams (footnote 3), which are also polynomial in log(γ_ij), leaving behind pure CICR dependence.
Colour structure and colour conservation at three loops
Let us now turn to discuss the colour structure of the soft anomalous dimension for n coloured lines. According to the non-Abelian exponentiation theorem [42], the colour factors in ∆_n must all correspond to connected graphs (footnote 4). Thus, at three loops we expect the "quadrupole" colour structures of Fig. 1, i.e., T_i^a T_j^b T_k^c T_l^d f^{abe} f^{cde} plus permutations, where the four connected lines (i, j, k and l) are any subset of four out of the n lines.
The next question is then whether any other colour factor is admissible in ∆_n, namely ones that involve fewer than four lines. One possibility could be tripole corrections correlating three partons, with colour factors proportional to i f^{abc} T_i^a T_j^b T_k^c. Such tripoles appear starting from two loops for non-lightlike Wilson lines [25-35], but are excluded in the lightlike case at any order, because the corresponding kinematic dependence on the three momenta is bound to violate the rescaling-symmetry constraints [15-17]. While a constant correction proportional to i f^{abc} T_i^a T_j^b T_k^c is excluded by Bose symmetry, kinematic-independent corrections involving three lines of the form f^{abe} f^{cde} {T_i^a, T_i^d} T_j^b T_k^c, as in the first diagram of Fig. 3, are admissible and do indeed appear.
Footnote 3: One notes that the separation in (3.5), while highly constraining, is not unique: powers of logarithms of CICRs can be expressed in either way. The computation of the remaining diagrams of Figs. 2, 3 and 4 uniquely fixes the answer.
Footnote 4: In this context a "connected graph" is one that remains connected upon removing all Wilson lines, so for example all diagrams in Fig. 1 are connected while all those of Fig. 2 are non-connected. This does not imply that the latter do not contribute; they do, but with the colour factors of the former. For further details see Refs. [35,38-42].
We conclude that the general form of the non-dipole correction to the soft anomalous dimension for n coloured lines is ∆_n^(3) = Σ_{(i,j,k,l)} f^{abe} f^{cde} T_i^a T_j^b T_k^c T_l^d F(ρ_ikjl, ρ_iljk) + C Σ_{(i,j,k)} f^{abe} f^{cde} {T_i^a, T_i^d} T_j^b T_k^c, where C is a constant and F is a function of two CICRs. Note that the contribution proportional to the constant C is present starting from the three-line case, n = 3. Both C and F are independent of the colour degrees of freedom. The terms in this sum are not all independent, because of the antisymmetry of the structure constants and the Jacobi identity. We emphasise that C and F are independent of the number of legs n. We can therefore determine these functions by considering the simplest case of four Wilson lines, ∆_4^(3).
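As a small numerical sanity check (our own illustration, not part of the talk), the Jacobi identity invoked here can be verified directly for the su(2) structure constants, f^{abc} = ε_{abc}; the function names below are ours.

```python
from itertools import product

def eps(a, b, c):
    # Levi-Civita symbol epsilon_{abc}: the structure constants of su(2)
    if (a, b, c) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1.0
    if (a, b, c) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1.0
    return 0.0

def jacobi_violation(a, b, c, d):
    # sum_e [ f_{ade} f_{bce} + f_{bde} f_{cae} + f_{cde} f_{abe} ], which the
    # Jacobi identity requires to vanish for all a, b, c, d
    return sum(eps(a, d, e) * eps(b, c, e)
               + eps(b, d, e) * eps(c, a, e)
               + eps(c, d, e) * eps(a, b, e)
               for e in range(3))

ok = all(abs(jacobi_violation(a, b, c, d)) < 1e-12
         for a, b, c, d in product(range(3), repeat=4))
print(ok)  # True
```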
In organising the calculation we made use of non-Abelian exponentiation, and computed webs, namely diagrams that contribute directly to the exponent. A web can be either an individual connected diagram, as in Fig. 1, or a set of non-connected diagrams which are related by permuting the order of gluon attachments to the Wilson lines [38-42]; representative diagrams from such webs are shown in Fig. 2. In either case, the contribution to ∆_4^(3) is associated with fully connected colour factors. The classification of webs connecting four and three Wilson lines was done in Ref. [42].
Another important element in organising the calculation is colour conservation. The anomalous dimension Γ_n is an operator in colour space that acts on the hard amplitude, which is a colour singlet and must therefore satisfy Σ_{i=1}^{n} T_i = 0 when acting on it [8]. Let us consider next diagrams that connect fewer Wilson lines. The sum of all two-line three-loop diagrams may be written as a dipole term proportional to T_1 · T_2, contributing to Γ_n^{dip.} of Eq. (2.6), plus a term f^{abe} f^{cde} {T_1^a, T_1^d} {T_2^b, T_2^c} H_2(1, 2) involving an anti-commutator on each of the lines, which is relevant for the calculation of ∆_n^(3); its kinematic dependence is contained in H_2(1, 2) = H_2(2, 1). Similarly, the sum of all three-line diagrams takes the form of the colour structure f^{abe} f^{cde} {T_1^a, T_1^d} T_2^b T_3^c multiplying a kinematic function H_3, plus permutations. We omitted here the tripole term, proportional to f^{abc} T_1^a T_2^b T_3^c, which vanishes for lightlike kinematics where γ_ij → −∞. Note that in this limit H_2 and H_3 are necessarily polynomials in log(−γ_ij).
Summing over all subsets of two and three lines out of four and using colour conservation, the three- and two-line contributions of Eq. (4.7) must be added to the contribution of the four-line diagrams in Eq. (4.4) to obtain the final, gauge-invariant result for the anomalous dimension, ∆_4^(3)(1, 2, 3, 4). This may then be contrasted with the general form for ∆_4^(3) in Eq. (4.1). Upon applying colour conservation to the latter, the comparison leads to the following conclusions:
• The combination multiplying the two-line colour factor in Eq. (4.7) must be proportional to the constant C in Eq. (4.1) (Eq. (4.9)).
• The function F is obtained through a combination of four-, three- and two-line kinematic functions H_n (Eq. (4.10)).
The above equations put strong constraints on the kinematic functions H_n: the function F depends on CICRs, while the individual functions H_n on the right-hand side of Eq. (4.10) depend on logarithms of cusp angles. These must therefore conspire to combine into logarithms of CICRs. In addition, C is a constant, so the kinematic dependence of the functions H_3 must cancel in the sum in Eq. (4.9). Our computation satisfies all these constraints, providing a strong check of the result.
The three-loop correction to the soft anomalous dimension
Adding up all contributing webs according to Eqs. (4.10) and (4.9), we obtain the function F and the constant C of Eq. (4.1). We recall that z = z_ijkl and z̄ = z̄_ijkl are related to the CICRs by Eq. (3.3), and the functions L_w(z) appearing in the result are Brown's single-valued harmonic polylogarithms (SVHPLs) [51] (see also Ref. [53]), where w is a word made out of 0's and 1's. Note that we kept implicit the dependence of these functions on z̄. SVHPLs can be expressed in terms of ordinary harmonic polylogarithms (HPLs) [52] in z and z̄. The result for F in terms of HPLs is attached in computer-readable format to Ref. [1].
Let us now briefly discuss the main features of the result. First, we note that while F(z) is defined everywhere in the physical parameter space, it is only single-valued in the part of the Euclidean region (the region where all invariants are spacelike, p_i · p_j < 0) where z and z̄ are complex conjugates of each other. Single-valuedness ensures that ∆_n^(3) has the correct branch-cut structure of a physical scattering amplitude [53,54]: it is possible to analytically continue the function to the entire Euclidean region while the function remains real throughout [55]. Next, note that if one considers F(z) as a function of two independent variables z and z̄ (not a complex-conjugate pair), this function has branch points in z and z̄ at 0, 1 and ∞. Crossing momenta from the final to the initial state is realized by taking monodromies around these points.
Making the permutation (Bose) symmetry manifest, the final answer may be written as in Eq. (5.2). An additional Z_2 symmetry arises from the definition of (z, z̄) in Eq. (3.3), which is invariant under swapping the two, z ↔ z̄. Hence F must be invariant under this transformation, i.e. F(z, z̄) = F(z̄, z). This symmetry is realised on the space of SVHPLs by the operation of reversal of words: if w is a word made out of 0's and 1's, and w̃ the reversed word, then L_w(z) = L_w̃(z̄) + ..., where the dots indicate terms proportional to multiple zeta values. Even functions then correspond to 'palindromic' words (possibly up to multiple zeta values), and indeed Eq. (5.2) is palindromic.
Finally, let us comment on the momentum-conserving limit of ∆_4^(3), which corresponds to two-to-two massless scattering. In this limit we have z̄ = z = s_12/s_13 = −s/(s + t). It follows that for two-to-two massless scattering F(z) can be expressed entirely in terms of HPLs with indices 0 and −1 depending on s/t, in agreement with known results for on-shell three-loop four-point integrals [34,56,57].
A further consistency check of the result is available upon specialising to the Regge limit (footnote 5). By expanding Eq. (5.2) at large s/(−t) we find no α_s^3 ln^p(s/(−t)) terms for any p > 0: ∆_4^(3) simply tends to a constant in this limit. This is entirely consistent with the behaviour of a two-to-two scattering amplitude in the Regge limit [19,20,58]; indeed, the dipole formula alone is consistent with predictions from the Regge limit through next-to-next-to-leading logarithms at three loops [58].
Two-particle collinear limits
Finally, let us comment on the behaviour of ∆_n^(3) in the limit where two final-state partons become collinear. A well-known property of an n-parton scattering amplitude is that the limit where any two coloured partons become collinear can be related to an (n − 1)-parton amplitude, M_n(p_1, p_2, {p_j}) ≃ Sp(p_1, p_2) M_{n−1}(P, {p_j}), where one of the partons in M_{n−1}(P, {p_j}) replaces the collinear pair, and has a colour charge T = T_1 + T_2 and momentum P = p_1 + p_2, while the remaining (n − 2) partons {p_j} are the non-collinear ones in the original amplitude, which we refer to as "the rest of the process" below. The splitting amplitude Sp(p_1, p_2) is an operator in colour space which captures the singular terms for P^2 → 0. All elements in Eq. (6.1) have infrared singularities, and these must clearly be related. Furthermore, Sp is expected to depend only on the quantum numbers of the collinear pair [59], to all orders in perturbation theory. Hence also its soft anomalous dimension, Γ_Sp = Γ_Sp^{dip.} + ∆_Sp (Eq. (6.2)), must be independent of the momenta and colour degrees of freedom of the rest of the process. This property is automatically satisfied by the dipole formula, but it is highly non-trivial for it to persist when quadrupole corrections are present. Indeed, the quadrupole interaction might introduce correlations between the collinear pair and the rest of the process. In Refs. [16,18] this property was used to constrain ∆_n, but this was done under the assumption that C in Eq. (4.1) vanishes. Given our result for ∆_n^(3), we can now compute ∆_Sp^(3) explicitly. We note that ∆_Sp^(3) depends only on the colour degrees of freedom of the collinear pair, and is entirely independent of the kinematics, and hence fully consistent with general expectations (footnote 6) [59]. We emphasise that ∆_Sp^(3) is independent of the value of n that was used to compute it.
Conclusions
To conclude, we computed [1,46] all connected graphs contributing to the soft anomalous dimension in multi-parton scattering and determined the first correction going beyond the dipole formula. We find that such corrections appear at three loops already for three coloured partons, but they only involve kinematic dependence in amplitudes with at least four coloured partons, where conformally-invariant cross ratios can be formed. The final result is remarkably simple: it is expressed in terms of single-valued harmonic polylogarithms of uniform weight five. Finally, we recover the expected behaviour of amplitudes in both the Regge limit and two-particle collinear limits, and make further concrete predictions in both these limits.
Effectiveness of the fetal pillow to prevent adverse maternal and fetal outcomes at full dilatation cesarean section in routine practice
The fetal pillow has been suggested to reduce maternal trauma and fetal adverse outcomes when used to disimpact the fetal head at full dilatation cesarean section.
| INTRODUCTION
Cesarean section at full cervical dilatation (FDCS) occurs in 1.24%-2.1% of all deliveries 1,2 and is associated with both maternal and fetal complications, including maternal trauma and low cord pH. 3 With the rate of cesarean section continuing to rise and some concerns raised about decreasing skills in complex instrumental delivery, it is likely that this trend will continue.
The fetal pillow is a one-use disposable silicone device consisting of a soft flat base with a balloon compartment, which can be inserted into the vagina and then inflated in order to elevate the fetal head before FDCS (Safe Obstetric Systems, Essex, UK). The use of the fetal pillow has been suggested to reduce maternal trauma from bleeding and uterine or vaginal tears at FDCS. 4 However, a recent meta-analysis of the limited data available suggests that reverse breech extraction for delivery of the fetus at FDCS is superior to vaginal push methods. The authors of that review were unable to formally assess the use of a fetal pillow because of a lack of evidence. 5 Use of the fetal pillow has been suggested to reduce blood loss, improve cord pH, and reduce duration of hospital admission when compared with no manipulation or elevation of the fetal head manually by an assistant with their hand within the vagina. 6,7 A recent study also suggested that fetal pillow usage reduced hospital stay by 9 hours. 8 A subsequent randomized controlled trial did suggest that use of the fetal pillow at FDCS reduced the incidence of uterine extensions but no other outcome. 4 The present study was designed to review whether use of the fetal pillow at FDCS reduced estimated blood loss and need for transfusion when compared with cases where a fetal pillow had not been used.
| MATERIAL AND METHODS
We performed a retrospective cohort study of all cases where a cesarean section for a singleton pregnancy was performed at full dilatation between September 2014 and March 2018 at Liverpool Women's Hospital, a large UK teaching hospital. Each case was categorized by whether or not a fetal pillow was used. All outcomes were obtained from the hospital electronic patient record system Meditech, and neonatal outcomes were obtained from Badger.net.
The fetal pillow was introduced at Liverpool Women's Hospital as an option for the management of FDCS in September 2014. All senior trainees and consultants were fully trained in the use of the fetal pillow, FDCS, and rotational vaginal deliveries. The decision to use a fetal pillow or not was left to the individual clinician.
Statistical analysis was performed using a t-test for normally distributed values and relative risk (RR) when comparing between interventions.
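For reference, relative-risk comparisons of the kind reported below can be reproduced from 2×2 counts with the standard Katz log-interval. This is a hedged sketch: the function name and the illustrative counts are ours, not taken from the study.

```python
import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of group A vs group B with a 95% Katz log confidence interval.

    Illustrative helper, not code from the study; assumes non-zero event counts.
    """
    rr = (events_a / total_a) / (events_b / total_b)
    # standard error of log(RR) under the Katz method
    se = math.sqrt(1 / events_a - 1 / total_a + 1 / events_b - 1 / total_b)
    lo = rr * math.exp(-z * se)
    hi = rr * math.exp(z * se)
    return rr, lo, hi

# Hypothetical counts: 10/100 events with the device vs 5/100 without
rr, lo, hi = relative_risk(10, 100, 5, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 2.0 0.71 5.64
```

A CI that crosses 1.0, as here, corresponds to the non-significant P values quoted in the Results.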
| Ethical approval
Ethical approval was provided as part of hospital audit processes on 2 June 2015 (Liverpool Women's Hospital 2016/016).
| RESULTS
There were 471 cases of FDCS during the study period. We excluded 80 cases: 48 twin pregnancies, 18 breech presentations, 13 misreported as being at full dilatation, and a single antepartum stillbirth.
This left 391 cases for assessment, of which 170 used a fetal pillow and 221 did not (Figure 1).
All neonates were born alive during the study period irrespective of fetal pillow usage. There was no statistical difference in neonatal outcomes by use of fetal pillow for Apgar score <7 at 5 minutes (RR 1.30, 95% CI 0.60-2.82, P = .51) and admission to the neonatal unit (RR 0.72, 95% CI 0.40-1.31, P = .29) ( Table 3). There was no statistically significant effect on arterial pH < 7.1 in those neonates managed with fetal pillow compared with those without fetal pillow (RR 0.54, 95% CI 0.28-1.02, P = .06). There were no episodes of fetal skull fracture or other birth trauma in the study period.
We further assessed those deliveries thought most likely to benefit from use of the fetal pillow, namely those with the fetal head at or below the level of the ischial spines (fetal pillow used 139, no fetal pillow 126) (Table 4) or below the ischial spines (fetal pillow used 72, no fetal pillow 40) (Table 5). There was no benefit from use of the fetal pillow on the maternal outcome of estimated blood loss >1000 mL either at or below the spines (RR 1.61, 95% CI 0.95-2.72, P = .0747) or below the spines (RR 2.78, 95% CI 0.64-12.06, P = .1726). There was no reduction in estimated blood loss >1500 mL either at the spines (RR 2.72, 95% CI 0.90-8.22, P = .0761) or below the spines (RR 2.22, 95% CI 0.26-19.21, P = .726). There was no effect upon the need for maternal blood transfusion either at the spines (RR 1.81, 95% CI 0.56-5.88, P = .3213) or below the spines (RR 2.22, 95% CI 0.26-19.21, P = .726). There was no effect on the frequency of uterine extensions when the fetal head was at or below the spines (RR 0.87, 95% CI 0.56-1.36, P = .56) or when it was below the spines (RR 0.90, 95% CI 0.41-1.99, P = .80).

Key Message
The use of the fetal pillow at full dilatation cesarean section appears to be ineffective at reducing significant maternal or fetal complications.
Fetal outcomes from use of the fetal pillow at or below the ischial spines showed no effect on Apgar score <7 at 5 minutes (RR 1.30, 95% CI 0.51-3.30, P = .5881), or admission to the neonatal unit (RR 0.67, 95% CI 0.31-1.39, P = .2793), but did reduce the chance of the neonate being born with an arterial pH < 7.1 (RR 0.39, 95% CI 0.20-0.80, P = .0094). There was no benefit in outcome from fetal pillow usage when station was below the spines for Apgar score <7
| DISCUSSION
The management of a fetus that does not deliver vaginally after full dilatation represents a common and complex intrapartum issue.
Changes in clinician skills may have led to an increased use of FDCS with its associated morbidity rather than more complex vaginal deliveries. 9 It is therefore attractive to look to devices such as the fetal pillow to improve clinical outcomes in these cases.
Our study represents the largest study to date on the routine clinical use of the fetal pillow to assist delivery of the fetus at FDCS. We were not able to demonstrate any clinically relevant benefit from the use of the fetal pillow over normal methods, either in maternal outcomes, such as hemorrhage, or in fetal outcomes. There was no benefit when the presenting part was at the level of the ischial spines or lower, suggesting a more deeply impacted fetal head. The only factor reaching statistical significance was a reduction in low arterial pH with fetal pillow use, but without a concomitant reduction in admission to the neonatal unit it is unclear what, if any, clinical relevance this has. Our findings are consistent with those of Hanley et al, but we were not able to identify a meaningful difference in length of hospital stay. 8 The limitations of our study include its non-randomized retrospective nature, and as such there may be some unknown factors concerning patient selection. It is perfectly plausible that patient selection for who received the fetal pillow had an influence on these findings, as demonstrated by the increased usage of the fetal pillow when station was below the ischial spines. Likewise, we did not assess clinician experience, although because of the size and duration of the study the authors feel that this is unlikely to have been a major influencing factor. However, this study is large and does reflect the experience of routine clinical use of the fetal pillow within a large UK maternity unit where its use was well established.
There were more nulliparous women in the fetal pillow group than in those managed without, and this may have been an influencing factor on the decision to use the device. The only randomized study of fetal pillow use demonstrated a significant difference in blood loss >1000 mL, need for blood transfusion, and surgeon grading of major uterine extension at cesarean section. 4 We observed no effect on uterine extensions between those women managed with a fetal pillow and those without in any subgroup.

[Table 1: Population characteristics by use of fetal pillow. Abbreviations: BMI, body mass index; CS, cesarean section; OA, occipito-anterior; OP, occipito-posterior; OT, occiput transverse; RR, relative risk.]
| CONCLUSION
We have presented the largest study to date of the use of the fetal pillow at FDCS.
A Simple Method for Detecting Interactions between a Treatment and a Large Number of Covariates
We consider a setting in which we have a treatment and a large number of covariates for a set of observations, and wish to model their relationship with an outcome of interest. We propose a simple method for modeling interactions between the treatment and covariates. The idea is to modify the covariate in a simple way, and then fit a standard model using the modified covariates and no main effects. We show that coupled with an efficiency augmentation procedure, this method produces valid inferences in a variety of settings. It can be useful for personalized medicine: determining from a large set of biomarkers the subset of patients that can potentially benefit from a treatment. We apply the method to both simulated datasets and gene expression studies of cancer. The modified data can be used for other purposes, for example large scale hypothesis testing for determining which of a set of covariates interact with a treatment variable.
Introduction
To develop strategies for personalized medicine, it is important to identify treatment-covariate interactions in the setting of a randomized clinical trial [Royston and Sauerbrei, 2008]. Confirming and quantifying the treatment effect is often the primary objective of a randomized clinical trial. Although important, the final result (positive or negative) of a randomized trial is a conclusion with respect to the average treatment effect on the entire study population. For example, a treatment may be no better than the placebo in the overall study population, but it may be better for a subset of patients. Identifying treatment-covariate interactions may provide valuable information for determining this subgroup of patients.
In practice, there are two commonly used approaches to characterize potential treatment-covariate interactions. First, a panel of simple patient subgroup analyses, in which the treatment and control arms are compared in different patient subgroups defined a priori (such as male, female, diabetic and non-diabetic patients), may be performed following the main comparison. Such an exploratory approach focuses mainly on simple interactions between the treatment and one dichotomized covariate. However, it will often suffer from false-positive findings due to multiple testing, and will not find complicated treatment-covariate interactions.
In a more rigorous analytic approach, treatment and covariate interactions can be examined in a multivariate regression analysis in which products of the binary treatment indicator and a set of baseline covariates are included in the regression model. Recent breakthroughs in biotechnology make a vast amount of data available for exploring potential interaction effects with the treatment and for assisting in optimal treatment selection for individual patients. However, it is very difficult to detect interactions between the treatment and high dimensional covariates via direct multivariate regression modeling. Appropriate variable selection methods such as the Lasso are needed to reduce the number of covariates having an interaction with the treatment. The presence of main effects, which often have a bigger effect on the outcome than the treatment interactions, further compounds the difficulty of dimension reduction, since a subset of variables needs to be selected for modeling the main effects as well.
Recently, Bonetti and Gelber [2004] formalized the subpopulation treatment effect pattern plot (STEPP) for characterizing interactions between the treatment and continuous covariates. Sauerbrei et al. [2007] proposed an efficient algorithm for multivariate model-building with flexible fractional polynomial interactions (MFPI) and compared the empirical performance of MFPI with STEPP. Su et al. [2008] proposed a classification and regression tree method to explore covariate and treatment interactions in survival analysis. Tian and Tibshirani [2010] proposed an efficient algorithm to construct an index score, the sum of selected dichotomized covariates, to stratify the patient population according to the treatment effect. In more recent work, Zhao et al. [2012] proposed a novel approach to directly estimate the optimal treatment selection rule by maximizing the expected clinical utility, which is equivalent to a weighted classification problem. There is also a rich Bayesian literature on flexibly modeling nonlinear and nonadditive/interaction relationships between covariates and responses [LeBlanc, 1995, Chipman et al., 1998, Gustafson, 2000, Chen et al., 2012]. However, most of these existing methods, except that proposed by Zhao et al. [2012], are not designed to deal with high-dimensional covariates.
In this paper, we propose a simple approach to estimate covariate and treatment interactions without the need for modeling main effects. The idea is simple and, in a sense, obvious. We simply code the treatment variable as ±1 and then include the products of this variable with centered versions of each covariate in the regression model.

Figure 1: Example of the modified covariate approach, applied to gene expression data from multiple myeloma patients who were given one of two treatments in a randomized trial. Our procedure constructed a gene score based on 20 genes, to detect gene expression-treatment interactions. The numerical score was constructed on a training set, and then categorized into low, medium and high. The panels show the survival curves for a separate test set, overall and stratified by the score.

Figure 1 gives a preview of the results of our method. The data consist of gene expression measurements from multiple myeloma patients, who were randomized to one of two treatments. Our proposed method constructs a numerical gene score on a training set to reveal gene expression-treatment interactions. The panels show the estimated survival curves for patients in a separate test set, overall and stratified by the score. Although there is no significant survival difference between the treatments overall, we see that patients with medium and high gene scores have better survival with treatment PS341 than with Doxorubicin.
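The coding step described above is easy to implement. The sketch below (all names illustrative, assuming numpy arrays with the treatment coded as ±1) builds the modified design matrix W* = W(Z)·T/2 with an intercept column and centered covariates:

```python
import numpy as np

def modify_covariates(Z, T):
    """Build the modified design matrix W* = W(Z) * T / 2, where W(Z)
    appends an intercept to the centered covariates and T is coded +1/-1."""
    Z = np.asarray(Z, dtype=float)
    T = np.asarray(T, dtype=float)
    Zc = Z - Z.mean(axis=0)                       # center each covariate
    W = np.hstack([np.ones((Z.shape[0], 1)), Zc])
    return W * (T[:, None] / 2.0)

# toy example: two covariates, balanced randomization
rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 2))
T = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
W_star = modify_covariates(Z, T)
print(W_star.shape)   # (6, 3)
```

Centering the covariates keeps the modified intercept column T/2 approximately orthogonal to the other modified columns under randomization.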
In Section 2, we describe the methods for continuous, binary and survival outcomes. We also establish a simple causal interpretation of the proposed method in several cases. In Section 3, the finite sample performance of the proposed method is investigated via an extensive numerical study. In Section 4, we apply the proposed method to a real data example on Tamoxifen treatment for breast cancer patients. Finally, potential extensions and applications of the method are discussed in Section 5.
The proposed method
In the following, we let T = ±1 be the binary treatment indicator, and Y(1) and Y(−1) be the potential outcomes if the patient received treatment T = 1 and −1, respectively. We only observe Y = Y(T), T and Z, a q-dimensional baseline covariate vector. We assume that the observed data consist of N independent and identically distributed copies of (Y, T, Z), {(Y_i, T_i, Z_i), i = 1, ..., N}. Furthermore, we let W(·): R^q → R^p be a p-dimensional function of the baseline covariates Z that always includes an intercept. We denote W(Z_i) by W_i in the rest of the paper. Here the dimension of W_i can be large relative to the sample size N. For simplicity, we assume that Prob(T = 1) = Prob(T = −1) = 1/2.
Continuous response model
When Y is a continuous response, a simple multivariate linear regression model for characterizing the interaction between the treatment and the covariates is

Y = β0'W(Z) + γ0'W(Z)·T/2 + ε,  (1)

where ε is a mean zero random error. In this simple model, the interaction term γ0'W(Z)·T/2 models the heterogeneous treatment effect across the population, and the linear combination γ0'W(Z) can be used to identify the subgroup of patients who may or may not benefit from the treatment. Specifically, under model (1), we have

∆(z) ≡ E(Y(1) − Y(−1) | Z = z) = γ0'W(z),

i.e., γ0'W(z) measures the causal treatment effect for patients with baseline covariate z. With the observed data, γ0 can be estimated along with β0 via the ordinary least squares method.
On the other hand, noting the relationship ∆(z) = E(2Y T | Z = z), one may estimate γ0 by directly minimizing

N^{-1} Σ_{i=1}^{N} {2Y_i T_i − γ'W(Z_i)}².

We call this the modified outcome method, where 2Y T can be viewed as the modified outcome; it was first proposed in the Ph.D. thesis of James Sinovitch, Harvard University.
Under the simple linear model (1), both estimators are consistent for γ0, and the full least squares approach is in general more efficient than the modified outcome method. In practice, the simple multivariate linear regression model is often just a working model approximating the complicated underlying probabilistic relationship between the treatment, the baseline covariates and the outcome. It may come as a surprise that, even when model (1) is misspecified, the multivariate linear regression and modified outcome estimators still converge to the same deterministic limit γ*, and W(z)'γ* is still a sensible estimator of the interaction effect, in the sense that it seeks the "best" function of z in the functional space F = {γ'W(z)} to approximate ∆(z) by solving the optimization problem

γ* = argmin_γ E{∆(Z) − γ'W(Z)}²,

where the expectation is with respect to Z.
The Modified Covariate Method
The modified outcome estimator defined above is useful in the Gaussian case, but does not generalize easily to more complicated models. Hence we propose a new estimator that is equivalent to the modified outcome approach in the Gaussian case and extends easily to other models. This is the main proposal of this paper.
We consider the simple working model

Y = γ0'W(Z)·T/2 + ε,  (3)

where ε is a mean zero random error. Based on model (3), we propose the modified covariate estimator γ̂ as the minimizer of

N^{-1} Σ_{i=1}^{N} {Y_i − γ'W(Z_i)·T_i/2}².

The fact that we can directly estimate γ0 in model (3) without considering an intercept α0 is due to the orthogonality between W(Z_i)·T_i and the intercept, which is a consequence of the randomization. That is, we simply multiply each component of W_i by one-half the treatment assignment indicator (= ±1) and perform a regular linear regression. Since T_i² = 1, the modified outcome and modified covariate estimates are identical and share the same causal interpretation in the simple Gaussian model. Operationally, we can omit the intercept and perform a simple linear regression with the modified covariates. In general, we propose the following modified covariate approach:

1. Modify the covariates: W*_i = W(Z_i)·T_i/2.

2. Fit an appropriate regression model to the modified observations (Y_i, W*_i), i = 1, ..., N, without an intercept, yielding γ̂.

3. Use γ̂'W(z) to stratify patients for individualized treatment selection.

Figure 2 illustrates how the modified covariate method works for a single covariate Z in two treatment groups. The raw data are shown on the left, and the data with the modified covariate are shown on the right. The slope of the regression line computed in the right panel estimates the treatment-covariate interaction.
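Since T² = 1, we have (Y − γ'W·T/2)² = (2YT − γ'W)²/4, so the modified covariate and modified outcome least squares problems share the same minimizer. A small numpy check of this identity on toy data (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
Z = rng.normal(size=(n, 3))
T = rng.choice([-1.0, 1.0], size=n)
# main effect on the first covariate, interaction (coefficient 1.5) on the second
Y = Z[:, 0] + 1.5 * (Z[:, 1] * T / 2) + rng.normal(size=n)

W = np.column_stack([np.ones(n), Z])          # W(Z) with intercept
W_star = W * (T[:, None] / 2)                 # modified covariates

gamma_mc, *_ = np.linalg.lstsq(W_star, Y, rcond=None)     # modified covariate fit
gamma_mo, *_ = np.linalg.lstsq(W, 2 * Y * T, rcond=None)  # modified outcome fit
print(np.allclose(gamma_mc, gamma_mo))        # True: the two estimates coincide
```

Both normal equations reduce to (W'W)γ = 2W'(YT), which is why the solutions agree exactly up to numerical error.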
The advantage of this new approach is twofold: it avoids having to model the main effects directly, and the resulting estimator has a causal interpretation regardless of the adequacy of the assumed working model (3). Furthermore, unlike the modified outcome method, the new approach generalizes straightforwardly to other types of outcome.
Binary Responses
When Y is a binary response, in the same spirit as in the continuous outcome case, we propose to fit a multivariate logistic regression model with the modified covariates W* = W(Z)·T/2, generalizing (5):

logit{Prob(Y = 1 | Z, T)} = γ0'W(Z)·T/2.  (7)

Note that if model (7) is correctly specified, then ∆(z) = Prob(Y(1) = 1 | Z = z) − Prob(Y(−1) = 1 | Z = z) is a monotone increasing function of γ0'W(z), and thus γ0'W(z) has an appropriate causal interpretation. However, even when model (7) is not correctly specified, we can still estimate γ0 by treating (7) as a working model.
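For a binary outcome the recipe is the same: fit a logistic regression on W* = W(Z)·T/2 with no intercept term. A minimal sketch with a hand-rolled Newton solver (illustrative; in practice one would use a standard, possibly penalized, GLM routine):

```python
import numpy as np

def logistic_mle(X, y, steps=25):
    """Plain Newton-Raphson for logistic regression without an intercept."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                      # score
        hess = X.T @ (X * (p * (1 - p))[:, None]) # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(3)
n = 5000
Z = rng.normal(size=n)
T = rng.choice([-1.0, 1.0], size=n)
lin = 1.0 * (Z * T / 2)                           # true interaction coefficient 1
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-lin))).astype(float)
W_star = np.column_stack([np.ones(n), Z]) * (T[:, None] / 2)
gamma = logistic_mle(W_star, y)
print(gamma)   # second entry close to 1
```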
Figure 2: Example of the modified covariate approach. The raw data are shown on the left, consisting of a single covariate Z and a treatment T = −1 or 1. The treatment-covariate interaction has slope γ approximately equal to 1. In the right panel we have plotted the response against Z·T/2. The regression line computed in the right panel estimates the treatment effect for each given value of the covariate Z.

In general, the maximum likelihood estimator (MLE) of the working model converges to a deterministic limit γ*, and W(z)'γ*/2 can be viewed as the solution to the optimization problem

γ* = argmax_γ E[Y·γ'W(Z)·T/2 − log{1 + exp(γ'W(Z)·T/2)}],
where the expectation is with respect to (Y, T, Z). Therefore, when W(z) forms a "rich" set of basis functions, W(z)'γ*/2 approximates the maximizer over f of E[Y·f(Z)·T − log{1 + e^{f(Z)T}}].
In the appendix, we show that under very general assumptions the latter can be represented as

f*(z) = log[{1 + ∆(z)}/{1 − ∆(z)}].

Therefore,

[exp{γ̂'W(z)/2} − 1]/[exp{γ̂'W(z)/2} + 1]

may serve as an estimate of the covariate-specific treatment effect and be used to stratify the patient population, regardless of the validity of the working model assumptions.
As described above, the MLE from the working model (7) can always be used to construct a surrogate for the personalized treatment effect measured by the "risk difference" ∆(z) = Prob(Y(1) = 1 | Z = z) − Prob(Y(−1) = 1 | Z = z). On the other hand, different measures of the individualized treatment effect, such as the relative risk, may also be of interest. For example, if we consider an alternative approach to fitting the logistic regression working model (7), the resulting estimator γ̃ converges to a deterministic limit γ̃*, and W(z)'γ̃*/2 can be viewed as an approximation to log{∆̃(z)}, where ∆̃(z) = Prob(Y(1) = 1 | Z = z)/Prob(Y(−1) = 1 | Z = z) measures the treatment effect as a "relative risk" rather than a "risk difference". The detailed justification is given in Appendix 6.1.
Survival Responses
When the outcome variable is a survival time, we often do not observe the exact outcome for every subject in a clinical study due to incomplete follow-up. In this case, we assume that the outcome Y is a pair of random variables (X, δ) = {X̃ ∧ C, I(X̃ < C)}, where X̃ is the survival time of primary interest, C is the censoring time and δ is the censoring indicator. First, we propose to fit the Cox regression model

λ(t | Z, T) = λ0(t) exp{γ0'W(Z)·T/2},  (8)

where λ(t | ·) is the hazard function for the survival time X̃ and λ0(·) is a baseline hazard function free of Z and T. When model (8) is correctly specified, γ0'W(z) can be used to stratify the patient population according to ∆(z), where Λ0(t) = ∫_0^t λ0(u)du is a monotone increasing function (the baseline cumulative hazard function). Under the proportional hazards assumption, the maximum partial likelihood estimator γ̂ is a consistent estimator of γ0 and is semiparametric efficient. Moreover, even when model (8) is misspecified, we can still "estimate" γ0 by maximizing the partial likelihood function. In general, the resulting estimator γ̂ converges to a deterministic limit γ*, which is the root of a limiting score equation [Lin and Wei, 1989]. More generally, W(z)'γ*/2 can be viewed as the solution of an optimization problem defined through the counting process N(t) = I(X ≤ t)δ, where the expectation is with respect to (Y, T, Z). In Appendix 6.1, we show that the corresponding minimizer f* can be expressed through a monotone increasing function Λ*(u). Thus, when the censoring rates are balanced between the two arms, γ̂'W(z) can be used for characterizing the covariate-specific treatment effect and stratifying the patient population even when the working model (8) is misspecified.
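The survival version fits a Cox model to the modified covariates. The sketch below implements a bare-bones Newton iteration on the Breslow log partial likelihood (assumes no tied event times) purely for illustration; a real analysis would use an established, possibly penalized, Cox solver:

```python
import numpy as np

def cox_newton(X, time, event, steps=20):
    """Newton's method on the Breslow log partial likelihood (no ties)."""
    order = np.argsort(-time)            # decreasing time: risk sets are prefixes
    X, event = X[order], event[order].astype(float)
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        w = np.exp(X @ beta)
        S0 = np.cumsum(w)                                   # risk-set sums
        S1 = np.cumsum(w[:, None] * X, axis=0)
        Xbar = S1 / S0[:, None]
        grad = ((X - Xbar) * event[:, None]).sum(axis=0)    # score
        S2 = np.cumsum(w[:, None, None] * X[:, :, None] * X[:, None, :], axis=0)
        V = S2 / S0[:, None, None] - Xbar[:, :, None] * Xbar[:, None, :]
        info = (V * event[:, None, None]).sum(axis=0)       # information
        beta = beta + np.linalg.solve(info, grad)
    return beta

rng = np.random.default_rng(8)
n = 3000
Z = rng.normal(size=n)
T = rng.choice([-1.0, 1.0], size=n)
W_star = np.column_stack([np.ones(n), Z]) * (T[:, None] / 2)
eta = W_star @ np.array([0.0, 1.0])                  # true interaction coef 1
latent = rng.exponential(scale=1.0 / np.exp(eta))    # hazard = exp(eta)
C = rng.exponential(scale=3.0, size=n)               # independent censoring
time = np.minimum(latent, C)
event = latent < C
beta = cox_newton(W_star, time, event)
print(beta)   # second entry close to 1
```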
Regularization for high dimensional data
When the dimension p of W* is high, we can apply appropriate variable selection procedures based on the corresponding working model. For example, the L1-penalized (Lasso) estimator proposed by Tibshirani [1996] can be applied directly to the modified data (6). In general, one may estimate γ by minimizing the working-model objective function plus the penalty λ Σ_{j=1}^{p} |γ_j|, where λ is a tuning parameter. It might be reasonable to suppose that the covariates interacting with the treatment are more likely to be the ones exhibiting important main effects themselves. Therefore, one could also apply the adaptive Lasso procedure [Zou, 2006] with feature weights ŵ_j proportional to the reciprocal of the univariate "association strength" between the outcome Y and the jth component of W(Z). Specifically, one may modify the penalty in (9) as λ Σ_{j=1}^{p} ŵ_j |γ_j|, where ŵ_j = |θ̂_j|^{-1} or (|θ̂_{j,−1}| + |θ̂_{j,1}|)^{-1}, and θ̂_{j,1}, θ̂_{j,−1} and θ̂_j are the estimated regression coefficients of the jth component of W(Z) in appropriate univariate regression analyses with observations from the group T = 1 only, from the group T = −1 only, and from both groups, respectively. Other regularization methods such as the elastic net may also be used [Zou and Hastie, 2005]. Interestingly, one can treat the modified data (6) as generic data and couple it with other statistical learning techniques. For example, one can apply a classifier such as prediction analysis of microarrays (PAM) to the modified data for the purpose of finding subgroups of samples in which the treatment effect is large. We can also perform large scale hypothesis testing on the modified data to determine which gene-treatment interactions have a significant effect on the outcome.
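A lasso fit on the modified data can be sketched with plain coordinate descent for the squared-error working model (the value of lam and all names are illustrative; in practice the penalty is tuned by cross-validation):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - n * lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(4)
n, p = 400, 20
Z = rng.normal(size=(n, p))
T = rng.choice([-1.0, 1.0], size=n)
# only the first two covariates interact with treatment
Y = Z[:, 0] * T + 0.5 * Z[:, 1] * T + rng.normal(size=n)
W_star = Z * (T[:, None] / 2)                        # modified covariates
beta = lasso_cd(W_star, Y, lam=0.1)
print(np.nonzero(beta)[0])   # the true interacting covariates survive
```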
Efficiency Augmentation
When the model (5, 7 or 8) with modified covariates is correctly specified, the MLE of γ* is asymptotically the most efficient estimator. However, when these models are treated as working models subject to misspecification, a more efficient estimator of the same deterministic limit γ* can be obtained. To this end, note that in general γ̂ is defined as the minimizer of an objective function motivated by a working model. Since E{T_i a(Z_i)} = 0 for any function a(z): R^q → R^p due to randomization, the minimizer of the augmented objective function, in which a term involving T_i a(Z_i) is subtracted, converges to the same limit as γ̂ when N → ∞. Furthermore, by selecting an optimal augmentation term a0(·), the minimizer of the augmented objective function can have smaller variance than the minimizer of the original objective function. In Appendix 6.2, we derive the optimal choices of a0(·) for continuous and binary responses. Therefore, we propose the following two-step procedure for estimating γ*:

1. Estimate the optimal a0(z):
(a) For a continuous response, fit the linear regression model E(Y | Z) = ξ'B(Z) for an appropriate function B(Z) with OLS; appropriate regularization is used if the dimension of B(Z) is high.
(b) For a binary response, fit the logistic regression model logit{Prob(Y = 1 | Z)} = ξ'B(Z) for an appropriate function B(Z) by maximizing the likelihood function; appropriate regularization is used if the dimension of B(Z) is high.
Let â0(z) denote the resulting estimate of the optimal augmentation term.

2. Estimate γ* by minimizing the objective function augmented with â0(·).

For a survival outcome, the log-partial likelihood function is not a simple sum of i.i.d. terms; however, in Appendix 6.2 we derive the optimal choice of a(z) in this case as well. Unfortunately, a0(z) depends on the unknown parameter γ*. On the other hand, in the high-dimensional case the interaction effect is usually small, and it is not unreasonable to assume that γ* ≈ 0. Furthermore, if the censoring patterns are similar in both arms, we have G1(u, z) ≈ G2(u, z).
Using these two approximations, we can simplify the optimal augmentation term. We therefore propose the following approach for implementing the efficiency augmentation procedure:
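For a continuous response, the two-step augmentation procedure amounts to: (1) fit the main-effect model E(Y | Z) = ξ'B(Z); (2) run the modified covariate regression on the residuals. A toy sketch with B(z) = (1, z) and plain OLS (the text regularizes both steps when dimensions are high; all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
Z = rng.normal(size=(n, 2))
T = rng.choice([-1.0, 1.0], size=n)
main = 2.0 * Z[:, 0] - 1.0 * Z[:, 1]                 # strong main effects
Y = main + 1.0 * (Z[:, 0] * T / 2) + rng.normal(size=n)

# step 1: estimate E(Y|Z) = xi'B(Z) with B(z) = (1, z)
B = np.column_stack([np.ones(n), Z])
xi, *_ = np.linalg.lstsq(B, Y, rcond=None)
resid = Y - B @ xi

# step 2: modified covariate regression on the residuals
W_star = np.column_stack([np.ones(n), Z]) * (T[:, None] / 2)
gamma_aug, *_ = np.linalg.lstsq(W_star, resid, rcond=None)
print(gamma_aug)   # entry for the first covariate close to 1, others near 0
```

Removing the fitted main effect shrinks the residual variance, which is what reduces the variance of the interaction estimate.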
Remark 1
When the response is continuous, the efficiency augmentation estimator is the minimizer of an objective function that can be rewritten as the least squares criterion with main effect ξ̂'B(Z_i) and interaction γ'W(Z_i)·T_i. This equivalence implies that the efficiency augmentation procedure is asymptotically equivalent to a simple multivariate regression with main effect ξ̂'B(Z) and interaction γ'W(Z)·T. This is not a surprise: as we pointed out in Section 2.1, the choice of the main effect in the linear regression does not affect the asymptotic consistency of the interaction estimate. On the other hand, a good choice of the main effect model can help to estimate the interaction, i.e., the personalized treatment effect, more accurately.
Another consequence is that one may directly use the same algorithm for solving the standard optimization problem to obtain the augmented estimator when the lasso penalty is used. For a binary or survival response, the augmented estimator under lasso regularization can be obtained with a slightly modified algorithm designed for lasso optimization as well. The detailed algorithm is given in Appendix 6.3.
Remark 2
For nonlinear models such as logistic and Cox regression, the augmentation method is NOT equivalent to the full regression approach including main effect and interaction terms. In those cases, different specifications of the main effects in the regression model result in asymptotically different estimates of the interaction term, which, unlike the proposed modified covariate estimator, in general cannot be interpreted as the personalized treatment effect.
Remark 3
With a binary response, an estimating equation targeting the relative risk can be constructed analogously, and the optimal augmentation term a0(z) can be approximated under the assumption γ* ≈ 0. The efficiency augmentation algorithm can then be carried out accordingly.
Remark 4
A similar technique can also be used to improve other estimators, such as that proposed by Zhao et al. [2012], where the surrogate objective function for the weighted misclassification error can be written in the form of (2.6) as well. The optimal function a0(z) needs to be derived case by case.
Numerical Studies
In this section, we perform an extensive numerical study to investigate the finite sample performance of the proposed method in various settings: the treatment may or may not have a marginal main effect between the two groups; the personalized treatment effect may depend on complicated functions of the covariates, such as interactions among covariates; and the regression model for detecting the interaction may or may not be correctly specified. Due to space limitations, we only present simulation results from selected representative cases. The results for other scenarios are similar to those presented.
For each simulated data set, we implemented three methods:

• full regression: The first method is to fit a multivariate linear regression with full main effect and covariate/treatment interaction terms; the Lasso is used to select the variables.

• new: The second method is to fit a multivariate linear regression with the modified covariates W* = (1, Z)'·T/2 as the covariates and no main effects, i.e., the dimension of the covariate matrix is p + 1. Again, the Lasso is used for selecting variables having a treatment interaction.
• new/augmented: the proposed method with efficiency augmentation, where E(Y | Z) is estimated with the lasso-regularized ordinary least squares method and B(z) = z.
For all three methods, we selected the Lasso penalty parameter via 20-fold cross-validation.
To evaluate the performance of the resulting score measuring the individualized treatment effect, we estimated the Spearman rank correlation coefficient between the estimated score and the "true" treatment effect in an independently generated set with a sample size of 10,000. Based on 500 sets of simulations, we plotted boxplots of the rank correlation coefficients between the estimated scores γ̂'Z and ∆(Z) under simulation settings (1), (2), (3) and (4) in the top left, top right, bottom left and bottom right panels of Figure 3, respectively. When the main effect is moderate and the covariates are independent (setting 1), the performance of the proposed method is better than that of the full regression approach. However, when the main effect is relatively big compared with the interactions (settings 3 and 4), the proposed method is unable to estimate the individualized treatment effect well and is actually inferior to the simple regression method. On the other hand, the performance of "new/augmented" is the best or nearly the best across all four settings and is sometimes substantially better than its competitors.
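The evaluation metric used throughout the simulations, the Spearman rank correlation between the estimated score and the true treatment effect on an independent test set, can be computed from scratch as follows (assumes no ties; all names illustrative):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))

rng = np.random.default_rng(6)
true_effect = rng.normal(size=1000)                  # stand-in for Delta(Z)
score = true_effect + 0.5 * rng.normal(size=1000)    # noisy estimated score
print(spearman(true_effect, score))                  # high but below 1
```

Because it depends only on ranks, this metric puts scores on different scales (e.g. risk difference vs. log relative risk) on the same footing, as noted in the text.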
Binary responses
For binary responses, we used the same simulation design as for the continuous response. Specifically, we generated N independent binary samples from the regression model (13), where all the model parameters were the same as in the case of a continuous response. Note that the logistic regression model is misspecified under the chosen simulation design. We also considered the same four settings with different combinations of β_j and ρ. For each simulated data set, we implemented three methods:

1. full regression: The first method is to fit a multivariate logistic regression with full main effect and covariate/treatment interaction terms, i.e., the dimension of the covariate matrix was 2(p + 1). The Lasso was used to select the variables.
2. new: The second method is to fit a multivariate logistic regression (without an intercept) with the modified covariates W* = (1, Z)'·T/2 as the covariates. Again, the Lasso was used for selecting variables having a treatment interaction.

3. new/augmented: the proposed method with efficiency augmentation.
To evaluate the performance of the resulting score measuring the individualized treatment effect, we estimated the Spearman rank correlation coefficient between the estimated score and the "true" treatment effect ∆(Z), whose expression involves Φ, the cumulative distribution function of the standard normal. Although the scores measuring the interaction from the first and the second/third methods differ even as the sample size goes to infinity, the rank correlation coefficients put them on the same footing for comparing performance.
In the top left, top right, bottom left and bottom right panels of Figure 4, we plotted the boxplots of the correlation coefficients between the estimated scores γ̂'Z and ∆(Z) under simulation settings (1), (2), (3) and (4), respectively. The patterns are similar to those for the continuous response. The "new/augmented" method performed the best or close to the best in all four settings. The efficiency gain of the augmented method in setting 4, where the main effect was relatively big and the covariates were correlated, was more significant than in the other settings.
In an additional simulation study, we also evaluated the empirical performance of the generalized modified covariate approach with the nearest shrunken centroid classifier. In one set of simulations, the binary response was simulated from model (13) with p = 50, n = 200, β_j = I(j ≤ 20)/2, γ_j = I(j ≤ 4)/2 and σ0 = √2. Here the first four predictors have a covariate-treatment interaction. We applied the nearest shrunken centroid classifier [Tibshirani et al., 2001] to the modified data (6), with the shrinkage parameter selected via 10-fold cross-validation. This produced a posterior probability estimate for {Y = 1}. We then applied this estimated posterior probability, as an interaction score, to an independently generated test set of size 400. We dichotomized the observations in the test set into high and low score groups according to the median value and calculated the differences between the two treatment arms in the high and low score groups separately. With 100 replications, boxplots of the differences in the high and low score groups are shown in the right panel of Figure 5. For comparison purposes, the empirical differences between the two arms in high and low score groups determined by the true interaction score Σ_{j=1}^{p} γ_j Z_j are shown in the left panel of Figure 5. It can be seen that the modified covariate approach, coupled with the nearest shrunken centroid classifier, provided a reasonable stratification for differentiating the treatment effect.
Survival Responses
For survival responses, we used the same simulation design as for the continuous and binary responses. Specifically, we generated N independent survival times X̃ from the regression model, where all the model parameters were the same as in the previous subsections. The censoring time was generated from the uniform distribution U(0, ξ0), where ξ0 was selected to induce a 25% censoring rate. For each simulated data set, we implemented three methods:

1. full regression: The first method was to fit a multivariate Cox regression with full main effect and covariate/treatment interaction terms, i.e., the dimension of the covariate matrix was 2p + 2. The Lasso was used to select the variables.

2. new: The second method was to fit a multivariate Cox regression with the modified covariates W* = (1, Z)'·T/2 as the covariates. Again, the Lasso was used for selecting variables having a treatment interaction.
3. new/augmented: the proposed method with efficiency augmentation. To model E{M(τ) | Z}, we used linear regression with lasso regularization.
To evaluate the performance of the resulting score measuring the individualized treatment effect, we estimated the Spearman rank correlation coefficient between the estimated score and the "true" treatment effect based on the survival probability at t0 = 5. In the top left, top right, bottom left and bottom right panels of Figure 6, we plotted the boxplots of the correlation coefficients between the estimated scores γ̂'Z and ∆(Z) under simulation settings (1), (2), (3) and (4), respectively. The patterns were similar to those for the continuous and binary responses and confirmed our finding that the efficiency-augmented method performed the best among the three methods in general.
Examples
It is known that breast cancer can be classified into different subtypes using gene expression profiles, and that the effective treatment may differ across subtypes of the disease [Loi et al., 2007]. In this section, we apply the proposed method to study potential interactions between gene expression levels and Tamoxifen treatment in breast cancer patients.
The data set consists of 414 patients in the cohort GSE6532, collected by Loi et al. [2007] for the purpose of characterizing ER-positive subtypes with gene expression profiles. The data set, including demographic information and gene expression levels, can be downloaded from www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE6532. Excluding patients with incomplete information, there are 268 and 125 patients receiving Tamoxifen and alternative treatments, respectively. In addition to the routine demographic information, we have 44,928 gene expression measurements for each of the 393 patients. The outcome of primary interest here is the distant metastasis free survival time, which is subject to right censoring due to incomplete follow-up. The metastasis free survival times in the two treatment groups are not statistically different, with a two-sided p value of 0.59 based on the log-rank test (Figure 7). The goal of the analysis is to construct a score using gene expression levels for identifying the subgroup of patients who may or may not benefit from the Tamoxifen treatment in terms of distant metastasis free survival. To this end, we select the first 90 patients in the Tamoxifen arm and an equal number of patients in the alternative treatment arm to form the training set, and reserve the remaining 213 patients as the independent validation set. In selecting the training and validation sets, we use the original order of the observations in the data set without additional sorting to ensure an objective analysis.
We first identify the 5,000 genes with the highest empirical variances and then construct an interaction score by fitting the Lasso-penalized Cox regression model with modified covariates based on these 5,000 genes in the training set. The Lasso penalty parameter is selected via 20-fold cross-validation. The resulting interaction score is a linear combination of the expression levels of seven genes; a low interaction score favors Tamoxifen treatment. We apply the gene score to classify the patients in the validation set into high and low score groups according to whether the score is greater than the median level. In the high score group, the distant metastasis free survival time in the Tamoxifen group is shorter than that in the alternative group, with an estimated hazard ratio of 3.52 for Tamoxifen versus non-Tamoxifen treatment (log-rank test p = 0.064). In the low score group, the distant metastasis free survival time in the Tamoxifen group is longer than that in the alternative group, with an estimated hazard ratio of 0.694 (p = 0.421). The estimated survival functions of both treatment groups are plotted in the upper panels of Figure 8. The interaction between the constructed gene score and the treatment is statistically significant in the multivariate Cox regression based on the validation set (p = 0.004).
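Two of the preprocessing steps in this example, keeping the genes with the largest empirical variances and splitting patients at the median of a fitted score, can be sketched on toy data (all sizes illustrative; the score below is a random stand-in for the fitted interaction score):

```python
import numpy as np

rng = np.random.default_rng(7)
# toy expression matrix: 180 patients x 1000 genes with heterogeneous variances
expr = rng.normal(size=(180, 1000)) * rng.uniform(0.5, 3.0, size=1000)

# step 1: keep the genes with the largest empirical variances
keep = np.argsort(expr.var(axis=0))[::-1][:100]
expr_top = expr[:, keep]

# step 2: split patients into high/low groups at the median of a score
score = expr_top @ rng.normal(size=100)     # stand-in for the gene score
high = score > np.median(score)
print(expr_top.shape, int(high.sum()))
```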
Furthermore, we implement the efficiency augmentation method and obtain a new score, based on the expression levels of eight genes. Again, we classify the patients in the validation set into high and low score groups based on the constructed gene score. In the high score group, the distant metastasis free survival time in the Tamoxifen group is shorter than that in the alternative group, with a p value of 0.158; the estimated hazard ratio is 2.29 for Tamoxifen versus non-Tamoxifen treatment. In the low score group, the distant metastasis free survival time in the Tamoxifen group is longer than that in the alternative group, with an estimated hazard ratio of 0.828; the p value from the log-rank test is not significant (p = 0.697). The estimated survival functions of both treatment groups are plotted in the middle panels of Figure 8. The separation is slightly worse than that based on the gene score constructed without augmentation. The interaction between the constructed gene score and the treatment is also statistically significant at the 0.05 level (p = 0.025).
For comparison purposes, we also fit a multivariate Cox regression model with the treatment, the gene expression levels and all treatment-gene interactions as covariates. The Lasso penalty parameter is selected via 20-fold cross-validation. The resulting gene score is based on a single gene appearing in the estimated treatment-gene interaction term of the Cox model. However, this interaction score fails to stratify the population according to the treatment effect in the validation set. The results are shown in the lower panels of Figure 8. The interaction between the constructed gene score and the treatment is not statistically significant (p = 0.29).
To further objectively examine the performance of the proposal on this data set, we randomly split the data into training and validation sets and construct the score measuring the individualized treatment effect in the training sets with three methods: "new", "new/augmented" and "full regression". Patients in the test set are then stratified into high and low score groups. We calculate the difference in the log hazard ratio for Tamoxifen versus non-Tamoxifen treatment between the high and low score groups. A positive number indicates that women in the low score group benefited more from Tamoxifen treatment than those in the high score group, as the model indicates. In Figure 9, we plot boxplots of the differences in the log hazard ratio based on 100 random splits. To speed up the computation, all scores are constructed using only the 2,500 genes with the largest empirical variances. The results indicate that the proposed method and the corresponding augmented method tend to perform better than the common full regression method, which is consistent with our previous findings from the simulation studies.
As a limitation of this example, the treatment is not randomly assigned to the patients as in a standard randomized clinical trial. Therefore, the results need to be interpreted with caution. In addition, the sample size is limited and further verification of the constructed gene score with independent data sets is needed.
Discussion
In this paper we have proposed a simple method to explore the potential interactions between treatment and a set of high dimensional covariates. The general idea is to use W(Z) · T /2 as new covariates in a regression or generalized regression model to predict the outcome variable. The resulting linear combination γ′W(Z) is then used to stratify the patient population. A simple efficiency augmentation procedure can be used to improve the performance of the method.
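A minimal numpy sketch of this recipe; the simulated data, variable names, and the ridge penalty (used here in place of the paper's lasso) are illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 10
Z = rng.normal(size=(n, p))              # covariates
T = rng.choice([-1, 1], size=n)          # randomized treatment coded +/-1
gamma_true = np.zeros(p)
gamma_true[0] = 1.0                      # only the first covariate interacts
Y = (Z @ gamma_true) * T / 2 + rng.normal(scale=0.5, size=n)

W_star = Z * (T / 2)[:, None]            # modified covariates W(Z) * T / 2

# Ridge regression of Y on the modified covariates (the paper uses lasso).
lam = 1.0
gamma_hat = np.linalg.solve(W_star.T @ W_star + lam * np.eye(p),
                            W_star.T @ Y)

score = Z @ gamma_hat                    # gamma'W(Z): stratification score
```

Sorting patients by `score` and splitting at the median reproduces the high/low stratification used throughout the examples.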
The proposed method can be used in a much broader way. For example, after creating the modified covariates W(Z) · T /2, other data mining techniques such as PAM and support vector machines can also be used to link the new covariates with the outcomes [Friedman, 1991, Tibshirani et al., Hastie and Zhu, 2006]. Most dimension reduction methods in the literature can be readily adapted to handle the potentially high dimensional covariates. For univariate analysis, we may also perform large scale hypothesis testing on the modified data to identify a list of covariates having interactions with the treatment; one could, for example, directly use the Significance Analysis of Microarrays (SAM) method [Gilbert et al., 2002] for this purpose. Extensions in these directions are promising and warrant further research.
Lastly, the proposed method can also be used to analyze data from observational studies. However, the constructed interaction score may lose the corresponding causal interpretation. On the other hand, if a reasonable propensity score model is available, then we can still implement the modified covariate approach on matched or reweighted data such that the resulting score retains the appropriate causal interpretation [Rosenbaum and Rubin, 1983].

6 Appendix

6.1 Justification of the objective function based on the working model

Under the linear working model for continuous response, let m_t(z) = E(Y^(t) | Z = z) for t = 1 and -1. The minimizer of the objective function then satisfies f*(z) = m_1(z) - m_{-1}(z) for all z ∈ Support of Z.
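The elided algebra for the continuous case can be reconstructed as follows (a sketch assuming 1:1 randomization, P(T = ±1) = 1/2, with T independent of Z):

```latex
E\left[\left\{Y - f(Z)\tfrac{T}{2}\right\}^2 \,\middle|\, Z=z\right]
  = \tfrac{1}{2}\,E\!\left[\{Y^{(1)} - f(z)/2\}^2 \mid Z=z\right]
  + \tfrac{1}{2}\,E\!\left[\{Y^{(-1)} + f(z)/2\}^2 \mid Z=z\right].
```

Differentiating with respect to f(z) and setting the derivative to zero gives f*(z) = m_1(z) − m_{-1}(z); that is, the pointwise minimizer recovers the individualized treatment effect.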
Under the logistic working model for binary response, an analogous calculation shows that the minimizer of L(f) satisfies f*(z) = 2 log{(1 + Δ(z))/(1 − Δ(z))}, where Δ(z) = m_1(z) − m_{-1}(z), for all z ∈ Support of Z; in particular, f* is a monotone transformation of the treatment effect Δ(z). Alternatively, under the logistic working model with binary response, we may focus on the objective function l̃, whose minimizer admits a similar characterization for all z ∈ Support of Z.
Under the Cox working model for survival outcome, let λ_t(u; Z) be the hazard function for X^(t) given Z for t = 1 and −1. Setting the derivative of the objective function to zero, the minimizer f*(z) satisfies, for all z ∈ Support of Z,

f*(z) = −(1/2) log [ E{Λ*(X^(1)) | Z = z} / E{Λ*(X^(−1)) | Z = z} ],

where Λ*(u) = Λ(u, f*), when the censoring rates are the same in the two arms for all given z.

6.2 Justification of the optimal a_0(z) in the efficient augmentation

Let S(y, w*, γ) be the derivative of the objective function l(y, γ′w*) with respect to γ. γ̂ is the root of an estimating equation. Similarly, the augmented estimator γ̂_a can be viewed as the root of the augmented estimating equation. Since E{T_i · a(Z_i)} = 0 due to randomization, the solution of the augmented estimating equation always converges to γ* in probability. It is straightforward to show that the asymptotic variance depends on A_0, the derivative of E{S(Y_i, W*_i, γ)} with respect to γ at γ = γ*. Selecting the optimal a(z) is therefore equivalent to minimizing the variance of the influence function; a_0(z), which satisfies the corresponding orthogonality equation for any function η(·), is the optimal augmentation term minimizing the variance of γ̂_a.
Since a_0(·) is the root of this equation, its explicit form can be derived separately for continuous, binary, and survival responses. For the survival response, the estimating equation based on the partial likelihood function is asymptotically equivalent to the estimating equation given above.
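For continuous outcomes, the augmentation amounts to removing an estimated main effect of Z from Y before the modified-covariate fit. A numpy sketch, in which an OLS main-effect model serves as an illustrative stand-in for the optimal a_0(z):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 300, 5
Z = rng.normal(size=(n, p))
T = rng.choice([-1, 1], size=n)
# large main effect of Z[:,1], small interaction through Z[:,0]
Y = 2.0 * Z[:, 1] + 0.5 * Z[:, 0] * T / 2 + rng.normal(scale=0.5, size=n)

def fit_modified(y, Z, T):
    """Least-squares fit of y on the modified covariates Z * T / 2."""
    W_star = Z * (T / 2)[:, None]
    return np.linalg.lstsq(W_star, y, rcond=None)[0]

g_plain = fit_modified(Y, Z, T)

# Augmented fit: residualize Y on Z (main effects), then refit.
beta = np.linalg.lstsq(Z, Y, rcond=None)[0]
g_aug = fit_modified(Y - Z @ beta, Z, T)
```

Both estimators target the same γ* because T is independent of Z under randomization; residualizing only removes noise from the outcome, which is the sense in which the augmentation improves efficiency.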
6.3 Lasso algorithm in the efficient augmentation
In general, the augmentation term is of the form a_0(Z_i) = W(Z_i)′ r̂(Z_i), where r̂(Z_i) is a scalar. The lasso regularized objective function can be written accordingly. In general, this lasso problem can be solved iteratively. For example, when l(·) is the log-likelihood function of the logistic regression model, we may update γ̂ iteratively by solving the standard OLS-lasso problem, where γ̃ is the current estimate of γ.

Figure 8: Survival functions of the Tamoxifen and alternative treatment groups stratified by the interaction score in the test sets: red line, Tamoxifen treatment group; black line, alternative treatment group. Upper panels: the score based on the "new" method; middle panels: the score based on the "new/augmented" method; lower panel: the score based on the "full regression" method.

Figure 9: Boxplots for differences in log(hazard ratio) between high and low risk groups based on 100 random splits on GSE6532. A large positive difference indicates that the constructed score stratifies patients well according to the individualized treatment effect.
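The OLS-lasso subproblem arising at each iteration can be solved by coordinate descent with soft-thresholding. A self-contained sketch (a generic lasso solver on simulated data, not the paper's code):

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # soft-thresholding update
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
b_true = np.zeros(p)
b_true[0] = 1.5
y = X @ b_true + rng.normal(scale=0.3, size=n)
b_hat = lasso_cd(X, y, lam=0.1)
```

For the logistic case, the same routine would be applied to the weighted working response from the current IRLS step.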
Small-Scale Die-Offs in Woodrats Support Long-Term Maintenance of Plague in the U.S. Southwest
Abstract

Our longitudinal study of plague dynamics was conducted in north-central New Mexico to identify which species in the community were infected with plague, to determine the spatial and temporal patterns of the dynamics of plague epizootics, and to describe the dynamics of Yersinia pestis infection within individual hosts. A total of 3156 fleas collected from 535 small mammals of 8 species were tested for Y. pestis DNA. Nine fleas collected from six southern plains woodrats (Neotoma micropus) and from one rock squirrel (Otospermophilus variegatus) were positive for the pla gene of Y. pestis. None of 127 fleas collected from 17 woodrat nests was positive. Hemagglutinating antibodies to the Y. pestis-specific F1 antigen were detected in 11 rodents of 6 species. All parts of the investigated area were subjected to local disappearance of woodrats. Despite the active die-offs, some woodrats always were present within the relatively limited endemic territory and apparently were never exposed to plague. Our observations suggest that small-scale die-offs in woodrats can support maintenance of plague in the active U.S. Southwestern focus.
Introduction
Understanding the epidemiology of plague requires knowledge of the natural history of this rodent- and flea-associated zoonosis. Despite a long history of field-based investigations of plague in many parts of the U.S. Southwest and Pacific Coast, some very fundamental questions remain on the ecology of Yersinia pestis in its natural reservoirs; specifically, discussions about the continuity of the plague maintenance cycle in nature have remained largely hypothetical. Relatively few studies have investigated the course of plague infection in individual rodent hosts over multiyear periods. The paucity of such studies makes it difficult to suggest ways of proving or rejecting hypotheses pertaining to the resistance of some rodent individuals to plague (Brinkerhoff et al. 2010, Friggens et al. 2010).
It is widely accepted that plague enzootic cycles occur within wild rodent populations, from which epizootics arise of varying duration and extent, and, therefore, of varying risk to humans. The enzootic hosts of plague are believed by many to be rodent species that are moderately resistant to Y. pestis or have some specific biological characteristics that prevent a complete elimination of the infection from their populations. This concept was originally proposed by scientists working in endemic areas of Asia and remains relevant today (Fenyuk 1940, Rall 1965, Gage and Kosoy 2005).
Some American researchers, including those who initiated field investigations of plague in California during the 1960s, shared this view and reported data that directly or indirectly support such a concept. For instance, occasional observations indicated that individual voles (Microtus spp.) and mice (Peromyscus spp.) can survive plague epizootics, based on repeated captures of seropositive animals, and thus presumably play a crucial role in the maintenance of this infection by continuing to produce offspring, some of which will be susceptible and develop sufficiently high bacteremias to infect fleas before succumbing to infection. According to this concept, plague occasionally spills over to other, much more susceptible hosts (epizootic hosts) that often die in rapidly spreading epizootics that infect large numbers of fleas and also pose threats to humans and other mammals (Goldenberg et al. 1964, Kartman et al. 1964, Poland and Barnes 1979, Lang and Wills 1991). Later, Davis et al. (2002) provided evidence of plague circulation in populations of Merriam's chipmunks (Tamias merriami) and dusky-footed woodrats (Neotoma fuscipes) in Ventura County, California. Observations of Y. pestis-specific antibodies in healthy-appearing rodents or previously bacteremic animals provide some interesting points for the discussion; however, this is not definitive evidence that survivors of infection with these bacteria, demonstrated by the presence of antibodies, indeed contribute to long-term persistence of plague (Salkeld and Stapp 2008). Other contributing scenarios may include long-term infection of vector-competent flea species or dispersal of fleas (and Y. pestis) among different rodent genera or rodent metapopulations in geographically overlapping distribution ranges (Gage and Kosoy 2005).
In this study, we wished to evaluate this and other mechanisms that could be responsible for maintaining plague in a specific system, including the idea that plague is maintained by low transmission rates and low mortality rates among highly susceptible hosts. We focused our efforts on a relatively small site in north-central New Mexico, which lies within a region that has been well known since the mid-1960s as one of two areas in the United States (Southwest and Pacific Coast plague foci) with relatively high levels of human and animal plague (Poland et al. 1979). Available reports demonstrated evidence of circulation of Y. pestis among local animal populations within or in close proximity to the study area. Starting in 1988, when the first human plague case was reported in this area, plague activity was documented in 15 of the 18 following years (plague activity was not reported in 1989, 1990, and 1994).
The reports of plague activity from New Mexico Department of Health included findings of Y. pestis bacterium or Y. pestis antibodies in humans, animals, and fleas. Human cases were reported in July 2001 (one case) and in November 2002 (two cases) (Perlman et al. 2003, Colman et al. 2009). Animal surveillance during the period of 1995-1997 resulted in identification of Y. pestis in two cats, one dog, seven woodrats, and three fleas (one from a deer mouse and two from woodrats). These epidemiological data presumptively supported a substantial role of woodrats in maintenance of plague in this area (Colman et al. 2009).
Our goals were to collect a sufficient amount of data on the distribution of infected or previously infected animals within the study area, and to observe noticeable differences between this particularly persistent plague system and enzootic areas where the intensity of the disease was much lower or the circulation of Y. pestis was observed only occasionally. The criteria for selection of the study sites included the following: (1) previous reports of plague activities such as human cases, laboratory-confirmed deaths in small mammals, laboratory-confirmed infection of rodent fleas, or demonstrated specific antibodies in small mammals or carnivores in this area; (2) high and stable density of rodents based on previous observations; (3) potential for rodent-human interactions; and (4) relatively easy access by a vehicle, permitting public acceptance for conducting the long-term mark-release animal studies. The study was planned and conducted in close collaboration with the New Mexico Department of Health.
The study design included a mark-release-recapture platform and provided the following types of data and estimates of parameters: (1) temporal and spatial dynamics of rodent host population; (2) interrelationships between dynamics of rodent hosts and their fleas; (3) territorial movement of individually marked rodents and their association with specific burrows; (4) detection of antibodies to Y. pestis among rodents; (5) presence of Y. pestis DNA in fleas collected from rodents and selected rodent nests; (6) bacteriological investigation of rodent carcasses found during epizootics; and (7) spatial proximity between observed rodent die-offs and laboratory-confirmed Y. pestis evidence within the rodent community. A specific emphasis was placed on investigation of two species of woodrats (Neotoma micropus and N. albigula), the most common rodent species in this area and primary suspects as sources of human plague in this area based on preliminary observations.
Study sites
The longitudinal study of plague dynamics was conducted within a residential subdivision in Santa Fe County, located in north-central New Mexico at an elevation of approximately 2000 meters. Two study sites were selected approximately one mile from each other. Overall, the sites are ecologically similar and typical of a pinyon-juniper ecosystem within a suburban area. Differences between the sites were mostly in terms of the anthropogenic modification of the environment. Study site 1 was located in close proximity to residential houses, on large-sized lots, with a significant human-wildlife interface, while study site 2 contained no man-made structures and was primarily used as a recreational hiking area for local residents. The same trapping stations (TS) were used in all consecutive trapping sessions, except TS 41-60 within site 1, which were added to the study starting June 2003. TS were selected by visual identification of a freshly occupied woodrat den within both sites. Global Positioning System coordinates were collected for each TS and clearly mapped to geographically identify the collection point for each captured rodent. During each trapping episode, the status of woodrat burrows in terms of animal occupancy and activity was recorded by trapping success, presence of fresh feces, freshly disturbed soil, and absence of spider webs.
Rodents were live-trapped using a combination of three traps of different sizes that were placed at each TS: (1) a small Sherman trap (2 × 2.5 × 6.5 in.) targeting mouse-sized rodents (HB Sherman Trap Company, Tallahassee, FL); (2) a large Sherman trap (3 × 3.5 × 9 in.) targeting rat-sized rodents (HB Sherman Trap Company); and (3) a Tomahawk trap (4 × 4 × 10 in.) targeting squirrel-sized rodents and rabbits (Tomahawk Live Trap Company, Hazelhurst, WI). Traps were baited with a mixture of oats and peanut butter, set in the afternoon, and checked the following morning and afternoon. Cotton balls were placed in each trap during cold weather to reduce the potential for hypothermia. All traps with rodents were collected in individual plastic bags and brought to the nearby processing site.
Processing rodents
Rodents were anesthetized in a chamber with a mixture of isoflurane inhalation anesthetic (Halocarbon Products Corporation, Peachtree Corners, GA) and oxygen. The isoflurane mixture was delivered to the chamber through a nonrebreathing portable vaporizer (Seven Seven Anesthesia, Fort Collins, CO). Animals were either removed from the larger traps before placing a face mask for anesthesia on their heads or, if sufficiently small, the entire trap was placed into the induction chamber (a modified tool box with a transparent plastic window). When an anesthetized animal lost its coordination, it was removed from the induction chamber and the face mask was placed on the animal's nose to maintain proper anesthetic exposure.
Blood was collected by retro-orbital bleed using heparinized microhematocrit capillary tubes coated with ethylenediaminetetraacetic acid (MWI Veterinary Supply Co., Denver, CO). Nobuto filter paper strips (Toyo Roshi Kaisha Ltd., Tokyo, Japan) with a 5- by 30-mm section for adsorption of 0.1 mL of whole blood were also used for serological analysis. In the event of an occasional death of an animal in a trap or during a procedure, spleen and liver samples were aseptically collected for culturing and immunofluorescence testing. All samples were quickly placed on dry ice and held until being returned to the laboratory, where they were stored at -80°C until tested. Species and sex identification, reproductive status, body measurements, and weight were recorded for each animal. Captured animals were marked individually by ear tag and/or subcutaneous transponder (AVID, Folsom, LA) and were released at the location of capture. Animal handling procedures were approved by CDC's Division of Vector-Borne Diseases Institutional Animal Care and Use Committee (protocol number 06-008).
Flea collection
The rodents were thoroughly combed over a metal pan, and ectoparasites were collected in one tube per animal and placed on dry ice until arrival at the CDC laboratory, where they were stored at -80°C until identified and tested. In addition to fleas collected from small mammals, fleas were collected from 20 woodrat nests within site 2 at the end of the study. The dens were partially or fully disassembled to find nest material, and the associated burrows were swabbed using a piece of white flannel cloth attached to the end of a steel cable (plumber's snake). The nest material and the cloth were placed separately into plastic bags for later flea recovery.
Fleas were identified to species and sex using published taxonomic keys (Hubbard 1947, Stark 1970, Hopkins and Rothschild 1971, Furman and Catts 1982). Fleas were evaluated for the presence of a fresh blood meal (visual observation of blood in the digestive tract) using a dissecting microscope (Olympus SZ-11, Center Valley, PA).
Serology
Plague serological analysis was conducted by using a passive hemagglutination (PHA) assay, which is based on detection of antibodies against the capsule antigen (F1) specific to Y. pestis (Chu 2000). The antibodies were eluted from dried Nobuto strips overnight into a sodium borate buffer solution. Twenty-five microliters of the eluent was used for the agglutination test. Because of possible nonspecific cross-reactivity, all PHA-positive samples were further tested with a passive hemagglutination inhibition (PHI) test. Final endpoint titers were determined by subtracting the PHI wells showing agglutination from the last PHA well showing full agglutination.
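The endpoint-titer bookkeeping can be sketched as follows; the 1:4 starting dilution and the two-fold dilution series are illustrative assumptions for the example, not values stated in the study:

```python
def endpoint_titer(last_pha_well, phi_wells, start_dilution=4):
    """Endpoint well = last PHA well with full agglutination minus the
    number of PHI wells still agglutinating. Titers follow an assumed
    two-fold series: well 1 -> 1:start_dilution, well 2 -> 1:(2*start), ...
    Returns the reciprocal titer, or None if no wells remain after
    subtraction (reaction judged nonspecific)."""
    endpoint = last_pha_well - phi_wells
    if endpoint < 1:
        return None
    return start_dilution * 2 ** (endpoint - 1)
```

For example, full PHA agglutination through well 6 with inhibition in 2 PHI wells gives an endpoint of well 4, i.e., a reciprocal titer of 32 under these assumed dilutions.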
Bacteriological analysis
Blood samples from live animals and triturated tissues from dead animals were plated on agar with sheep blood and incubated at 28°C for 48 h to recover Y. pestis, if present. The plates were examined for characteristic colonies after 24 and 48 h. The bacteriophage lysis test was performed on a pure culture to confirm the presence of the plague bacilli using a plague bacteriophage-impregnated filter paper strip (Chu 2000). Tissue samples (spleen and liver) from dead animals were also tested for plague by direct fluorescent antibody (DFA) assay using polyclonal rabbit anti-F1 serum tagged with fluorescein isothiocyanate (Chu 2000). Antigen-positive samples were further tested by mouse inoculation to determine whether viable Y. pestis was present.
Detection of Y. pestis DNA in fleas
All fleas were individually analyzed for Y. pestis. Fleas were triturated in sterile tubes containing 200 µL of sterile brain-heart infusion broth and 3-mm sterile glass beads in a mixer mill (Retsch MM300, Newton, PA) at 20 bps for 8-10 min. The triturate was then placed in a heat block and boiled at 95°C for 10 min to heat extract the DNA. After trituration, 2.5 µL of the flea suspension was used for polymerase chain reaction (PCR) testing. The PCR primers were Yp1 (5′-ATCTTACTTTCCGTGAGAAG-3′) and Yp2 (5′-CTTGGATGTTGAGCTTCCTA-3′), which targeted a 478-bp fragment of the pla gene, and PCR procedures followed Stevenson et al. (2003).
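A quick sanity check on the quoted primer sequences (GC content and the simple Wallace-rule melting estimate, an approximation that is only reasonable for short oligos like these):

```python
# pla primer sequences as quoted in the text
yp1 = "ATCTTACTTTCCGTGAGAAG"
yp2 = "CTTGGATGTTGAGCTTCCTA"

def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Wallace rule: Tm ~= 2(A+T) + 4(G+C) degrees C, for ~20-mers."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc
```

Both primers are 20-mers with 40-45% GC and closely matched Wallace estimates (56 and 58°C), consistent with a conventional primer pair.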
variegatus (14). The total number of samples collected from recaptured animals, including the first sample, was 851. The number of samples collected from individual animals varied from 1 to 19. Thirty-one individual woodrats (Neotoma) were captured and sampled more than five times.
Woodrat populations
Woodrats of two species (N. albigula and N. micropus) comprised 41% of small mammals captured in this area (490/1194). The dominance of woodrats in the rodent community varied by year (44-59%) and season (55-80%), but woodrats remained prominent during the entire period of the study. The demography of woodrats within the study area was previously published (Morway et al. 2008).
Fleas on mammals
Of 1147 live-captured small mammals (including recaptures), which were carefully checked for ectoparasites, fleas were found on 535 (46.6%) animals. The level of flea infestation varied significantly between the observed mammalian species (Table 2). The highest rates were recorded for O. variegatus (57/75; 76.0%) and S. audubonii (17/22; 77.3%). The woodrat species (N. albigula and N. micropus), the most prevalent species of the rodent community, also had a high rate of flea infestation (combined 414/668; 62%). Fleas were found on only 7.0% of mice of the genus Peromyscus. Rodents of some species (M. musculus, P. flavus, and R. megalotis) were free from fleas, which is not unusual for these species, although the number of these mice captured was low.
Nineteen species of fleas were identified among those collected from small mammals (Table 3). The largest numbers of fleas belonged to three species: Orchopeas sexdentatus (1195) and O. neotomae (311), both typical for woodrats, and Oropsylla montana, typical for rock squirrels. Overall, most mammals were infested with the fleas typical for that host. Considering prevalence, O. sexdentatus (68.4%) and O. neotomae (18.5%) had the highest rates on woodrats collectively (N. albigula and N. micropus).
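The infestation rates above are simple binomial proportions; a Wilson score interval (not reported in the text) gives a sense of their precision given the sample sizes. A small sketch using the quoted counts:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# counts quoted in the text: (infested, examined)
counts = {
    "O. variegatus": (57, 75),
    "S. audubonii": (17, 22),
    "Neotoma spp. (combined)": (414, 668),
}
for species, (k, n) in counts.items():
    lo, hi = wilson_ci(k, n)
    print(f"{species}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The intervals for the smaller samples (e.g., 22 S. audubonii) are wide, which is worth keeping in mind when comparing infestation rates across species.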
Fleas in woodrat nests
One hundred twenty-seven fleas were collected from 17 of the 20 (85%) woodrat nests inspected. The number of fleas collected from a nest varied from zero to 29. The most common flea species collected from the woodrat nests were Anomiopsyllus nudatus (71/127, 56%) and Megarthroglossus divisus (48/127, 38%), both of which belong to genera known to be found commonly in nests (Holland 1985). Only five (4%) fleas of O. sexdentatus, the most common species on woodrat hosts, were found in woodrat nests. Three more flea species were represented by single individuals: Amaradix euphorbia, Epitedia wenmanni, and Malaraeus sinomus. Y. pestis antigen-positive tissues were identified by DFA in carcasses of four dead animals found near trapping sites. Three plague-positive dead animals belonged to the woodrat species N. albigula (2) and N. micropus (1), and one to S. audubonii. Antigen detection produced positive laboratory results (Figs. 1 and 2).

Site 1. The number of TS with no rodents captured during this period varied from 4 to 7 (7-12%), and these vacant TS were scattered across the study site without clear clustering. The first incidence of plague-related disappearance of woodrats from some TS was observed in March 2003, when fleas from two woodrats collected close to each other in the north-eastern part of the site (Fig. 1; TS 24 and 26) were found to be PCR positive. In May 2003, two more positive fleas were found from a woodrat captured from one of these nests and from a neighboring nest (Fig. 1; TS 24 and 23). In addition, a woodrat from one of these nests (Fig. 1; TS 23) was serologically positive.
Beginning in June 2003, nearly half (48%) of the nests within the site were found free of woodrats, with the concentration of empty nests particularly high in the north-eastern and eastern parts of the site (Figs. 1 and 3; TS 16-21 and 42-49). Two dead woodrats were found within these TS (Figs. 1 and 3; TS 45 and 46), and both carcasses tested positive by DFA. Only three woodrats were captured in the north-eastern part of study site 1, but one of those was serologically positive. By July 2003, the area that was completely devoid of woodrats had extended to cover 39 empty nests within study site 1. The same situation remained in September 2003. During the period from July to September 2003, two serologically positive grasshopper mice and a positive flea from a woodrat were detected close to the die-off area (Figs. 1 and 3; TS 15, 16, and 53).
In October 2003, most of the woodrat nests remained empty (38 of 60, 63%), but three nests, which had been deserted during the summer, were reoccupied. In November and December 2003, the area of woodrat die-off extended to the southern part of the site (Figs. 1 and 3; TS 1 to 10). Overall, 42 of 60 nests (70%) were unoccupied. Two carcasses of woodrats, which tested positive for Y. pestis antigen, were found near TS 50 and 51 in the western part of the site, the same nests which had been recolonized in October 2003.

Site 2. During the period from May to October 2003, 13 of 25 (52%) TS were occupied, and all vacant nests were distributed across the site with no clusters or trends noted. In November 2003, a cluster of nine nests (Fig. 2; TS 5-5 to 5-10) was found free of rats, although no plague-positive specimens were identified. During 2004, some of these same nests were recolonized, but by April 2005, the reoccupied nests were again found empty and a plague-positive animal was found in the area (Fig. 2; TS 5-2). In the summer of 2005, woodrats disappeared from 6 nests in the central part of the site (TS 4-5 to 4-10) in June and from 10 nests in August (TS 4-1 to 4-7 and 5-1 to 5-3) (Fig. 2). A positive animal was also found in this area. At the same time, two nests (Fig. 2; TS 5-8 and 5-9) in the very southern part had been reoccupied. By September 2005, only three of 25 (12%) nests were occupied. In October 2005 (the last time point of the study), four nests had been recolonized by woodrats.
Discussion
The purpose of this study was to better define woodrat population dynamics that might contribute to the long-term persistence of plague activity within small-scale ecosystems. Previous reports were based on a single time frame: descriptions of individual human plague case investigations and associated small mammal and flea observations (Perlman et al. 2003, Colman et al. 2009). In this study, data were collected for key parameters of the plague system (rodent population and flea ectoparasite dynamics, ectoparasite population dynamics in nests, and ectoparasite and rodent infection rates) in a systematic long-term study. Longitudinal studies of zoonotic pathogens and their hosts using mark-release-recapture techniques are widely regarded as indispensable approaches for understanding the population and temporal dynamics of zoonotic infections.
Just how plague manages to persist over long periods (years to decades) of time in a given ecosystem despite massive die-offs of some host species, and how it persists through lengthy quiescent periods, is poorly understood and worthy of investigation. Generally speaking, the main competing hypotheses to explain how plague can survive during epizootics have been summarized in various reviews as follows: (1) resistant rodent species remain chronically infected with plague (Kartman et al. 1964, Poland and Barnes 1979); (2) Y. pestis can persist in a chronic form (e.g., in granulomas), being activated or disseminated within individuals by a change in the immune status or physiologic condition of the animal, thus allowing infection of feeding flea vectors (Gage and Kosoy 2006); (3) low-virulence variants of Y. pestis can circulate among rodent hosts and be transformed into highly virulent variants under specific environmental conditions (Domaradsky 1999); (4) the pathogen can be maintained in the guts of fleas for periods of time sufficient for bridging periods between epizootics in mammals; (5) a high-diversity rodent community is required for persistence of plague by switching host species; (6) a population of highly susceptible hosts occurs over a sufficiently large area that at least some subpopulations within a larger metapopulation survive and continue to reproduce and later are available to be infected; and (7) Y. pestis can persist in soil or in nest material within rodent burrows, possibly in a dormant, uncultivable form or in association with soil protozoa or nematodes (Domaradsky 1999, Gage and Kosoy 2006).
We demonstrated that the transmission of a highly virulent strain of Y. pestis between rodents indeed persisted over a multiyear study period within the relatively small study area. This, then, may not be a situation in which plague survives in a "cryptic" form for years or decades without a visible manifestation.
Our study also has not identified any new evidence that the rodent species studied are fully or partially resistant to plague. The mere presence of some small rodents (Peromyscus and Dipodomys species) in the vicinity of woodrat nests affected by plague epizootics is not sufficient evidence to conclude that these animals contributed to the maintenance of plague circulation after a die-off of woodrats. These rodents had very low numbers of fleas on them, reducing their chance of exposure and possibly explaining their survival for a prolonged period of time after visiting, or cohabitating in, die-off areas. More likely, these rodents did not even go inside woodrat nests.
Our data do not support the hypothesis that maintenance of plague in this area involves the participation of multiple rodent species. Nonetheless, our data cannot refute the hypothesis that high diversity in a rodent community is a crucial factor for persistence of the infection by switching host species. Although the number of rodent species recorded at least once within the study area was quite high (14 species), overall this rodent community was characterized by the strong dominance of woodrats, particularly by one woodrat species, N. micropus. Although rock squirrels (O. variegatus) are often recognized as principal hosts of plague and the major mammalian source of human infection in some areas of the U.S. Southwest, and are widespread across New Mexico (including Santa Fe County), they were few in number in this particular area and were trapped only in low numbers during our study.
Deer mice were found (42 of 1194 small mammal individuals) during this study, but the observed low level of flea infestation suggests their role as active players in plague transmission in this ecological system is minimal. Detection of Y. pestis-antibodies in the blood of mice (Peromyscus sp. and Onychomys sp.) can serve as a good demonstration of potential exposure of these mice to the plague pathogen; however, it does not present sufficient evidence of their significant involvement in the transmission cycle. The range of rodent species that can produce Y. pestis bacteremia at levels sufficient for flea transmission is likely narrow compared to those rodent species that can develop Y. pestis-specific antibody (Gage and Kosoy 2006).
Overall, our data suggest that the populations of woodrats contribute substantially to the continuous transmission of plague and allow its persistence in this ecosystem, regardless of observed die-offs among woodrats. Two important observations were made during the study period of 2.5 years. First, all parts of the investigated area were subjected to epizootics followed by a local disappearance of woodrats (Fig. 4). Second, despite the active die-offs, some woodrats always were present within the relatively limited endemic territory and apparently were never exposed to plague (negative antibody titers). The number of nests occupied by woodrats during this period varied from 18 (30%) to 56 (93%) of the monitored 60 TS within site 1 and from 4 (16%) to 17 (68%) of the 25 TS within site 2 (Fig. 3).
One plausible hypothesis for explaining the persistence of plague is the long-term survival of Y. pestis in the guts of infected fleas. Apparently, from our observations, some woodrats died after reoccupying the nests emptied by recent plague epizootics. For example, at site 1, two plague-positive carcasses were found in November 2003 near nests that were recolonized just a month earlier after a die-off was observed there in the summer months. It would be logical to expect that infectious fleas survived within the nest and could contribute to the infection of the woodrats. Although we cannot exclude the possibility that the woodrats died after exposure to infected substrate within the nest, we can perhaps assume the survival of infected fleas within the abandoned nests for a period of weeks to a few months (Gage and Kosoy 2005). However, the length of survival of infectious fleas probably did not exceed this period, since several recolonizations of the nests by woodrats after 3-4 months or more did not lead to the death of the newcomers.
Likely, a successful recolonization of nests by woodrats could not happen if infectious fleas survive for a longer period. Our study demonstrated that the species composition of fleas in the investigated nests was drastically different from those collected on hosts. While O. sexdentatus was the most common flea species (68.4%; 1341/1961) on hosts, only 3.9% (5/127) of the fleas collected from woodrat nests and burrows belonged to this species. The two genera most commonly recovered from the nests and burrows were Anomiopsyllus and Megarthroglossus, both of which are known to be ''nest fleas'' that spend the majority of their time in this habitat rather than on their hosts as does O. sexdentatus (Stark 1970).
The question of how plague can be maintained within a relatively small area is still open to debate, but our longitudinal study has provided insights that support the view that a complex of woodrat nests can support continuous small-scale plague die-offs within this endemic territory with little, or minimal, contributions from other species. The slow movement of plague between patches can be explained by the strong territoriality of woodrats.
Our observations demonstrated many recaptures of the same woodrats at the same dens over several years, suggesting that individual woodrats may occupy one den throughout their entire adult life. Other studies have shown that the availability of a nest in close proximity to another den has not resulted in exchange of the occupants (Braun and Mares 1989). Most of the woodrat dens in this particular study area are quite large (Fig. 4), and construction of such dens is undoubtedly time and labor intensive. Radiocarbon dating indicated that woodrat nests can be preserved up to 40,000 years, and although woodrats seem to abandon their middens (an archeological term meaning garbage pile), after a few years or decades they can build new middens on top of old ones (Jackson, personal communication; Jackson et al. 2005). Since dens are continuously used by many generations of woodrats, it is very unlikely that these dwellings are voluntarily deserted by their occupants.
Local die-offs of woodrats can serve as an indicator of active plague epizootics. This assumption is based on several points: (1) populations of woodrats are relatively stable and not subjected to dramatic fluctuations observed in other small rodents (e.g., Peromyscus mice); (2) deaths from other sources, for example, from a predator, usually do not occur simultaneously in many neighboring nests; (3) the plague pathogen was detected in dead woodrats found close to the inactive nests; and (4) no other pathogen has been proven to cause massive die-offs of woodrats.
Spatial structure of rodent populations has been recognized as an important factor for the invasion and spread of infectious agents (Rotshild 1975, Keeling 1999, Wilschut et al. 2015). Analyzing the distribution of the burrows of great gerbils (Rhombomys opimus) in Kazakhstan, Wilschut et al. (2015) concluded that spatial clustering of occupied burrows should be considered to assess its significance for plague transmission. In another study, an investigation of plague outbreaks in prairie dog (Cynomys ludovicianus) populations, the authors argued that plague can persist in the highly susceptible hosts because their movement is very constrained (Salkeld et al. 2010).
Theoretical modeling by Caraco et al. (2001) clearly demonstrated that the spread of vector-borne infection is particularly slow when host spatial heterogeneity (small neighborhood size and clumping) limits the vector's advance. The conservative territoriality of both N. albigula and N. micropus (Macedo and Mares 1989, Braun and Mares 1989) also may have significant implications for persistence of plague in their populations. An important parameter for such a model is the survival time of the pathogen in the ''nest environment.'' This time period should be long enough (months) to support maintenance of plague in the area, but not so extensive (years) that recolonization of the nests by new naive animals is prevented. In the latter case, migrant individuals would be quickly killed by bites of infectious fleas or from exposure to infected nest substrate (''hot nest'').
Ecological analysis can provide enough insights to explain the long-term maintenance of plague in some relatively small areas characterized by specific environmental conditions, so-called ''natural foci of plague.'' Because the micro-ecosystems of such foci may vary widely, further longitudinal studies are badly needed. Such studies also can provide information that can be helpful in designing improved prevention and control measures tailored to specific locations.
Effects of local decoherence on quantum critical metrology
The diverging responses to parameter variations of systems at quantum critical points motivate schemes of quantum metrology that feature sub-Heisenberg scaling of the sensitivity with the system size (e.g., the number of particles). This sensitivity enhancement is fundamentally rooted in the formation of Schrödinger cat states, or macroscopic superposition states at the quantum critical points. The cat states, however, are fragile to decoherence caused by local noises on individual particles or coupling to local environments, since the local decoherence of any particle would cause the collapse of the whole cat state. Therefore, it is unclear whether the sub-Heisenberg scaling of quantum critical metrology is robust against the local decoherence. Here we study the effects of local decoherence on the quantum critical metrology, using a one-dimensional transverse-field Ising model as a representative example. Based on a previous work [Phys. Rev. Lett. 94, 047201 (2005)] on the critical behaviors of the noisy Ising model, which shows that the universality class of the quantum criticality is modified by the decoherence, we find that the standard quantum limit is recovered by the single-particle decoherence, which is equivalent to local quantum measurement conducted by the environment and destroys the many-body entanglement in the ground state at the quantum critical point. Following the renormalization group analysis [Phys. Rev. B 69, 054426 (2004)], we argue that the noise effects on quantum critical metrology should be universal. This work demonstrates the importance of protecting macroscopic quantum coherence for quantum sensing based on critical behaviors.
I. INTRODUCTION
Quantum metrology distinguishes parameters using the distinguishability of quantum states [1,2]. Quantum states that have suppressed fluctuations of certain observables (such as squeezed states and globally entangled states) and quantum evolutions that are sensitive to parameter changes can enhance the sensitivity of parameter estimation [3-5]. An N-body entangled state, e.g., a GHZ state, can realize a sensitivity with the Heisenberg-limit scaling 1/N, far beyond the standard-quantum-limit scaling 1/√N for a product state [6]. Systems around quantum critical points have many-body entanglement [7,8] and their evolutions have high susceptibility to external fields [9,10]. These features motivate schemes of using quantum critical systems for parameter estimation [11-14], known as quantum critical metrology (QCM). Approaching the critical point, the ground-state fidelity susceptibility [12] presents a super-Heisenberg scaling with the number of particles (N^α, α > 1), which alludes to a sensitivity beyond the Heisenberg limit N^{-1} [15-17]. Further analysis that accounts for the time consumed by the evolution shows that the parameter sensitivity actually has a sub-Heisenberg scaling (N^{-α}, 1/2 < α < 1), lying between the standard quantum limit and the Heisenberg limit [18,19], which nonetheless still represents a significant enhancement of sensitivity.
Decoherence due to coupling to environments, however, may reduce or even eliminate the sensitivity enhancement by many-body entanglement. For a finite-N non-interacting system prepared in a macroscopic superposition state (a Schrödinger cat state), the environmental noise sets the best possible scaling between the standard quantum limit and the Heisenberg limit [20,21]; even worse, in the thermodynamic limit N → ∞, an infinitesimal noise can reduce the scaling from the Heisenberg limit to the standard quantum limit [22-25]. The QCM, different from the entanglement-based metrology using interaction-free systems [1,6,23], takes advantage of the fact that both the quantum state and the evolution in the sensing process are based on the same many-body system [14,18,26]. A natural question for QCM is: how would the noise affect the sensitivity scaling? One clue is that the quantum entanglement of the ground state at the critical point, due to the formation of a long-range order, is fragile to local measurements by environments [27] (which is the underlying mechanism of spontaneous symmetry breaking [28]). Recently, an intriguing study on the p-body Markovian dephasing dynamics of N-spin GHZ states evolving under a k-body Hamiltonian shows an N^{-(k-p/2)} scaling of the estimation error [29].
Here we study the effects of local decoherence on QCM, focusing on the scaling of sensitivity with the number of particles. We first consider the one-dimensional transverse-field Ising model as a representative example. This model has exact solutions and has been used to show that the sensitivity of parameter estimation is enhanced by the many-body entanglement at the critical point [11-14]. To investigate the effects of local decoherence, we couple the spins to local bosonic environments. The coupling to local boson baths dramatically modifies the phase diagram of the model [30]. Consistent with the modified phase diagram, we find that the sub-Heisenberg scaling of the sensitivity is reduced to the standard quantum limit. Then, using the universal scaling law established in Ref. [31] by renormalization group analysis, we demonstrate that the suppression of the criticality-enhanced sensitivity by local decoherence applies to many universality classes of quantum phase transitions whose low-energy excitations are described by a φ^4 effective field theory.
II. QCM AND APPLICATION TO ISING CHAINS
Consider a parameter J of a system with a quantum critical point J_C. We assume the quantum phase transition is caused by the formation of a long-range order characterized by a local order parameter M. Near the critical point, the correlation length diverges as ξ ∼ |J − J_C|^{−ν} and therefore becomes the only relevant scale. According to the scaling hypothesis, thermodynamic quantities scale by power laws with the correlation length; examples are the order parameter M ∼ ξ^{−β/ν} and the susceptibility χ = ∂M/∂h ∼ ξ^{γ/ν}, where h is an external field coupled to the order parameter. The diverging susceptibility at the critical point is the basis of QCM for estimating the parameter h [18]. First, the system is prepared in the ground state at the quantum critical point; then, a small field h is applied; finally, after a free evolution of time t, a quantum measurement is performed and the result is compared with that obtained without the field h. The sensitivity is defined as the smallest h that yields a measurement difference greater than the quantum fluctuation for an evolution time t, i.e., η_h = h_min √t. The sensitivity has a theoretical lower bound, known as the Cramér-Rao bound [32], in which the quantum Fisher information F(h) is related to the spectral function χ''(q = 0, ω) = πN Σ_{n≠0} |⟨0|M|n⟩|² δ(ω − E_n) [33]. Here |0⟩ and |n⟩ are the ground state and the n-th excited state, respectively, and E_n is the excitation energy. The critical behaviors are determined by the low-energy excitations. Around the critical point, the gap of the low-energy excitations scales with the correlation length as Δ ∼ ξ^{−z}, where z is the dynamic critical exponent. Applying this scaling relation and the fluctuation-dissipation theorem χ = ∫ (dω/πω) χ''(q = 0, ω) ∼ ξ^{γ/ν} to the Fisher information, we get F(h) ∼ N t² ξ^{γ/ν−z} for t < ξ^z (Eq. (2a)) and F(h) ∼ N ξ^{γ/ν+z} in the long-time limit (Eq. (2b)) [18]. Specifically, we consider a one-dimensional transverse-field Ising model with the Hamiltonian Ĥ = −J Σ_i σ^z_i σ^z_{i+1} − B Σ_i σ^x_i, where σ^{x/y/z}_i are the Pauli matrices of the i-th spin along the x/y/z directions.
The order parameter operator is M ≡ (1/N) Σ_{i=1}^{N} σ^z_i, and the parameter to be estimated couples to the spins through hN M = h Σ_i σ^z_i. This model has an exact solution, and the critical point and the critical exponents are J_C = |B|, γ = 7/4, ν = 1, and z = 1 [34]. Applying these exponents to Eq. (2b), we obtain a super-Heisenberg quantum Fisher information scaling [35] F(h) ∼ N^{15/4} in the long-time limit (t > N), where the condition ξ ∼ N for the one-dimensional system has been used. The super-Heisenberg scaling comes from the fact that the evolution time is absorbed into the scaling of N [18]. If the evolution time t < ξ^z, the scaling of the quantum Fisher information becomes F(h) ∼ t² N^{7/4}, which yields a sub-Heisenberg limit η_h ∼ t^{−1/2} N^{−7/8} [18]. To relate the sensitivity enhancement to many-body entanglement, we use the average variance N_e ≡ N Var(M) to characterize the multipartite entanglement [33]; it measures the number of neighboring spins that are entangled. The average variance is related to the two-site entanglement, which can be used to characterize quantum phase transitions [36]. The Fisher information can be written as F(h) ≈ 4N t² N_e. As proved in Refs. [37,38], F(h)/(4t²) ≤ Nk if the state is k-producible (i.e., there are at most k neighboring particles entangled), so the critical ground state has at least N_e ∼ N^{3/4} neighboring particles entangled. The fact that N_e → O(N^α) with α > 0 indicates long-range correlation and hence long-range order.
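As an illustration of how N_e can be evaluated directly, the sketch below diagonalizes a small periodic chain with the transverse-field Ising Hamiltonian H = −J Σ σ^z_i σ^z_{i+1} − B Σ σ^x_i at the critical point J = B and computes N_e = N Var(M). The operator-embedding helper and the chain sizes are our own illustrative choices, not the paper's numerics.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

SZ = csr_matrix(np.array([[1.0, 0.0], [0.0, -1.0]]))
SX = csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-spin chain."""
    out = identity(1, format="csr")
    for j in range(n):
        out = kron(out, op if j == i else identity(2, format="csr"), format="csr")
    return out

def tfim_ground_state(n, J=1.0, B=1.0):
    """Ground state of H = -J sum_i sz_i sz_{i+1} - B sum_i sx_i (periodic)."""
    H = csr_matrix((2**n, 2**n))
    for i in range(n):
        H = H - J * (site_op(SZ, i, n) @ site_op(SZ, (i + 1) % n, n))
        H = H - B * site_op(SX, i, n)
    _, vecs = eigsh(H, k=1, which="SA")
    return vecs[:, 0]

def entanglement_witness(n):
    """N_e = N * Var(M) with M = (1/N) sum_i sz_i, at the critical point J = B."""
    psi = tfim_ground_state(n)
    M = site_op(SZ, 0, n)
    for i in range(1, n):
        M = M + site_op(SZ, i, n)
    M = M / n
    m1 = psi @ (M @ psi)
    m2 = psi @ (M @ (M @ psi))
    return n * (m2 - m1**2)
```

For growing n the witness increases with system size, in line with the N^{3/4} scaling quoted above, although at these tiny sizes one only sees the trend, not the exponent.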
III. EFFECTS OF LOCAL DECOHERENCE
We couple each spin of the Ising chain in Eq. (3) to an independent bosonic environment [30]. The total Hamiltonian reads Ĥ_tot = Ĥ + Σ_{i,k} g_k σ^z_i (â_{i,k} + â†_{i,k}) + Σ_{i,k} ω_k â†_{i,k} â_{i,k}, where â_{i,k} (â†_{i,k}) is the annihilation (creation) operator of the k-th mode of the bosonic bath coupled to the i-th spin, with frequency ω_k and coupling strength g_k (both assumed site independent). The noise spectrum, J(ω) = Σ_k g_k² δ(ω − ω_k), is the same at all sites by assumption. We set the environment temperature to zero and take the noise spectrum to be Ohmic, i.e., J(ω) = αω e^{−ω/ω_c}, with a cutoff ω_c and a dimensionless coupling constant α. The effects of other types of noise spectra are discussed later. Unlike studies based on non-equilibrium steady-state transitions [26,39], the formulation of critical quantum metrology here in terms of equilibrium quantum transitions (via adiabatic switching-on of couplings) allows a general critical scaling analysis.
The local decoherence can be understood as quantum measurement "performed" by the local bosonic environments. When the system state is such that a spin has expectation value ⟨σ^z_i⟩ = ±1, the boson mode k acquires a ±g_k displacement. Thus, the boson modes measure the spin σ^z_i and the spin collapses to one of the basis states. Due to the long-range entanglement, the whole spin chain collapses into a state robust against local decoherence (essentially a product state). Such a state-collapse process is the mechanism of spontaneous symmetry breaking. Parameter estimation via a product state has a sensitivity scaling at the standard quantum limit 1/√N. To quantitatively study the decoherence effects, we map the one-dimensional quantum transverse-field Ising model to a two-dimensional classical Ising model using the Suzuki-Trotter decomposition with discrete imaginary time τ = 1, ..., N_τ [40], which is exact for N_τ → ∞. After integrating out the bosonic modes, an effective Ising action of the form S = −J Σ_{i,τ} s_{i,τ} s_{i+1,τ} − Γ Σ_{i,τ} s_{i,τ} s_{i,τ+1} − (α/2) Σ_{i, τ≠τ'} s_{i,τ} s_{i,τ'}/|τ − τ'|² is obtained, where s_{i,τ} = ±1 are classical spins and Γ = −(1/2) ln(tanh B), with the lattice constant along the τ direction taken as unity. A long-range effective interaction thus emerges along the imaginary-time direction.
We reproduce the phase diagram and the critical exponents presented in Ref. [30], as shown in [41]. With the transverse field B fixed, the coupling to the local environments extends the phase boundary from a critical point to a critical line in the α-J plane. When J > J_C, the system is always in the ferromagnetic phase, as expected. When J ≤ J_C, a transition between the paramagnetic and ferromagnetic phases occurs at a critical noise strength α_C, which increases with decreasing J.
The destruction of the long-range entanglement by the local decoherence is evidenced by the decay of the equal-time spin correlation C(r, 0) of the effective model (Eq. (5)). Around the critical point, the scaling theory gives C(r ≫ 1, 0) ∼ r^{−(z+η−1)} [30]. The numerical fitting C(r ≫ 1, 0) = ar^{−b} + c yields the critical exponent z + η − 1 = 0.25(2) for the quantum critical point without decoherence and z + η − 1 = 1.0(2) for the critical point with decoherence. The faster decay of the correlation C(r, 0) in the decoherence case indicates that the coupling to the local environments destroys the multipartite entanglement. Indeed, the average variance N_e ≈ ∫_0^N dr C(r, 0) changes from the power-law scaling N_e ∼ N^{3/4} to the logarithmic scaling N_e ∼ log N.
The quantum Fisher information, expressed in terms of the spin correlation, is reduced by decoherence to F(h) ∼ N t², up to a logarithmic modification log N, at the critical point J < J_C and α = α_C (Eq. (7)). This is consistent with the scaling analysis above.
Here we have used the fact that z + η ≈ 2 and the Fisher scaling law γ/ν = 2 − η. In conclusion, the local decoherence recovers the standard quantum limit.
IV. EFFECTS OF NON-OHMIC NOISES
Above we have assumed that the noise spectrum has the Ohmic form in Eq. (4), where the scaling law z + η = 2 holds. Here we consider a more general power-law noise spectrum J(ω) = αω^s ω_c^{1−s} e^{−ω/ω_c} with a cutoff ω_c. This spectrum has density ∼ ω^s in the low-energy limit. The effective Ising model, after integrating out the bosonic modes as in Eq. (5), has a long-range interaction ∼ (τ − τ')^{−(1+s)} in the imaginary-time dimension. Previous studies of the long-range Ising model show that the critical exponents have three regimes depending on the value of s: the mean-field regime 0 < s < 2/3, where z = 2/s and η = 0; the continuous regime 2/3 ≤ s < 2, where z + η varies continuously from 3 to 5/4; and the Ising-universality regime s ≥ 2, where z = 1 and η = 1/4 [42-45]. The Fisher information F(h) ∼ N t² ξ^{2−η(s)−z(s)} increases monotonically with s. The standard quantum limit F(h) ∼ N t² is reached at the threshold s = 1, where z + η = 2. For s > 1, there is always enhancement by the quantum criticality. In particular, the noises become irrelevant when s ≥ 2, where z = 1 and η = 1/4 are constants. Physically, a bosonic bath with s ≥ 2 is essentially a gapped system and therefore has no effect on the spin decoherence at zero temperature.
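The bookkeeping in this section can be summarized by a one-line function. Assuming, as in the text, F(h) ∼ N t² ξ^{2−η−z} with ξ ∼ N for a chain, the N-scaling exponent of the Fisher information is 3 − z − η; the snippet below is only this arithmetic, not a derivation:

```python
def fisher_N_exponent(z, eta):
    """Exponent a in F(h) ~ t^2 * N^a, from F(h) ~ N t^2 xi^(2 - eta - z)
    with xi ~ N for a one-dimensional chain."""
    return 3.0 - z - eta

# Noise-free critical Ising chain (z = 1, eta = 1/4): sub-Heisenberg N^(7/4).
# Ohmic threshold (z + eta = 2): standard quantum limit N^1.
# Sub-Ohmic mean-field regime, e.g. s = 1/2 (z = 2/s = 4, eta = 0): below N^1.
```

The three cases reproduce the exponents quoted in the text: 7/4 without noise, 1 at the Ohmic threshold, and a sub-standard-quantum-limit exponent in the sub-Ohmic mean-field regime.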
On the other side of the threshold, s < 1, the local decoherence can even reduce the Fisher information below the standard quantum limit, F(h) ∼ N^{1−x} t² with x > 0. Physically, this is because the strong damping of the spins by the sub-Ohmic noise at low frequencies makes the spin dynamics insensitive to the field h (similar to the case of over-damped oscillators). Near the critical point, the correlation between the spins enhances the over-damping and therefore leads to the sub-standard-quantum-limit scaling of the sensitivity.
V. UNIVERSAL DECOHERENCE EFFECTS ON QCM
Now we demonstrate that the effects of local decoherence on the QCM is universal, using the renormalization group analysis in Ref. [31].
The low-energy excitations of a broad range of quantum critical systems can be captured by a φ^4 effective theory, described in frequency-momentum space by an action with the free propagator C_0^{−1}(q, ω) = Δ̃ + A|ω| + ω² + q² plus a quartic term μ_0 ∫ d^Dx dτ φ^4(x, τ). Here φ(x, τ) is the ordering field, D is the spatial dimension (D = 1 for the Ising chain in Eq. (5)), the coefficients of the ∇_x φ(x, τ) and ∂_τ φ(x, τ) terms are absorbed into the definitions of x and τ, A ∝ α is the noise strength, and the gap Δ̃ and the interaction strength μ_0 are phenomenological parameters that can be determined by fitting to experimental or numerical data. With the φ^4 term neglected, the free propagator already reveals a crossover of the dynamical critical exponent between z = 1 and z = 2 as the energy scale is lowered at the critical point, where Δ̃ = 0. When ω ≫ A, the resonance has a linear dispersion ω² ∼ q² (i.e., z = 1); when ω ≪ A, the dispersion becomes ω ∼ q²/A (i.e., z = 2). At the critical point, the system behavior is dominated by the low-energy excitations with divergent wavelength (q → 0); therefore z = 2.
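The z = 1 to z = 2 crossover encoded in the free propagator can be made concrete by solving ω² + Aω = q² for the characteristic frequency at the critical point (Δ̃ = 0); reading the |ω| term at a real frequency in this way is an illustrative simplification, not the paper's calculation:

```python
import numpy as np

def characteristic_freq(q, A):
    """Positive root of omega^2 + A*omega = q^2: omega ~ q for q >> A (z = 1)
    and omega ~ q^2 / A for q << A (z = 2)."""
    return (-A + np.sqrt(A**2 + 4.0 * q**2)) / 2.0
```

Evaluating the root at q ≫ A and q ≪ A recovers the two limiting dispersions quoted above.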
When the φ^4 interaction is taken into consideration, dimensional analysis shows that [φ(x, τ)] ∼ ξ^{(2−D−z)/2} [31]. Such an analysis yields an upper critical dimension D = 2 under the assumption that z = 2. When D ≥ 2, e.g., for a 2D transverse-field Ising model, ∫ d^Dx dτ φ^4 ∼ ξ^{4−D−z} ∼ O(1), and the mean-field theory that neglects the φ^4 fluctuation becomes exact. Consequently, z = 2 and η = 0 hold for D ≥ 2. Below the upper critical dimension, an ε = 2 − D expansion can be used to analyze the effect of the φ^4 term. The analysis [41] shows that the susceptibility has a universal expression C̃(q, iω) = q^{−2+η} Φ(ω/(c q^{2−η})), which leads to the universal scaling relation ω ∼ ξ^{−(2−η)} and therefore the scaling law for quantum phase transitions with decoherence effects [31]: z + η = 2.
This scaling law indicates that the equal-imaginary-time correlation C(r, 0) ∼ r^{−d} and hence F(h) ∼ N t². Therefore, the standard quantum limit is restored for noisy quantum critical systems that satisfy the scaling law (9). The quantum-to-classical mapping in Eq. (5) and the φ^4 theory (8), according to Refs. [46,47], are correct for generic quantum phase transitions as long as only the low-energy excitations (i.e., low-temperature physics) are considered. The scaling law in Eq. (9), and hence the conclusion that the local decoherence recovers the standard quantum limit of QCM, hold if the effective φ^4 action without the noise term (i) has a linear dispersion at the critical point and (ii) is real-valued (without an imaginary term, e.g., a topological θ-term). Note that the renormalization-group analysis applies also to the multicomponent φ^4 theory [31]. Therefore, the recovery of the standard quantum limit of QCM by local decoherence occurs for a broad range of universality classes of quantum phase transitions. Examples include the superfluid-insulator transitions of Bose-Hubbard models and the Néel transitions of antiferromagnetic Heisenberg models in different dimensions.
We conjecture that the conclusion may hold even more generally, for all quantum phase transitions that involve the formation of long-range orders. The enhanced sensitivity scaling of QCM results essentially from the many-body entanglement in the ground state at the critical points. The spontaneous symmetry breaking in the formation of long-range order means the many-body entanglement is fragile to local measurement or coupling to local environments, which implies that any sensitivity scaling beyond the standard quantum limit would be diminished by local decoherence.
VI. CONCLUSIONS
Using the one-dimensional transverse-field Ising model coupled to local bosonic environments as a representative example, we find that the local decoherence reduces the scaling of the sensitivity of quantum critical metrology from the sub-Heisenberg limit to the standard quantum limit. This reduction is understood through the picture that the coupling to the local environments amounts to a local measurement of the spins, which causes a globally entangled state to collapse into a product state. Using universal scaling laws, we demonstrate that the conclusion should hold for general quantum phase transitions that have a φ^4 low-energy theory. Since symmetry-breaking quantum phase transitions are in general associated with the macroscopic superposition of short-range-entangled states (such as product states) [28], the diverging susceptibility at the quantum criticality is inevitably associated with the fragility of the cat states in noisy environments.
It is intriguing to ask whether quantum criticality that does not involve symmetry breaking (e.g., criticality due to the formation of topological orders [48]) could offer QCM robust against local decoherence [49], since the topological cat states are macroscopic superpositions of locally indistinguishable, long-range-entangled states and are therefore immune to local perturbations. However, the insensitivity of topological cat states to local noises also means insensitivity to local parameters (such as a field coupled uniformly to individual particles). It remains an open, interesting question whether and how a measurement of a non-local parameter (which would require quantum measurement on a non-local basis) could be designed to exploit the local-decoherence resilience of topological quantum criticality.
Acknowledgements. This work was supported by RGC/NSFC Joint Research Scheme Project N CUHK403/16.
Appendix A: Phase diagram by Monte Carlo simulation
The phase diagram and the correlation function of the Ising action in Eq. (5) of the main text are obtained from Monte Carlo simulation. The average of an observable O({s_{i,τ}}) is estimated as ⟨O⟩ ≈ (1/L) Σ_{l=1}^{L} O({s^{(l)}_{i,τ}}), where {s^{(l)}_{i,τ}} is the l-th spin configuration sampled according to the probability distribution P_eq({s_{i,τ}}), and L is the total number of sampled configurations. When L is large enough, the sampling result becomes exact.
The sampling with the given probability distribution P_eq({s_{i,τ}}) is generated by the Metropolis algorithm, in which the (l + 1)-th spin configuration is generated from the l-th configuration s^{(l)}_{i,τ} by a stochastic walk with an acceptance probability P_s. The proper choice of this conditional probability makes the (l + 1)-th spin-configuration sampling satisfy the same equilibrium distribution P_eq as that of the l-th spin-configuration sampling.
The initial spin configuration satisfying P_eq is also generated by similar stochastic walks: starting from an arbitrary, random spin configuration, after m ≫ 1 steps of stochastic walks, the probability distribution of the spin configuration s^{(m)}_{i,τ} converges to the equilibrium distribution P_eq, as a result of the detailed balance condition.
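The stochastic walk described above can be sketched for a nearest-neighbour version of the lattice; the long-range imaginary-time interaction generated by the bath is omitted for brevity, so this is a simplified illustration of the update rule rather than the full simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(s, Jx, Jtau):
    """One Metropolis sweep over an (N, Ntau) lattice of +/-1 spins with
    periodic boundaries; a flip is accepted with probability min(1, e^(-dS))."""
    N, Ntau = s.shape
    for _ in range(N * Ntau):
        i, t = rng.integers(N), rng.integers(Ntau)
        nb = (Jx * (s[(i + 1) % N, t] + s[(i - 1) % N, t])
              + Jtau * (s[i, (t + 1) % Ntau] + s[i, (t - 1) % Ntau]))
        dS = 2.0 * s[i, t] * nb  # action change for flipping spin (i, t)
        if dS <= 0 or rng.random() < np.exp(-dS):
            s[i, t] = -s[i, t]
    return s
```

Deep in the ordered phase (couplings well above the 2D Ising critical value of about 0.44), an initially ordered configuration remains magnetized under repeated sweeps, as expected from detailed balance.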
For each fixed value of J/J C , we gradually scan the noise strength α to find the critical point α C .
The magnetic susceptibility peaks at a noise strength α_max whose value depends on the system size N (see Fig. 1(a)). Here, N_τ is chosen to be a large enough number. According to the finite-size scaling hypothesis, |α_max − α_C| scales with N as N^{−1/ν_α}. A numerical fitting α_max = α_C + aN^{−1/ν_α} yields the critical point α_C and the critical exponent ν_α (see Fig. 1(b)). The critical noise strength α_C as a function of the ferromagnetic coupling strength defines the phase diagram, shown in Fig. 2(a). In Fig. 2(b), we compare the critical correlation function between the noise-free quantum critical point (J = J_C and α_C = 0) and the noisy one (J = 0.6 J_C and α_C = 1.141). At the critical point, the scaling hypothesis dictates a scaling relation C(r ≫ 1, 0) ∼ r^{−(z+η−1)}. The numerical fitting C(r, 0) = ar^{−b} + c yields the critical exponent z + η − 1 = 0.25(2) for the noise-free critical point and z + η − 1 = 1.0(2) for the noisy one. With the noise, the correlation decays faster, indicating that the many-body correlation or entanglement is weaker.
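The finite-size fit α_max = α_C + aN^{−1/ν_α} can be reproduced with a standard nonlinear least-squares routine; the data below are synthetic, generated from the model itself around the quoted α_C = 1.141, rather than taken from the simulation:

```python
import numpy as np
from scipy.optimize import curve_fit

def fss(N, alpha_c, a, inv_nu):
    """Finite-size scaling ansatz alpha_max(N) = alpha_C + a * N^(-1/nu_alpha)."""
    return alpha_c + a * N ** (-inv_nu)

rng = np.random.default_rng(1)
Ns = np.array([16.0, 32.0, 64.0, 128.0, 256.0])
# Synthetic peak positions with small noise (illustrative, not the paper's data).
alpha_max = fss(Ns, 1.141, 0.8, 1.0) + rng.normal(0.0, 1e-3, Ns.size)

popt, pcov = curve_fit(fss, Ns, alpha_max, p0=[1.0, 1.0, 1.0])
alpha_c_fit = popt[0]  # extrapolated critical noise strength
```

The fitted intercept recovers the input α_C; in practice the quality of the extrapolation depends on how close the available N values get to the scaling regime.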
Appendix B: Renormalization group analysis
The low-energy effective field theory for the Ising model coupled to local bosonic baths is captured by the φ^4 action in Eq. (8) of the main text. From the dimensional analysis, we get [φ(x, τ)] ∼ ξ^{(2−D−z)/2}.
Optimum Radiation Fractionation Schedule in Advanced Cancer Cervix; A Study from Low Resource North Indian Cancer Center
Introduction: The primary curative treatment for women suffering from locally advanced cervical cancer includes external beam radiotherapy (EBRT) with concurrent weekly platinum-based radiosensitizing chemotherapy and brachytherapy (BT) to obtain the finest treatment outcomes. Aim: The purpose of this study is to assess tumor response following HDRBT preceded by EBRT and to identify the optimum radiation fractionation schedule for better response with tolerable radiation toxicity. Material and methods: 91 patients diagnosed with carcinoma of the uterine cervix were screened for inclusion in the study. Patients were staged according to the FIGO staging system. Each patient was placed in the lithotomy position under anesthesia, a Fletcher suite applicator was applied with the bladder and rectum pushed away, and the patient was finally shifted to the HDR treatment unit. All associations were tested using the chi-square test. Results: In our study it was found that the majority (44%) of the cases were aged 51-60 years. Vaginal bleeding and squamous cell carcinoma were found in most of the cases. The acute radiation toxicities were more frequent in the elderly age group, and the association was statistically significant (p<0.05) except for genitourinary cystitis. The age group 51-60 years was significantly associated with response (p=0.034), with a lower risk of partial response (OR=0.31 (0.10-0.95)), while the symptom backache/pain in abdomen carried a significantly higher risk of partial response (p=0.002, OR=16.24 (1.7-154.8)). Conclusion: When compared to traditional techniques, HDR can achieve very high rates of local control while lowering morbidity.
The primary curative treatment for women suffering from locally advanced cervical cancer includes external beam radiotherapy (EBRT) with concurrent weekly platinum-based radiosensitizing chemotherapy and brachytherapy (BT) to obtain the finest treatment outcomes [10]. BT enables dose escalation to the tumour and acts as a cornerstone of treatment, thereby minimizing the toxicity to nearby organs at risk (OARs). Multiple past clinical reports have concluded that BT plays an essential role in the curative treatment paradigm, as it confers both a local control and a survival advantage when compared to cohorts treated with EBRT alone as the radiation treatment modality [11,12]. BT, being a highly conformal form of radiation technique, allows delivery of high doses to the tumour and is central to obtaining optimal clinical outcomes with acceptable toxicities.
Various guidelines have been established for the treatment of cervical cancer, which also include brachytherapy [13,14]. However, these guidelines are chiefly designed for and applicable to the western world and are of limited value in low- and middle-income countries (LMICs), including India. Furthermore, LMICs have distinctive ethnic and cultural backgrounds, disease patterns, health care systems and access to treatment facilities [15]. Treatment practices are commonly influenced by regional variations in cultural and socioeconomic factors, resource availability and expertise, knowledge and technology improvements, etc., resulting in extremely heterogeneous patterns of care [16,17]. The majority of centers in India currently practice high-dose-rate (HDR) brachytherapy for cervical cancers [18].
The dose distribution is manually calculated by changing relative dwell time values until an appropriate solution is reached, with the computer used only to compute the dose distribution after the dosimetrist has agreed on the method. This method, or its combination with traditional optimization algorithms such as geometrical or dose-point optimization, necessitates time and expertise. It is necessary to distinguish between a planning method that optimizes doses based on anatomic structures and optimized planning systems that optimize doses based on the position of active dwells or a few other dose points. The final step toward fully anatomy-based conformal dose planning is to use anatomy-based optimization.
The purpose of this study is to assess tumor response following HDRBT preceded by EBRT and to identify optimum radiation fractionation schedule for better response with tolerable radiation toxicity.
Materials and Methods
This study was conducted in patients attending the outpatient department of radiotherapy at a North Indian hospital. 91 patients diagnosed with carcinoma of the uterine cervix were screened for inclusion in the study. All patients underwent complete evaluation by history taking, gynaecological examination and systemic examination. Symptoms such as vaginal bleeding and discharge, pain in the lower abdomen, backache, and difficulty in micturition and defecation were also noted. Malignancy was histologically proven through biopsy in all patients. Complete blood count, liver function tests, renal function tests, chest radiograph, and ultrasound of the abdomen and pelvis were carried out for all patients. CECT of the abdomen and pelvis or MRI of the pelvis was also done. Patients were categorized into stages on the basis of the FIGO staging system [5]. Ethical clearance was obtained from the institutional ethical committee prior to the study. Written informed consent was taken from the patients before the start of treatment (Table 1).
Treatment Allocation
For all patients enrolled in this study after histopathological confirmation of carcinoma, surface marking was done on the pelvis for teletherapy, usually with a 15 × 15 cm field size, and treatment was delivered by AP-PA fields on the telecobalt unit. Patients with a separation of more than 20 cm were treated by the four-field box technique. Two orthogonal pelvic X-rays and/or CT-assisted scanogram slices and the PLATO treatment planning system were utilized. After completion of the external beam RT, the patients were evaluated for regression of tumor and given symptomatic treatment if required. The procedure was done under strict aseptic conditions under conscious sedation. The patient was put in the lithotomy position and examined without anaesthesia for reassessment. The part was prepared, followed by short anaesthesia (ketamine). An assessment of the fornices was done to decide upon the size of ovoids to be used: half, small, medium or large.
The length of the uterus was assessed with a uterine sound. Treatment was done for uterine lengths from 4 cm to 6 cm. The Fletcher suit applicator was then applied to the patient, and adequate packing was done with gauze or placement of a tungsten retractor to push away the bladder and rectum. The patient was shifted to the HDR treatment unit, where the catheters were connected and individualized treatment was delivered by the microSelectron afterloading system.
Treatment protocol
Combined external beam therapy and high-dose-rate brachytherapy in stages IIA, IIB and IIIB, and IB > 3 cm.
EBRT-ICRT
External beam therapy was followed by intracavitary high-dose-rate (HDR) microSelectron application at a gap of 2-3 weeks after completion of external beam radiotherapy. Cases of cancer of the uterine cervix randomly received 4600 cGy in 23 fractions, 4500 cGy in 20 fractions, or 5000 cGy in 25 fractions, followed by ICRT. The HDR dose was 800 cGy at point A, repeated after 1 week, i.e., 2 fractions of HDR application with the dose at point A being 800 cGy in each. None of the patients were given chemotherapy, and all cases were untreated prior to the study period. External beam therapy was delivered by a Theratron 780C cobalt teletherapy unit and ICRT by an HDR microSelectron afterloading system using Ir-192 as the radioactive source.
Logistic Regression Analysis to Find Relationship of Treatment Response Outcome with General & Clinical Profile of Patients
The logistic regression analysis of treatment response outcome against the general and clinical profile of patients (Table 4) revealed that the minimum risk of partial response (i.e., the maximum chance of complete response) corresponds to the category with the minimum beta coefficient for each study variable, which in this case was age group 51-60 yr, complaints of burning micturition, squamous cell type and lower stage (IIa).
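The link between logistic-regression beta coefficients and the odds ratios quoted in the text is OR = exp(β). A minimal sketch follows; the coefficient values below are back-calculated from the odds ratios reported in this paper (0.31 and 16.24) purely for illustration, and are not taken from Table 4:

```python
# Illustrative only: logistic-regression beta coefficients relate to odds
# ratios as OR = exp(beta). The coefficients below are back-calculated from
# the odds ratios quoted in the text (0.31 and 16.24), not from Table 4.
import math

betas = {
    "age 51-60 yr": -1.17,             # exp(-1.17) ~ 0.31
    "backache/pain in abdomen": 2.79,  # exp(2.79) ~ 16.3
}
for variable, beta in betas.items():
    print(f"{variable}: OR = {math.exp(beta):.2f}")
```

A smaller (more negative) beta gives a smaller odds ratio, i.e., a lower risk of partial response, which is why the category with the minimum beta coefficient has the maximum chance of complete response.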
Association of Toxicities with Age
The acute radiation toxicities were more frequent in the elderly age group, and the association was found to be statistically significant (p<0.05), except for genitourinary cystitis. However, complaint, histopathology and stage did not show significant association with acute radiation toxicities, except moist desquamation with complaint, which was seen more in backache/pain in abdomen and non-specific complaints (p<0.001) (Table 5A, 5B, 5C, 5D). Confluent mucositis was seen in a higher proportion among partial response cases compared to complete response (2.9% vs 14.3%, p=0.044).
Follow up
The patients were studied according to age, presenting complaints, clinical manifestations, histology, haemoglobin, treatment response and acute radiation reactions. The patients were followed up first after 2 weeks, then at 4 weeks, and then at 8 weeks, till the completion of the study period.
Statistical analysis
The results were analyzed using descriptive statistics and making comparisons among various groups. Categorical data were summarized as proportions and percentages (%) while discrete (quantitative) as mean (SD). All the associations were tested by using chi square test. Logistic regression analysis was performed for making model of treatment response outcome with general & clinical Profile of Patients. Statistical analyses were performed using SPSS version 23.0 (SPSS Inc., Chicago, IL, USA). A value of p<0.05 was considered statistically significant.
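As a concrete sketch of the chi-square testing described above (the authors used SPSS 23.0; the counts below are hypothetical, not the study data), the Pearson chi-square for a 2×2 table of patient characteristic versus treatment response, together with the corresponding odds ratio, can be computed directly:

```python
# Illustrative sketch (the authors used SPSS 23.0): a Pearson chi-square test
# of association on a 2x2 table of a binary patient characteristic vs
# treatment response, plus the odds ratio. Counts are hypothetical.

#            complete  partial   <- treatment response
a, b = 30, 10   # e.g. age 51-60 years (hypothetical counts)
c, d = 21, 30   # other age groups (hypothetical counts)
n = a + b + c + d

# Pearson chi-square for a 2x2 table (1 degree of freedom);
# chi2 > 3.84 corresponds to two-sided p < 0.05 at 1 df.
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Odds of partial response in the first group relative to the second
odds_ratio = (b / a) / (d / c)

print(f"chi2 = {chi2:.2f}, OR = {odds_ratio:.2f}")
```

With these made-up counts the statistic exceeds the 3.84 critical value, so the association would be called significant at p<0.05, matching the paper's significance threshold.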
General & Clinical Profile of Patients
The majority (44%) of the cases were aged 51-60 years; the most common symptom was P/V bleeding (60.4%), followed by white discharge (29.7%). Other symptoms were backache/pain in abdomen (5.5%) and burning micturition (2.2%), while in 2.2% of cases symptoms were non-specific. According to histopathology, squamous cell carcinoma was found in the majority (84.6%) of cases, while in the remaining 15.4% adenocarcinoma was observed. IIIb was the most common stage, found in 70.3% of cases, while stages IIa and IIb were found in 2.2% and 27.5% of cases respectively (Table 2) (Figure 1).
Association of Treatment Response with General & Clinical Profile of Patients
The analysis to find any association of treatment response with the general and clinical profile of patients (Table 3) revealed that the age group 51-60 years was significantly associated with response (p=0.034), with a lower risk of partial response (OR=0.31 (0.10-0.95)), while the symptom backache/pain in abdomen had a significantly higher risk of partial response (p=0.002, OR=16.24 (1.7-154.8)). By histopathology, squamous cell carcinoma had a significantly lower risk of partial response (p=0.009, OR=0.22 (0.07-0.74)) compared to adenocarcinoma (p=0.009, OR=4.50 (1.36-14.90)).
Inclusion criteria: Histopathologically confirmed carcinoma of the uterine cervix.
Exclusion criteria: Patients with any kind of pelvic infection, fibroid, ascites or other concurrent systemic illness.
Discussion
By virtue of HDR brachytherapy, a high dose of radiation can be given in a shorter period of time (in the outpatient department), which reduces patient discomfort and inconvenience. Regardless of its practical benefits, HDR brachytherapy has experienced significant resistance because of worries regarding its possible toxicity and a theoretical radiobiologic disadvantage, as HDR involves a greater probability of late effects for a given level of tumor control. Crucial factors that aid in lowering the frequency of complications without compromising treatment results are fractionation and adjustment of total dose. In our study it was found that the majority (44%) of the cases were aged 51-60 years. According to GLOBOCAN reports, in India the peak age for cervical cancer incidence is 55-59 years [19]. The major symptoms discovered included P/V bleeding (60.4%) followed by white discharge (29.7%). Other symptoms were backache/pain in abdomen (5.5%) and burning micturition (2.2%), while in 2.2% of cases symptoms were non-specific. P/V bleeding and white discharge have been the common symptoms of cervical carcinoma according to Shah et al and Nganwai et al, although their percentages differ from our results (86.9% and 77.7% menstrual abnormality, and 94.2% and 92.4% abnormal vaginal discharge, respectively) [20,21].
According to histopathology squamous cell carcinoma was found in majority 84.6% cases while in remaining 15.4% cases adenocarcinoma was observed. IIIb was the most common stage as found in 70.3% cases, while stage IIa and IIb was found in 2.2% and 27.5% cases respectively. Histopathological analysis done by Bhandari et al revealed that 92.5 % were Squamous cell carcinoma (SCC), and 7.5 % were Adenocarcinoma which were again not concomitant with our results. However our results were consistent with his study in terms of stages as he also found that the common stage was 67 % IIIB [22].
Our results revealed that the age group 51-60 years was significantly associated with response (p=0.034), with a lower risk of partial response (OR=0.31 (0.10-0.95)), while the symptom backache/pain in abdomen had a significantly higher risk of partial response (p=0.002, OR=16.24 (1.7-154.8)). These results were consistent with the outcomes attained by Saibishkumar et al., who revealed that age > 50 y was linked with higher rates of no residual tumor [15], while Rahakbauw et al found no statistical relationship [23]. By histopathology, squamous cell carcinoma had a significantly lower risk of partial response (p=0.009, OR=0.22 (0.07-0.74)) compared to adenocarcinoma (p=0.009, OR=4.50 (1.36-14.90)). A similar study done by Fletcher et al. revealed that the squamous cell carcinoma group responded similarly to those with non-squamous cell carcinoma. Among stages, IIb had a significantly lower risk of partial response (p=0.008, OR=0.10 (0.01-0.76)) while IIIb had a significantly higher risk of partial response (p=0.004, OR=11.82 (1.50-93.29)) [24].
The logistic regression analysis of treatment response outcome against the general and clinical profile of patients revealed that the minimum risk of partial response (i.e., the maximum chance of complete response) corresponds to the category with the minimum beta coefficient for each study variable, which in this case was age group 51-60 years, complaints of burning micturition, squamous cell type, lower stage (IIa) and Hb level more than 10 mg/dl. Similar results were found by Rahakbauw et al, who revealed that 26-50-year-olds tended to exhibit a decreased response, by 0.87 times, compared to those older than age 50 [23].
The acute radiation toxicities were more frequent in the elderly age group, and the association was found to be statistically significant (p<0.05), except for genitourinary cystitis. However, complaint, histopathology and stage did not show significant association with acute radiation toxicities, except moist desquamation with complaint, which was seen more in backache/pain in abdomen and non-specific complaints (p<0.001). Confluent mucositis was seen in a higher proportion among partial response cases compared to complete response (2.9% vs 14.3%, p=0.044). Consistently, Kunos [25] and Laurentius et al [26] also found higher haematological toxicity in elderly patients.
In conclusion, recent advances have been incorporated in brachytherapy for cervical cancer which allow for better demarcation and coverage of the tumor, as well as improved avoidance of OARs. As a result, when compared to traditional techniques, HDR can achieve very high rates of local control while lowering morbidity. This article gives a summary of a small effort in defining an optimum radiation schedule in Indian patients who present at advanced stages and where there is a heavy patient load. Taking into account the increased hospital burden of locally advanced cancer cervix patients in the Indian context, increasing the sample size and extending the follow-up duration may produce more reliable results.
Isozymes of AMP-Deaminase in Muscles of Myasthenia Gravis Patients
Symptoms similar to those observed in Myasthenia gravis (MG) can also be detected in the case of skeletal muscle AMP-deaminase deficiency. We compared the activity and expression of AMP-deaminase (AMPD) products in skeletal muscles of MG patients and MG-free individuals. The activity of AMP-deaminase in the muscles of MG patients was significantly higher than in the controls, at 2.05 µmol/min/mg protein (±0.31). The two groups differed in the level of AMPD product expression. Furthermore, in the MG group the molecular size of the AMPD1 isoform was 90 kDa, in contrast to the MG-free group, where a 70 kDa isoform of the enzyme was present. The data suggest that the disturbances in transmission of neuronal signaling taking place in the skeletal muscles of MG patients may also change the energetic metabolism of the affected muscles by changing the molecular mass of the isoform.
Introduction
Myasthenia gravis (MG) is an acquired autoimmune disease of human neuromuscular system. MG manifests clinically as abnormally easy fatigability of several groups of skeletal muscles. Usually the symptoms begin in the group of extraocular muscles which results in ptosis and diplopia. Sometimes MG may be limited to these muscles but usually progresses and involves other muscular groups, including the respiratory system. The weakness and fatigability of the muscles usually grow worse in time (Porth and Matfin 2009).
MG is a consequence of an antibody-mediated decrease in the number of acetylcholine receptors and the resultant impairment of neuromuscular transmission. Interestingly, the majority (approximately 75 %) of MG patients present with various degrees of abnormality in the morphological structure of the persistent thymus (tumorous changes). This substantiates surgical removal of the thymus as one of the widely used treatment methods for MG (Porth and Matfin 2009).
AMP-deaminase (AMPD, EC 3.5.4.6) is an enzyme involved in purine metabolism. It catalyses the irreversible deamination of AMP to IMP, and plays an important role in the energetic metabolism of tissues, influencing the value of the adenylate energy charge in the cell (Chapman and Atkinson 1973). Moreover, AMPD participates in the purine nucleotide cycle of skeletal muscles (Lowenstein and Goodman 1978). Various tissue- and stage-specific isoforms of AMPD have been identified in humans (Kaletha and Nowak 1988; Ogasawara et al. 1982). Three main AMPD isozymes, designated as M (muscle), L (liver) and E (erythrocyte) forms, are encoded by the AMPD1, AMPD2 and AMPD3 genes, respectively (Morisaki et al. 1990). At least in rodents, neuromuscular junctions possess ecto-AMP deaminase activity, which can dissociate extracellular ATP catabolism from adenosine formation. Ecto-AMP deaminase blunts the ATP-derived adenosine A2A receptor facilitation of acetylcholine release from stimulated motor nerve endings, which may contribute to tetanic failure in myasthenic individuals (Magalhaes-Cardoso et al. 2003; Noronha-Matos et al. 2011). Myoadenylate deaminase deficiency (mAMPDD) is a frequent, relatively benign disorder of skeletal muscles. The AMPD defect in homo- and heterozygotic form occurs in 20 % of the population. Primary and acquired forms of the enzymopathy are distinguished. The acquired form of muscle AMP-deaminase deficiency results from skeletal muscle damage in diseases of the muscular and nervous systems; in these patients, reduced AMPD activity is always accompanied by decreased activity of creatine kinase and adenylate kinase. The primary form of the defect is inherited in an autosomal recessive manner and results from a change in chromosome 1 (1p13-p21). It manifests as muscle fatigue following strenuous exercise (Fishbein et al. 1978; Fishbein 1985). The deficiency is an autosomal recessive disorder resulting from mutations of AMPD1.
Its primary consequences include impairment of muscle purine metabolism and purine nucleotide cycle. Interruption of the cycle during muscle exercise decreases the adenylate energy charge of the myocyte and disturbs the rate of glycolysis and turnover of citric acid cycle (Flanagan et al. 1986;Sabina and Mahnke-Zizelman 2000;Sinkeler et al. 1987).
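For reference, the adenylate energy charge invoked above (Chapman and Atkinson 1973) is conventionally defined as

```latex
\mathrm{AEC} \;=\; \frac{[\mathrm{ATP}] + \tfrac{1}{2}[\mathrm{ADP}]}{[\mathrm{ATP}] + [\mathrm{ADP}] + [\mathrm{AMP}]}
```

so the irreversible deamination of AMP to IMP by AMPD removes AMP from the denominator, pushing the energy charge back toward 1 during periods of ATP depletion without synthesizing new ATP.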
Results showing a high expression level of the AMPD2 gene in hyperplastic, tumorous thymus relative to AMPD1 and AMPD3 (Rybakowska et al. 2015) may substantiate AMPD participation in neurodegenerative disorders (Akizu et al. 2013). Therefore, the aim of this study was to analyse the expression level and selected physicochemical and immunological properties of AMPD isolated from skeletal muscles of patients with MG and of MG-free individuals operated on in one surgery clinic.
Materials and Methods
We conducted our study on a group of 25 MG-free individuals and 25 MG patients.
The data presented include representative samples of skeletal muscles (intercostal muscles) obtained intraoperatively from MG patients (18-38 years of age, subjected to thymectomy) and MG-free individuals (60-68 years of age, subjected to lobectomy due to lung cancer). The material was immediately washed in saline solution and frozen in liquid nitrogen.
The activity of AMPD was determined colorimetrically (Chaney and Marbach 1962) in tissue homogenates. The incubation medium, in a final volume of 0.5 ml, contained 0.1 M potassium-succinate buffer, pH 6.5, with the substrate AMP at 1 mM concentration. After equilibration of temperature at 30°C, 25 µl of enzyme solution (containing about 5 µg of enzyme protein) was added to the incubation medium to start the reaction. The incubations were carried out for 15 min, and the initial velocity of the reaction was determined from the mean amount of ammonia liberated in three parallel incubations.
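As a numerical illustration of the assay arithmetic, specific activity in µmol/min/mg is the mean ammonia liberated divided by the incubation time and the protein added. The ammonia amounts below are hypothetical, chosen only to reproduce the order of magnitude reported in Results:

```python
# Sketch of the specific-activity arithmetic implied by the assay: mean
# ammonia liberated over three parallel 15-min incubations, normalized to
# the ~5 ug of enzyme protein added. Ammonia amounts are hypothetical.
ammonia_umol = [0.154, 0.150, 0.157]   # umol NH3 per incubation (hypothetical)
incubation_min = 15.0
protein_mg = 0.005                     # 5 ug enzyme protein, expressed in mg

mean_ammonia = sum(ammonia_umol) / len(ammonia_umol)
specific_activity = mean_ammonia / incubation_min / protein_mg
print(f"specific activity = {specific_activity:.2f} umol/min/mg protein")
```

With these made-up readings the result lands near the 2.05 µmol/min/mg quoted for the MG group, which is the point of the illustration.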
Protein concentration was determined according to Bradford (Bradford 1976).
Total RNA was isolated according to Chomczynski and Sacchi (1987), using 1:1 phenol-chloroform as an extracting mixture. The extracted RNA was separated on 1 % agarose gel containing 30 % formaldehyde. Expression of the AMPD genes was determined by means of RT-PCR, as described elsewhere (Roszkowska et al. 2008).
To isolate the enzyme, the tissue samples were homogenized in 3 volumes (v/w) of extraction buffer (0.089 M phosphate buffer, pH 6.5, containing 0.18 M KCl and 1 mM mercaptoethanol, with addition of 1 mM phenylmethylsulfonyl fluoride (PMSF) and trypsin inhibitor), and centrifuged (20 min at 3000×g). Subsequently, SDS-PAGE and Western blot analysis (with the use of polyclonal antibodies kindly provided by Professor R. Sabina) were performed as described elsewhere (Szydłowska et al. 2004).
The measurements of enzyme activity and mRNA levels were expressed as mean ± standard deviation (SD). Figures were prepared in the SigmaPlot program. Differences between groups were analyzed by a two-sided unpaired Student's t test.
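A minimal sketch of the two-sided unpaired Student's t test used for these group comparisons, done by hand so the pooled-variance arithmetic is visible (n = 4 per group, as in the figures; the activity values below are hypothetical, not the study data):

```python
# Sketch of a two-sided unpaired Student's t test with pooled variance
# (n = 4 per group, as in the figures). The activity values are hypothetical.
import math

mg      = [2.05, 2.30, 1.80, 2.10]  # hypothetical MG-group activities
control = [1.64, 1.55, 1.75, 1.62]  # hypothetical control activities

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):  # sum of squared deviations from the mean
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(mg), len(control)
pooled_var = (ss(mg) + ss(control)) / (n1 + n2 - 2)
t = (mean(mg) - mean(control)) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# For df = 6, |t| > 2.447 corresponds to two-sided p < 0.05
print(f"t = {t:.2f} (df = {n1 + n2 - 2})")
```

With these made-up values the statistic exceeds the 2.447 critical value for 6 degrees of freedom, i.e., the difference would be reported as significant at p < 0.05.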
The protocol of the study was approved by the Local Ethics Committee at the Medical University of Gdansk (decision no. NKBBN/229/2012).
Results
The expressions of the AMPD family genes in skeletal muscles of MG patients and MG-free individuals are presented in Fig. 1a-c. As shown in Fig. 1a, the expression of the AMPD1 gene, physiologically most intensive in mature skeletal muscles, was even more enhanced in the material from MG patients. We did not observe a similar phenomenon in the case of the AMPD3 gene; the expression of this gene in the muscles of MG patients showed a trend towards lower expression in comparison with MG-free controls (Fig. 1c). The expression of the AMPD2 gene, extremely weak in mature skeletal muscles of healthy individuals, was also weak in the muscles of MG patients (Fig. 1b). Comparing all the data shown in Fig. 1, the expression of AMPD1 is highest in the skeletal muscles of both MG-free controls and MG patients, in contrast to the thymus (Rybakowska et al. 2015).
The activity of AMPD in skeletal muscle extracts of the two studied groups of patients is presented in Table 1. As shown in the Table, the activity of AMPD in the extracts obtained from MG patients (mean specific activity of about 2.05 µmol/min per mg of protein) was roughly 25 % higher than in the extracts from MG-free controls (mean specific activity of about 1.64 µmol/min per mg of protein). Figure 2 illustrates the results of Western blot analysis of the prevailing enzyme isoform present in skeletal muscle extracts of the two analyzed groups of patients, performed on representative samples (the same pattern was observed in all studied samples). As shown in the figure, the two groups differ in their immunological reaction with anti-AMPD1 antibodies. While a protein weighing about 90 kDa was labeled in the extracts from MG patients, another protein with a ca. 20 kDa lower molecular mass was detected in the extracts from MG-free individuals.
Discussion
The intracellular pool of ATP changes in response to metabolic conditions, and strenuous muscle exercise was shown to be associated with a decrease in the ATP/ADP ratio (Flanagan et al. 1986; Sinkeler et al. 1987). A reduction of cellular pH, taking place in exercising skeletal muscles, stimulates the activity of the purine nucleotide cycle in order to counteract the decrease in the cellular ATP/ADP ratio (AMPD is inhibited during the initial phase of exercise due to accumulation of orthophosphate, but is reactivated by lowered pH) (Hellsten et al. 1999; Makarewicz and Stankiewicz 1974). Stimulation of purine nucleotide cycle activity augments energy production in exercising skeletal muscles, to a degree dependent on the metabolic type of the muscle (Meyer and Terjung 1979); the energy comes from both glycolysis and the anaplerotic reaction of the citric acid cycle (Sinkeler et al. 1987; Ścisłowski et al. 1982). In view of its role in normalization of the ATP to ADP ratio, prompt activation of AMPD seems vital for the maintenance of skeletal muscle contractility during periods of higher energy demand (Flanagan et al. 1986; Sinkeler et al. 1987).

[Fig. 1 caption: Expressions of AMPD family genes in representative skeletal muscles of MG patients (MG) and MG-free controls (C). mRNA of AMPD1 isoform (a); mRNA of AMPD2 isoform (b); mRNA of AMPD3 isoform (c). The expressions were defined as a ratio to constitutively expressed ACTIN. Expression levels were compared with the Student's t test, n = 4 per group (p < 0.05). Table 1 legend: The enzymatic activity is presented as the mean value ± SD. Statistical significance verified with the Student's t test, n = 4 per group, * p < 0.05.]
Previous experimental studies showed that binding of this enzyme to myosin during intense muscle contractions changes its kinetic and regulatory properties significantly, in a substrate concentration-dependent manner (Hisatome et al. 1998; Rundell et al. 1992). Moreover, the enzyme can change its subunit composition, forming oligomers composed of products encoded by different AMPD genes (Fortuin et al. 1996; Mahnke-Zizelman et al. 1998). All these regulatory mechanisms may be relevant for the control of the enzymatic activity during muscle contractions in vivo. The activity of AMPD is generally associated with high energy demand and sustained ATP turnover (Hancock et al. 2006).
Mammalian AMPD undergoes limited proteolysis in vitro; the degradation is limited to the N-terminal regions of the AMPD isozymes. Proteolysis of the N-terminal fragments does not reduce significantly catalytic activity of the enzyme (Mahnke-Zizelman and Sabina 2001). The use of new recombinant technologies allowed to synthesize full-size AMPD proteins with intact N-terminal fragments (Sabina et al. 1984).
Similar to the other two products of the AMPD family genes, AMPD1 also undergoes the process of limited proteolysis. This is normally observed during purification of the enzyme and its further storage. The molecular mass of the AMPD1 subunit isolated freshly from autopsied human skeletal muscle (60-72 kDa) (Mahnke-Zizelman and Sabina 2001; Stankiewicz 1981) differs markedly from that predicted on the basis of cDNA sequencing (86-87 kDa) (Sabina et al. 1984). This discrepancy is most probably a result of proteolysis taking place during the purification process (Sabina et al. 1984), and the in vitro proteolysis of the enzyme is irreversible (Haas and Sabina 2003). It is possible that the removal of a fragment of about 20 kDa in the control samples represents ablation of the N-terminal region determining the tissue-specific properties of the AMPD isoform, which may contribute to the altered muscle metabolism underlying the disease symptoms observed in MG patients.
MG alters energetic metabolism of exercising skeletal muscles. Compared to the controls, patients with moderate to severe MG were characterized by significantly higher end-exercise muscle Pi/ATP ratio and significantly lower end-exercise muscle pH (Lindquist 2008;Ko et al. 2008). While the muscular weakness is mainly a consequence of impaired neuromuscular transmission, it also partially results from reduced excitation-contraction coupling (Pagala et al. 1990).
It is possible that the more stable AMPD1 isoform in MG muscles may be responsible for the weakness and fatigability of skeletal muscles. Further studies, given the small sample size that limits this study, are necessary to explain whether the changes in myocyte metabolism induced by impaired neuromuscular signal transmission may also influence the physiological function of muscular AMPD isozymes.
Mechanism and therapeutic window of a genistein nanosuspension to protect against hematopoietic-acute radiation syndrome
There are no FDA-approved drugs that can be administered prior to ionizing radiation exposure to prevent hematopoietic-acute radiation syndrome (H-ARS). A suspension of synthetic genistein nanoparticles was previously shown to be an effective radioprotectant against H-ARS when administered prior to exposure to a lethal dose of total body radiation. Here we aimed to determine the time to protection and the duration of protection when the genistein nanosuspension was administered by intramuscular injection, and we also investigated the drug's mechanism of action. A single intramuscular injection of the genistein nanosuspension was an effective radioprotectant when given prophylactically 48 h to 12 h before irradiation, with maximum effectiveness occurring when administered 24 h before. No survival advantage was observed in animals administered only a single dose of drug after irradiation. The dose reduction factor of the genistein nanosuspension was determined by comparing the survival of treated and untreated animals following different doses of total body irradiation. As genistein is a selective estrogen receptor beta agonist, we also explored whether this was a central component of its radioprotective mechanism of action. Mice that received an intramuscular injection of an estrogen receptor antagonist (ICI 182,780) prior to administration of the genistein nanosuspension had significantly lower survival following total body irradiation compared with animals only receiving the nanosuspension (P < 0.01). These data define the time to and duration of radioprotection following a single intramuscular injection of the genistein nanosuspension and identify its likely mechanism of action.

for a single IM injection of GEN was investigated. First, the timing of GEN administration before irradiation was examined. GEN was administered 24, 12, 6, 2, 1 or 0.5 h before 9.25 Gy total-body irradiation.
Separate control groups were administered the nanosuspension vehicle at each time point. The results demonstrated that a single IM injection of GEN administered at 24, 12, 6, 2, 1 or 0.5 h before irradiation resulted in 30-day survival rates of 89%, 64%, 50%, 25%, 6% and 19%, respectively.
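The dose reduction factor mentioned in the abstract is conventionally the ratio of the radiation dose lethal to 50% of animals within 30 days (LD50/30) with drug to that with vehicle. A minimal sketch follows; the LD50 values below are hypothetical, not this study's results:

```python
# Sketch of a dose-reduction-factor (DRF) calculation: the ratio of LD50/30
# with the genistein nanosuspension to LD50/30 with vehicle. The LD50 values
# below are hypothetical, not the study's results.
ld50_treated_gy = 9.8   # hypothetical LD50/30 with GEN, in Gy
ld50_vehicle_gy = 8.6   # hypothetical LD50/30 with vehicle, in Gy

drf = ld50_treated_gy / ld50_vehicle_gy
print(f"DRF = {drf:.2f}")   # DRF > 1 indicates radioprotection
```

A DRF above 1 means the drug shifts the survival curve so that a larger radiation dose is required to produce the same lethality.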
INTRODUCTION
Accidental exposure to high-dose radiation can lead to a variety of potentially lethal syndromes. Hematopoietic-acute radiation syndrome (H-ARS) is a primary medical concern for individuals exposed to total-body irradiation. In humans, death occurs within 30-60 days following >2 Gy total-body exposure due to hematopoietic insufficiency [1][2][3]. The development of countermeasures to prevent radiation toxicity, especially H-ARS, has been a focus of research for over 50 years. 'Radioprotectants' are defined as agents that provide a benefit to irradiated subjects when given prior to radiation exposure, whereas 'radiation mitigators' are agents that are administered after exposure. In both cases, the agents are given prior to the development of overt evidence of injury [4]. Both protectants and mitigators can be utilized for personnel at risk of accidental radiation exposure (e.g. military personnel responding to a nuclear attack, first responders to radiation incidents, or astronauts exposed to space radiation during periods of high solar activity), but also have applications in clinical oncology to minimize the toxicities of radiotherapy without interfering with the anti-cancer effects of radiation.
Although three cellular growth factor drugs (filgrastim, pegfilgrastim and sargramostim) have been repurposed as radiation mitigators to accelerate the recovery of radiation-induced bone marrow failure (such as is experienced in H-ARS), there are no FDA-approved radioprotectant drugs to protect an individual from the onset of H-ARS. Two intravenously administered radioprotectors, palifermin and amifostine, have however been approved by the FDA to decrease the incidence and duration of severe oral mucositis in conjunction with clinical radiotherapy for head-and-neck cancer [5,6]. Unfortunately, these drugs have significant adverse effects, and neither has been approved by the FDA for prevention of H-ARS. Palifermin administration can induce dermatitis at the radiation site, anemia, hypokalemia, dysgeusia and vomiting [5]. Amifostine has severe dose-limiting side effects, including nausea, vomiting and pronounced hypotension, with performance-degrading toxicity in animal models [7][8][9][10]. These side effects, in conjunction with an intravenous route of administration, would make these drugs unacceptable for use by healthy military or civilian populations [11,12]. Therefore, there is an urgent need to develop non-toxic radiation protective agents that can be easily administered prior to potential radiation exposure.
It was previously demonstrated that genistein (5,7-dihydroxy-3-(4-hydroxyphenyl)chromen-4-one), a naturally occurring isoflavone found at low levels in soybeans, can function as a radioprotectant in a murine model of H-ARS [13,14]. Genistein was an effective radioprotectant for H-ARS when administered as either a single subcutaneous or intramuscular (IM) injection 24 h prior to radiation exposure in mice [13,14]. Initial reports utilized genistein in its native form, which has low bioavailability. Recently, a pharmaceutically acceptable nanosuspension of genistein was created by utilizing a wet-nanomilling process, improving its water solubility by reducing the average particle size by two orders of magnitude, and increasing its bioavailability [15][16][17][18]. This clinically relevant formulation of nanoparticle genistein could also be administered by a single IM injection 24 h prior to radiation exposure to increase survival from H-ARS in mice [15]. With both genistein preparations, increased survival correlated with protection of hematopoietic stem cells within the bone marrow, allowing robust hematopoietic recovery [14,15]. Moreover, the genistein nanosuspension (GEN) was also demonstrated to inhibit the production of inflammatory factors in irradiated mouse bone marrow and spleen cells, which may contribute to the survival of hematopoietic progenitors [15].
Genistein's mechanism of action as a radioprotectant is not completely understood, although it has been extensively studied due to its beneficial health effects [19]. Studies of genistein have revealed a variety of biological activities, including anti-oxidant and free radical scavenging effects [20][21][22], anti-inflammatory effects [15,23,24], and cell cycle effects [14], all of which may be relevant for radiation countermeasure effects [25]. Importantly, genistein also has a chemical (benzopyran) structure similar to estrogen and is classified as a phytoestrogen. Estrogens have been demonstrated to reduce mortality from total-body irradiation and to improve hematopoiesis in mice [26][27][28][29][30][31]. There are two estrogen-binding ligand-dependent transcription factors: estrogen receptor alpha (ERα) and estrogen receptor beta (ERβ) [32]. The ERs are intracellular receptors which, upon ligand binding, dimerize, translocate to the cell nucleus, and bind to estrogen response elements (EREs) in DNA sequences to facilitate site-specific gene transcription. The two transcriptional ERs (α and β) have antagonistic functions in normal/healthy cell types where they are both expressed. ERα activation primarily induces cellular growth (ERα is overactive in 50-80% of breast cancers). ERβ is a negative regulator of ERα and represses cell proliferation, a process believed to occur through the transcription of opposing genes rather than a direct inhibition [33]. ERβ's anti-cellular growth attributes conceptually categorize it as a tumor suppressor, and indeed it is commonly mutated in advanced human cancers [32]. Genistein is a selective agonist of ERβ, with approximately a 20-fold greater affinity compared with ERα, at a physiological concentration for 50% inhibition (IC50) of only 8.4 nM [34].
It is currently not known whether the radioprotective effects of estrogen occur through ERα or ERβ, or whether a significant portion of the radioprotective effects of genistein require activation of ERβ.
In the present study, we investigated various parameters related to radioprotection by a genistein nanosuspension for H-ARS in mice, including: (i) the optimal time of single-dose administration, (ii) the duration of single-dose radioprotection, (iii) the dose reduction factor (DRF) of the genistein nanosuspension formulation for the prevention of lethality due to H-ARS, and (iv) the molecular mechanism of action of genistein protection. Our data show that the genistein nanosuspension has a wide duration for its protective effects, from 48 h through 12 h prior to radiation exposure. In these studies, the genistein nanosuspension was not effective as a radiation mitigator for H-ARS when given as a single administration after radiation exposure. We confirmed that the genistein nanosuspension selectively activates ERβ in cells, and we provide evidence that a portion of the H-ARS radioprotective effects by genistein requires ER activation.
Animals
Adult male (12-14 weeks of age) CD2F1 mice were purchased from Harlan Laboratories (Indianapolis, IN, USA). Mice were acclimated upon arrival, and representative animals were screened for evidence of disease. Mice were housed in a facility accredited by the Association for the Assessment and Accreditation of Laboratory Animal Care International. Animal rooms were maintained at 21°C ± 2°C with 50% ± 10% humidity on a 12 h light/dark cycle. Commercial rodent ration (Harlan Teklad Rodent Diet 8604; Envigo, Dublin, VA, USA) was available freely, as was acidified (pH = 2.5) water to control opportunistic bacterial infections [35]. All animal handling procedures were performed in compliance with guidelines from the National Research Council [36] and were approved by the Institutional Animal Care and Use Committee (IACUC).
Irradiation, drug administration, and survival studies
Mice received total-body irradiation in a bilateral gamma radiation field in a ⁶⁰Co facility within Lucite jigs [14]. The midline tissue dose to the mice was 9.00-9.25 Gy at a dose rate of 0.6 Gy/min for all survival studies. The alanine/electron spin resonance (ESR) dosimetry system [37] was used to measure dose rates (to water) in the cores of acrylic mouse phantoms. After irradiation, mice were returned to their home cages. The day of irradiation was considered 'Day 0'. For 30-day prophylactic survival studies, separate control groups were administered the nanosuspension vehicle at each time point. For these studies, there were 16-36 mice/group. For determination of the DRF, there were 20-40 mice/radiation dose.
Genistein was administered as a wet-milled nanosuspension (average particle size of 200 nm) containing 50 nM phosphate-buffered saline with 5% PVP-K17 and 0.2% Polysorbate 80. This formulation is stable at room temperature and was provided by Humanetics Corporation, Edina, MN, USA. All experiments used an IM injection (in the quadriceps muscle) of GEN at a dose of 150 mg/kg in a total volume of 50 μl using a 25-gauge needle attached to a 250 μl syringe (Hamilton, Reno, NV, USA) [15]. The ER antagonist survival studies were based on a previously developed methodology [38]. Briefly, mice received an IM injection (in the quadriceps muscle) of the ER antagonist ICI 182,780 (10 mg/kg in 50 μl total volume) (Tocris, Ballwin, MO) or its vehicle (15% ethanol/85% corn oil), once a day for four consecutive days. On the fourth day, mice received a second injection, 2 min after the first injection, with the IM vehicle or the GEN (150 mg/kg). After 24 h, mice received a single total-body irradiation dose of 9.00 Gy ⁶⁰Co. For survival studies, irradiated animals were monitored two to four times daily for symptoms of moribundity, as described in the Armed Forces Radiobiology Research Institute (AFRRI) Institutional Animal Care and Use Committee (IACUC) policy. When animals displayed designated criteria, humane euthanasia was performed using 100% CO₂ inhalation followed by cervical dislocation, in accordance with the American Veterinary Medical Association (AVMA) Guidelines.
Cell-based estrogen-receptor activation assay
Samples were submitted to Indigo Biosciences, Inc. (State College, PA) to assay the ER agonist activation of either unformulated, synthetic genistein (Bonistein®, DSM Nutritional Products, Parsippany, NJ) or the GEN, using a proprietary assay system to quantify agonist activity in cell culture. Chinese Hamster Ovary (CHO) cells (reporter cells) were engineered to stably express the human ERα or human ERβ full-length isoforms, which also contain an engineered receptor-specific genetic response element (GRE) cis-linked to a firefly luciferase gene. ERα- or ERβ-agonist activation was quantified by luminescence. To perform the assay, a suspension of reporter cells was dispensed into wells of a white 96-well assay plate (~1 × 10⁵ cells/well). In parallel, agent-containing test media was diluted to twice the target assay concentration in compound screening medium (CSM). CSM and suspended cells were combined to assay various concentrations of genistein in the nonformulated form or in the nanosuspension. Assay plates were incubated for 24 h in a cell culture incubator at 37°C, 5% CO₂ and 85% humidity. Following the 24 h incubation period, treatment media was discarded and luciferase detection reagent was added. Values of relative luminescence units (RLUs) from each assay well were used to determine receptor activity.
Statistical analysis
A two-tailed Fisher's exact test was used for analysis of survival data. P < 0.05 was considered statistically significant. The half-lethal dose at 30 days (LD50/30) used to calculate the dose reduction factor (DRF) was determined by probit analysis [39].
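The survival comparisons described above can be sketched with a small implementation of the two-tailed Fisher's exact test; the survival counts below are hypothetical, chosen only to illustrate the calculation (real analyses would use a statistics package):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-tailed Fisher's exact test for a 2x2 table [[a, b], [c, d]].
    Sums the probabilities of all tables with the same margins whose
    hypergeometric probability does not exceed that of the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def prob(x):  # P(first cell = x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # small tolerance guards against floating-point ties
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-12))

# Hypothetical counts (alive, dead): GEN group 16/2 vs vehicle group 2/16
p = fisher_exact_two_sided(16, 2, 2, 16)
print(f"two-tailed p = {p:.2e}")
```

With such a lopsided table the p-value falls far below the 0.05 threshold used in the paper, whereas a balanced table would not.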
Genistein nanosuspension administration prior to radiation exposure
GEN was previously shown to protect mice from a lethal dose of radiation when administered by IM injection 24 h before irradiation [15]. The radioprotective time-course for a single IM injection of GEN was investigated. First, the timing of GEN administration before irradiation was examined. GEN was administered 24, 12, 6, 2, 1 or 0.5 h before 9.25 Gy total-body irradiation. Separate control groups were administered the nanosuspension vehicle at each time point. The results demonstrated that a single IM injection of GEN administered at 24, 12, 6, 2, 1 or 0.5 h before irradiation resulted in 30-day survival rates of 89%, 64%, 50%, 25%, 6% and 19%, respectively (Fig. 1). The survival rates for the vehicle-treated groups at the corresponding time points were 11%, 28%, 25%, 6%, 19% and 25%, respectively. Only the groups that received GEN 24 h or 12 h before irradiation had significantly (P < 0.01) increased survival rates compared with their respective vehicle control group. Next, the time of GEN administration was extended so that mice received a single administration (150 mg/kg) either 5, 4, 3, 2 or 1 day before 9.25 Gy irradiation. The 30-day survival rates were 44%, 6%, 6%, 81% and 100%, respectively. The 30-day survival rates for the corresponding vehicle control groups were 38%, 19%, 12%, 6% and 31%, respectively. In this experiment, only the groups administered genistein either 2 days (48 h) or 1 day (24 h) before irradiation exhibited significantly (P < 0.01) higher levels of survival when compared with their vehicle control group (Fig. 2). Together, these data indicate that GEN has a wide window for time of administration for effective radioprotection, from 48 h through 12 h before radiation exposure, with the optimal time point being 24 h prior to radiation exposure.
Post-irradiation exposure administration of the genistein nanosuspension
We investigated the efficacy of GEN as a radiation mitigator, i.e. a drug capable of mitigating radiation injury if administered after exposure to radiation. Accordingly, GEN (150 mg/kg) was administered as a single IM injection 0.5, 1, 2, 4, 6 or 24 h after 9.25 Gy irradiation (Fig. 3). In this experiment, vehicle or 150 mg/kg of GEN was also administered 24 h before irradiation to serve as negative and positive control groups, respectively. Mice administered GEN 0.5, 1, 2, 4, 6 and 24 h after irradiation had 30-day survival rates of 45%, 40%, 45%, 50%, 50% and 44%, respectively. These rates of survival for post-irradiation administration of genistein can be compared with the survival rate of 90% (P < 0.01) for the positive control group, which received GEN 24 h prior to irradiation. These findings indicate that a single IM injection of GEN at any of these time points after irradiation did not induce a significant increase in survival (Fig. 3). This suggests that GEN is not an effective radiomitigator for H-ARS when given as a single IM injection at the radiation dose used in this animal model.
Dose reduction factor for IM-administered genistein nanosuspension
Based on the optimal GEN dose (150 mg/kg) and the optimal time for IM administration (24 h before irradiation), the DRF for GEN given by a single IM administration was determined using probit analysis (Fig. 4); the DRF was 1.16.
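As a rough illustration of how an LD50/30 and a DRF emerge from probit analysis, the sketch below regresses the probit-transformed mortality on radiation dose and takes the ratio of the two LD50 estimates. All mortality counts here are hypothetical; real probit analyses use maximum-likelihood fitting and report confidence limits.

```python
from statistics import NormalDist

def ld50_probit(doses, n_dead, n_total):
    """Estimate the LD50 by linear regression of probit(mortality) on dose.
    A minimal sketch of probit analysis, not Finney's full ML method."""
    nd = NormalDist()
    xs, ys = [], []
    for dose, dead, total in zip(doses, n_dead, n_total):
        p = min(max(dead / total, 0.01), 0.99)  # clip 0%/100% for the inverse CDF
        xs.append(dose)
        ys.append(nd.inv_cdf(p))                # probit transform
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return -intercept / slope                   # dose at probit = 0 (50% mortality)

# Hypothetical 30-day mortality counts (doses in Gy; illustrative only)
doses = [8.0, 8.5, 9.0, 9.5, 10.0]
vehicle_dead = [4, 10, 16, 19, 20]
gen_dead = [1, 4, 10, 16, 19]
n_per_group = [20, 20, 20, 20, 20]

ld50_vehicle = ld50_probit(doses, vehicle_dead, n_per_group)
ld50_gen = ld50_probit(doses, gen_dead, n_per_group)
drf = ld50_gen / ld50_vehicle
print(f"LD50 vehicle = {ld50_vehicle:.2f} Gy, LD50 GEN = {ld50_gen:.2f} Gy, DRF = {drf:.2f}")
```

The DRF is simply the ratio LD50(drug)/LD50(vehicle), so a value above 1 means the drug shifts the lethality curve toward higher radiation doses.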
Effects of the estrogen receptor antagonist on genisteininduced radiation protection
We wished to evaluate the contribution of ER activation in vivo to the radioprotective effects of genistein. To determine whether genistein provided radioprotection through an ER-dependent mechanism, mice were treated with the ER antagonist ICI 182,780 (ICI) (also known as fulvestrant), which binds ERα and ERβ indiscriminately with a high affinity, but does not have detectable estrogenic activity [40]. Mice were treated with GEN or vehicle with or without co-treatment with ICI (Fig. 5). The survival rate for the combination of the ICI and GEN vehicles (V-V) was 25%, indicating that neither the ICI vehicle nor the GEN vehicle had any radioprotective effects. The survival rate for the ICI plus vehicle-GEN group (ICI-V) was also 25%, demonstrating that the ER antagonist, ICI 182,780, by itself was not radioprotective. The 30-day survival for the GEN positive control group (V-GEN) was 90%, significantly higher than that for either of the control groups (P < 0.01). The group that received the ER antagonist before the administration of genistein nanosuspension (ICI-GEN) had a 30-day survival of 45%. This was a significant reduction in survival compared with the group that only received GEN (P < 0.01). These data indicate that the radioprotection mediated by GEN was significantly reduced by the ER antagonist, ICI-182,780. This suggests that ER activation is necessary for radioprotection by genistein.
Genistein is a selective agonist of ERβ
Estrogen is recognized as having effects as a radiation countermeasure [26][27][28][29][30][31], and based on the previous experiment we hypothesized that some portion of the radioprotective effects of genistein may be due to its activity as an ER agonist. The molecular structure of genistein is similar to that of estrogenic steroids, which allows for its agonist activity, particularly its selective binding to ERβ [41]. Biochemical studies previously demonstrated that genistein exhibits a 20-fold selective binding to ERβ over ERα [34]. We compared the ability of genistein to activate ERβ over ERα, which would reflect the capacity of genistein to biologically activate the receptors, using a CHO cell reporter system that utilized luciferase. We also compared the activity generated by the GEN formulation with that generated by native genistein solubilized in dimethyl sulfoxide (DMSO). The results demonstrated that cells treated with either DMSO-solubilized native genistein or GEN only activated ERα at high concentrations, with EC50 values for ERα of ~2000 nM (Fig. 6). The reference agonist, 17β-estradiol, had an EC50 of 0.3 nM for ERα. Activation of ERβ by either genistein formulation appeared to be biphasic, which we interpreted as effectively two dose-response curves, each with an EC50 value. EC50-LOW corresponds to the inflection point in the curve at lower concentrations, and EC50-HIGH corresponds to the inflection point at higher concentrations. The EC50-LOW for both the genistein formulations was 0.9 nM, and the EC50-HIGH was 3000 nM. Activation of ERβ by 17β-estradiol had standard kinetics, with an EC50 of 0.012 nM. Notably, the relative selectivity of both genistein formulations for activation of ERβ over ERα was ~2000-fold, confirming that both standard genistein and GEN are selective agonists of ERβ.
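One way to picture the biphasic ERβ response is as a weighted sum of two one-site Hill components, each centered on one of the reported EC50 values. The 50/50 weighting below is purely an assumption for illustration; only the two EC50 values (0.9 nM and 3000 nM) come from the results above.

```python
def hill(conc, ec50, hill_n=1.0):
    """Fractional receptor activation for a one-site Hill model."""
    return conc**hill_n / (conc**hill_n + ec50**hill_n)

def biphasic(conc, ec50_low, ec50_high, frac_low=0.5):
    """Two-component dose-response curve: a weighted sum of two Hill terms,
    one possible interpretation of the biphasic ERβ activation described
    above. The 50/50 split between components is an illustrative assumption."""
    return frac_low * hill(conc, ec50_low) + (1 - frac_low) * hill(conc, ec50_high)

# Reported values for genistein at ERβ: EC50-LOW ≈ 0.9 nM, EC50-HIGH ≈ 3000 nM
for c in (0.1, 0.9, 10, 3000, 1e5):  # concentrations in nM
    print(f"{c:>8} nM -> fractional activation {biphasic(c, 0.9, 3000.0):.3f}")
```

The curve rises to an intermediate plateau around the low-nM component, then climbs again near the μM component, reproducing the two inflection points described in the assay data.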
DISCUSSION
Research for the discovery and development of radioprotectants has been ongoing for decades, but no drugs are yet approved to prevent radiation syndromes such as H-ARS. Our current work found that a nanosuspension of genistein (150 mg/kg) can be administered in a single IM injection from 48 h to 12 h prior to radiation exposure for the prevention of H-ARS. At the dosage used in these studies, no protection was observed when GEN was given in a single IM injection 30 min to 24 h after irradiation, indicating that in this model genistein is not effective as a post-exposure mitigator for H-ARS. When genistein was administered as a single IM injection at 150 mg/kg 24 h before irradiation, we determined the DRF of GEN to be 1.16. DRF is an important parameter for comparing various radioprotective agents because it is an unbiased predictor of efficacy at various doses of radiation exposure. While other compounds have been reported to have higher DRFs for radioprotection, these agents typically had significant toxic side effects [42]. Several laboratories have also investigated oral administration of genistein. Non-nanosuspension genistein was effective orally when given to mice in multiple daily gavages before irradiation [43][44][45]. However, a single oral dose (400 mg/kg) of non-nanosuspension genistein given 24 h before irradiation was not protective against H-ARS [44,45]. Taken together, our findings indicate that genistein is an effective radioprotector when administered by injection or when given in repeated oral doses prior to radiation exposure. Further studies are required to determine the optimal dosing regimen for GEN when given orally prior to radiation exposure.
While we have characterized and validated the prophylactic radioprotective efficacy of GEN, there has been consistent interest in exploring the potential utility of administering a genistein therapy after radiation exposure. Recent work has described how the GEN formulation (BIO 300) was effective in mitigating the delayed effects of acute radiation exposure (DEARE) to the lung (DEARE-lung) in mice when administered daily via oral gavage for up to 6 weeks, starting 24 h after whole-thorax lung irradiation [16,46]. This work was based on earlier findings by our group, which reported that genistein in a non-nanosuspension formulation protected against total-body irradiation-induced lung damage and weight loss when administered subcutaneously 24 h before irradiation [47]. Because GEN was shown to mitigate DEARE-lung, we explored whether the drug could be administered after radiation exposure to mitigate H-ARS. In the present study, no evidence of such mitigation against H-ARS was observed. We would note several differences between the findings in the lung and our findings with H-ARS in the mouse model. First is the distinct etiology between H-ARS and DEARE-lung. H-ARS occurs rapidly, with symptoms beginning 1-3 days after radiation, and lethality beginning ~10 days post-irradiation. DEARE-lung is characterized by a slow, progressive disease that manifests as potentially lethal pneumonitis ~100 days post-exposure [47,48]. Second, in the study presented here, a single dose of GEN was administered, in contrast with published lung studies, where GEN was administered daily for 6 weeks beginning 24 h post-irradiation. Taken together, these findings indicate that GEN is effective as a radiation mitigator for DEARE-lung and as a radioprotector for H-ARS. Additional dose optimization and length of administration studies are warranted to further investigate the potential of GEN as a mitigator of H-ARS.
Our findings indicate that the estrogenic properties of genistein significantly contribute to the mechanism of its radioprotection. A component of estrogenic signaling has been known to be radioprotective since the 1940s [27,31], and genistein's classification as a phytoestrogen suggested that some portion of the radioprotective effects of genistein could be due to its estrogenic activity. Genistein was shown to selectively bind ERβ, with an ~20-fold binding preference for ERβ over ERα [34]. A structural comparison of ERα and ERβ bound to genistein indicates that key amino acids of the ligand-binding domain, Leu 384 (ERα) vs Met 336 (ERβ), differentially interact with genistein, likely contributing to genistein's selectivity for ERβ [41]. Most notably, the activity of genistein-bound ERα is different from the activity of genistein-bound ERβ, based on the ability of the two genistein-bound receptors to recruit a coactivator, TIF2. Genistein-bound ERα recruited TIF2 at 0.005% compared with estradiol-bound ERα. In contrast, genistein-bound ERβ recruited TIF2 at ~60% compared with estradiol-bound ERβ [49]. This work suggested that the functional activities of the two ERs bound to genistein can vary by as much as 10 000-fold [41], even if there is only an ~20-fold difference in genistein binding to each receptor. Based on previous findings in addition to our current results with the CHO ER activity assay, we hypothesize that genistein's radioprotective effects occur via activation of ERβ. ERβ is found throughout the body of both males and females. Current studies suggest that ERβ may have biological activity other than simply as a negative regulator of classic ERα signaling (for which it was originally attributed) [50]. A potential molecular mechanism for the cellular radioprotective effects of genistein may be provided by studies that demonstrated that genistein-bound ERβ increased the expression of DNA repair genes, including RAD51, FANCA and BRCA1 [51].
Our previous studies of genistein treatment for H-ARS and radiation-induced lung injury demonstrated increased DNA repair in vivo by genistein [47]. DNA repair is central to the efficacy of a radiation countermeasure, and further studies are required to determine whether DNA repair genes are regulated by genistein in vivo via ERβ.
The in vitro data presented here confirm that genistein (either solubilized in DMSO or in our nanosuspension) had an ~2000-fold selectivity for the activation of ERβ over ERα. Interestingly, in the case of ERβ, we observed that genistein activated this receptor in a biphasic manner. We cannot unequivocally determine the cause of this biphasic activity; however, biphasic activations have been reported in the literature for estrogen receptors. For example, 17β-estradiol has been demonstrated to have concentration-dependent effects on estrogen signaling in cells (mitogenic at low concentrations and anti-cell growth at high concentrations) [52]. This report and others have noted that non-genomic signaling of estrogen receptors may play an important role in this regulatory biology, and that a significant amount of ER-dependent transcription is regulated by protein-protein interactions. Genistein has been described as a partial estrogen receptor agonist because of the structural differences in the ERs when bound by genistein, compared with their conformation when bound by 17β-estradiol [33]. Therefore, we can only speculate that the biphasic activation of ERβ that we observed was due to concentration-dependent protein-protein interactions that govern the activity of ERβ as a DNA-binding transcription factor.
In addition to radiation protection for the hematopoietic acute radiation injury described above, genistein alone or in combination with other compounds has been demonstrated to protect a variety of organs, including liver [53], kidney [54], intestine [55], lung [16,47,56] and testes [57] from ionizing radiation injury. Genistein as well as other radioprotective agents have been the subject of multiple reviews [42,[58][59][60][61]. The dependence upon ER activation by genistein for radioprotection in other organ systems is currently unknown, and requires additional research.
Genistein is a very well-studied molecule, with over 11 000 articles listed on PubMed using the singular search term 'genistein.' However, the pharmacological use of genistein has been limited by its low solubility and bioavailability. Additionally, care should be taken when considering genistein's potential pharmaceutical applications, as a number of reports of its biological activity utilized high molar concentrations in cell culture that are not relevant in vivo. Moreover, in many instances genistein was dissolved in organic solvents such as DMSO to improve biological activity. These concentrations of DMSO are not pharmacologically applicable, as the clinical delivery of drugs is not typically mediated by dissolution solely in organic solvents. Nonetheless, because of its therapeutic potential, there is significant interest in and focus on improving the bioavailability of genistein using pharmaceutically acceptable formulations. The patented and proprietary GEN described in this paper was developed by physically grinding synthetically prepared genistein into nanometer-sized genistein particles, via a wet nanomilling process. These nanoparticles are carried in a specific pharmaceutically formulated suspension, termed a nanosuspension [17,18]. Other nano-technologies have been described to improve the biological activity of genistein, such as the use of lipid-based nanovesicles and nanoemulsions that serve as carriers of genistein. These lipid nanoparticles can be prepared using several different methodologies, which vary according to the orientation of the lipid layer(s) to the insoluble drug, as well as the presence of various surfactants, all of which change the molecular characteristics of the lipid nanoparticles [62]. Lipid nanoparticle technology has been applied to genistein in multiple studies for a variety of uses [63][64][65].
In summary, an ideal radioprotective agent should have several key characteristics: low toxicity, a practical mode of administration such as IM or oral, stability at ambient temperature, an extended shelf life, and a wide window for time of administration for efficacy. The results of the present study support the advanced development of GEN as an effective medical radiation countermeasure.
"year": 2019,
"sha1": "fe874e49d965a03c8fa6e18cc3993c3629fc3995",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/jrr/article-pdf/60/3/308/28999964/rrz014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "acfcf3f7758ae21abae3ff175744b328c250525b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Study on Pre-Exposure Prophylaxis Regimens among Men Who Have Sex with Men: A Prospective Cohort Study
Background: There are limited studies on the medication regimen of pre-exposure prophylaxis (PrEP) among men who have sex with men (MSM) in China. This study compared the effectiveness of and adherence to two prophylactic HIV medication regimens, which provided evidence and guidance for the application and promotion of the PrEP strategy in the MSM population in China. Methods: We conducted an open, non-randomized, multicenter, parallel, controlled clinical intervention study in western China. Subjects were recruited by convenience sampling at research centers in Chongqing, Guangxi, Xinjiang and Sichuan, China from April 2013 to March 2015, and they were categorized into the daily PrEP, event-driven and blank control groups. Tenofovir disoproxil fumarate (TDF; 300 mg/dose) was administered to subjects in the daily PrEP and event-driven groups, and all subjects were followed up every 12 weeks for 96 weeks. Demographic, behavioral, psychological characteristics and AIDS-related attitudes were assessed using self-completed questionnaires. TDF serum concentrations in subjects in Chongqing and Sichuan were quantified after systematic sampling. Results: Of the 2422 enrolled MSM, 856 were eligible for statistical analysis (daily PrEP group: 385; event-driven group: 471); 30 and 32 subjects in the daily PrEP and event-driven groups, respectively, became HIV-positive. The incidence of HIV infection was 6.60 cases/100 person-years in the daily PrEP group and 5.57 cases/100 person-years in the event-driven group (relative risk (RR) 0.844, 95% confidence interval (CI) 0.492–1.449); HIV incidence did not differ significantly when stratified by medication adherence or site.
When the medication adherence rate was ≥80%, the median TDF serum concentrations were 0.458 mg/L and 0.429 mg/L in the daily PrEP and event-driven groups, respectively (not significant; p > 0.05). Subjects were more likely to have high adherence if they were in the event-driven PrEP group (OR = 2.152, 95% CI: 1.566–2.957), had fewer male sexual partners in the previous two weeks (OR = 0.685, 95% CI: 0.563–0.834), were older (OR = 1.022 per year of age, 95% CI: 1.002–1.043), considered that the medication kept them safe, and were less worried about others knowing that they took medicine. Conclusions: The efficacies of daily TDF and event-driven TDF use were not significantly different in preventing new infections among HIV-negative MSM. Event-driven TDF use is economical and effective and is worth popularizing. Our results provide evidence for the application and promotion of the PrEP strategy in the MSM population in China.
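The reported rate ratio can be approximately reproduced from the counts in the abstract. In the sketch below, person-years are back-calculated from the published incidence rates, and the 95% CI uses the standard log-normal approximation, so it will not exactly match the published interval:

```python
from math import exp, log, sqrt

# Counts from the abstract; person-years are back-calculated from the
# reported incidence rates (cases per 100 person-years), so approximate.
cases_daily, rate_daily = 30, 6.60
cases_event, rate_event = 32, 5.57
py_daily = 100 * cases_daily / rate_daily    # ≈ 454.5 person-years
py_event = 100 * cases_event / rate_event    # ≈ 574.5 person-years

# Rate ratio: event-driven incidence relative to daily PrEP incidence
rr = (cases_event / py_event) / (cases_daily / py_daily)

# 95% CI on the rate ratio via the usual log-normal approximation
se_log_rr = sqrt(1 / cases_event + 1 / cases_daily)
lo = exp(log(rr) - 1.96 * se_log_rr)
hi = exp(log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Because the interval comfortably spans 1, the two regimens cannot be distinguished in efficacy, which is the abstract's conclusion.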
Introduction
Acquired immunodeficiency syndrome (AIDS) is an important global public health problem. By the end of 2017, 36.9 million people were living with human immunodeficiency virus (HIV) infection worldwide, with 1.8 million new infections and 940,000 AIDS-related deaths each year [1]. In China, among the newly reported HIV/AIDS cases, the proportion of MSM has been increasing yearly. The proportion of HIV infections transmitted through MSM increased from 3.4% in 2007 to 28% in 2017, and currently, HIV infection is spreading extremely rapidly among MSM in China [2]. Thus, the HIV infection situation is serious in this population and needs effective intervention and control.
Behavioral interventions such as condom use have been proven effective in preventing new HIV infections; however, behavioral interventions alone have limited effectiveness and need to be combined with biomedical interventions such as PrEP. Currently, pre-exposure prophylaxis (PrEP) is an important biological prevention strategy among MSM. The PrEP regimen mainly includes the use of tenofovir disoproxil fumarate (TDF) monotherapy or a combination of TDF and emtricitabine (FTC) (also known as Truvada) [3]. The effectiveness and safety of long-term use of TDF or TDF/FTC for preventing new HIV-1 infections have both been confirmed in healthy people [4][5][6][7][8]. However, no comparative studies have been conducted on the use of TDF monotherapy in the MSM population; prior studies have focused on comparing daily TDF with TDF/FTC regimens, or event-driven TDF/FTC with daily TDF/FTC regimens, among MSM. Since 2018, the China Center for Disease Control and Prevention (CDC) has attempted to popularize TDF/FTC in seven provinces. PrEP has not been popularized in China, and a low level of PrEP awareness has been reported in the Chinese MSM population [9]. Hence, it is valuable to compare the effectiveness of, and subjects' adherence to, the daily PrEP and event-driven regimens in preventing new HIV infections.
This study aimed to investigate the factors influencing medication adherence, with a focus on the effect of a medication regimen upon adherence. The results from the study will inform the choice of medication regimen and strategies to promote PrEP among MSM in China.
Research Subjects
HIV-negative men who have sex with men (MSM) were recruited into the study by convenience sampling at four research centers in Chongqing, Guangxi, Xinjiang and Sichuan. Recruitment was performed using the following methods: first, through non-governmental organizations (NGOs) established voluntarily by the MSM population, which were mainly responsible for recruitment and cohort maintenance; second, through peer introduction and the dissemination of information through announcements at public places and places of entertainment frequented by the MSM population (usually parks, public bathhouses and bars); third, through voluntary counseling and testing for AIDS (VCT) clinics; fourth, through the Internet.
Inclusion criteria were as follows: (1) signed informed consent; (2) age ≥ 18 and ≤ 65 years; (3) HIV antibody-negative; (4) participated in sexual intercourse at least once every two weeks; (5) at least one homosexual partner within a month before the trial; (6) willing to use the study medication under guidance and obey follow-up arrangements; (7) willing to participate in the trial for 96 weeks.
Design
The study was an open, non-randomized, multicenter, parallel controlled clinical intervention study based on standard AIDS prevention interventions (study time: April 2013 to March 2015; registration number: ChiCTR-TRC-13003849). The subjects at each research center were assigned a random number and divided in a 1:1:1 ratio into three groups: a daily PrEP group, an event-driven group and a blank control group. Random numbers were generated with the Statistical Analysis System (SAS) 9.4 software program (SAS Institute, Cary, NC, USA), and subjects were allocated to the three groups in ascending order of random number. The daily PrEP group was given oral TDF (300 mg daily; produced and provided by Gilead Sciences, Inc., Foster City, CA, USA; specification: 300 mg per tablet; Lot: A818213). Subjects in the event-driven group took 300 mg TDF orally 48-24 h before sexual activity and 300 mg TDF 2 h after sexual activity; the dosage was no more than 300 mg within 24 h. The blank control group did not receive any drugs or placebos. All subjects underwent HIV testing and counseling on reducing the risk of HIV infection, and received free condoms and standard AIDS prevention intervention services including the management of sexually transmitted diseases. However, subjects in Chongqing and Sichuan were completely randomized, whereas those in Xinjiang and Guangxi were in practice allowed to choose voluntarily between the daily and event-driven PrEP regimens, because the trial there was conducted by the local CDC rather than the research center; some subjects therefore did not honor the grouping arrangement and entered their groups by self-selection. Subjects in the free-choice sites and random allocation sites were pooled in analyses for a larger sample size.
Study Procedures
(1) Screening: An optimal screening process was established at each research center; screening numbers were compiled, and subjects were selected according to established inclusion and exclusion criteria. (2) Cohort study initiation: Enrolled subjects were included within eight weeks; baseline clinical and laboratory tests such as HIV-1 serological tests, hepatitis B virus (HBV) serological tests, and blood biochemical tests were conducted after obtaining subjects' informed consent. (3) Follow-up during the medication period: face-to-face follow-up of subjects in the two groups was conducted every 12 weeks to collect data on high-risk behavior and medication in the most recent two weeks. Clinical and laboratory tests including an HIV test, blood biochemical examination and urine test were also conducted. Subjects' serum was obtained and stored; the sample was collected before medication was taken on the day of follow-up. (4) Evaluation of serum TDF concentration: serum TDF concentrations in men in Chongqing and Sichuan were quantitated from stored subject samples. For subjects with conversion to HIV-positive status, all serum samples obtained from enrolment to detection of HIV-positive status were processed; among those who remained HIV-negative, subjects with at least one serum sample collected during the study period were selected by systematic sampling (after excluding subjects who were HIV-positive or who refused to undergo follow-up, every other subject was selected from the database when sorted by CRF number), and all serum samples of the selected HIV-negative subjects were tested.
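The "every other subject" selection described above is ordinary systematic sampling with a fixed interval of two. A minimal Python sketch follows; the function name, field names and example records are hypothetical illustrations, not the study's actual code or data:

```python
# Illustrative sketch of the study's systematic sampling step: sort by CRF
# number, drop ineligible subjects, then keep every second record.
def systematic_every_other(subjects):
    """Return every other eligible subject after sorting by CRF number."""
    eligible = [s for s in subjects
                if not s["hiv_positive"] and not s["refused_followup"]]
    eligible.sort(key=lambda s: s["crf"])
    return eligible[::2]  # sampling interval of 2

# Hypothetical cohort records for demonstration only.
cohort = [
    {"crf": 1, "hiv_positive": False, "refused_followup": False},
    {"crf": 2, "hiv_positive": True,  "refused_followup": False},
    {"crf": 3, "hiv_positive": False, "refused_followup": False},
    {"crf": 4, "hiv_positive": False, "refused_followup": True},
    {"crf": 5, "hiv_positive": False, "refused_followup": False},
]
print([s["crf"] for s in systematic_every_other(cohort)])  # [1, 5]
```

Eligible subjects here are CRF 1, 3 and 5, so selecting every other one yields CRF 1 and 5.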
Measures
The outcome measures mainly included the effectiveness of the TDF regimens and the adherence of MSM. The effectiveness evaluation index was based on HIV incidence; the adherence difference evaluation index was based on the average serum concentration of TDF; and the outcome variable for adherence-influencing factors was the medication adherence rate, calculated as follows: medication adherence rate = 100% − (missed doses/doses that should have been taken) × 100%. Doses that the event-driven group should have taken = number of episodes of sexual intercourse × 2, and missed doses were assessed by self-report.
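The adherence-rate formula above can be sketched as follows; the function name and example numbers are illustrative, not taken from the study data:

```python
# Medication adherence rate = 100% - (missed / should-have-taken) * 100%.
def adherence_rate(missed, should_have_taken):
    """Return adherence as a percentage of doses actually taken."""
    return 100.0 - (missed / should_have_taken) * 100.0

# Event-driven example: 10 episodes of intercourse imply 10 * 2 = 20 expected
# doses (one before and one after each episode); 3 doses self-reported missed.
expected = 10 * 2
print(adherence_rate(3, expected))  # 85.0
```

With 3 of 20 expected doses missed, the adherence rate is 85.0%, which would count as high adherence under the 80% threshold used in this study.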
The influencing factors for adherence were evaluated using cross-sectional baseline data and data obtained at the first follow-up for each subject. The adherence level at the first follow-up was used as the dependent variable. A medication adherence rate <80% indicated low adherence and a rate ≥80% indicated high adherence. Behavioral characteristics included HIV counseling history, HIV testing history, frequency of finding sexual partners through the Internet, sexually transmitted disease (STD) history, drinking frequency, history of drug use and history of using commercial sexual services; psychological characteristics and attitudes toward taking medicine included side effects, risk perception and efficacy cognition. The research methods primarily included cohort follow-up and laboratory testing. Cohort follow-up was conducted using self-completed questionnaires distributed to subjects. Data on individual demographic information, medication psychology, and self-reported sexual behavior in the last half month were collected at baseline and every 12 weeks during follow-up; laboratory tests included routine tests, virological tests and serum TDF concentration monitoring. Sample collection was as follows: within one hour after 5 mL of venous blood was obtained from the subjects by vacuum blood collection, the blood samples were centrifuged in a test tube to separate the serum from the blood cells; 1 mL of serum from each sample was immediately transferred to each of two cryopreservation tubes and stored at −70 °C or lower. Samples with hemolysis and those containing blood cells could not be analyzed. TDF concentration was quantitated by high-performance liquid chromatography with ultraviolet detection (HPLC-UV) after solid-phase extraction. In addition to the above data, individual safety was assessed and documented, including information on the timing, severity, duration, measures taken and outcome of adverse events.
Quality Control and Ethics
The entire trial process conformed to the guidelines of the Declaration of Helsinki and Chinese clinical trial research norms and regulations, and was approved by the Ethics Committee of Chongqing Medical University (Ethical Approval code: 2012010). Subjects were informed of clinical review, and the privacy and data of the subjects were strictly protected. The inspectors and researchers involved in the study were trained to thoroughly inspect the work of the clinical research centers regularly and to document the verification results of the original data in accordance with standard operating procedures. Verification of informed consent for all subjects was mandatory.
Statistical Analysis
EpiData 3.0 software (EpiData Association, Odense, Denmark) was used for double data entry and verification, and SAS 9.4 was used for statistical analysis. Descriptive statistics were used to analyze the basic data of the daily PrEP group and the event-driven group; the log-rank test was used to compare differences in overall HIV incidence between the two groups; the relative risk (RR) was calculated by the Woolf method, with the daily PrEP group as the reference group; and the rank-sum test was used to compare the mean serum drug concentrations of the two groups. In univariate analysis, χ² tests were conducted on medication adherence against variables that included behavioral characteristics, the psychological characteristics of taking medicine and AIDS-related attitudes. A stepwise logistic regression model was used in multivariate analysis; the criterion for variable entry and removal was 0.05, and odds ratios (OR) with 95% confidence intervals (CI) were calculated. The medication adherence rate used in the HIV incidence and serum TDF concentration analyses was computed with full-observation-period data, while univariate and multivariate analyses used first follow-up visit data. Missing data were excluded from statistical analyses, and p < 0.05 indicated statistical significance.
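The Woolf method named above builds a confidence interval on the log scale. The sketch below applies a log-scale interval to a simple risk ratio computed from the raw sero-conversion counts reported later in the Results; because the paper's published estimate is based on person-time rather than raw counts, this output is illustrative and does not reproduce the published 0.844 (0.492-1.449):

```python
import math

# Log-scale ("Woolf-type") 95% CI for a risk ratio from 2x2 counts.
# a/n1 = events/subjects in the reference (daily PrEP) group;
# b/n2 = events/subjects in the comparison (event-driven) group.
def rr_woolf_ci(a, n1, b, n2, z=1.96):
    rr = (b / n2) / (a / n1)                       # point estimate
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)        # SE of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Raw counts from the Results: 30/385 (daily) vs 32/471 (event-driven).
rr, lo, hi = rr_woolf_ci(30, 385, 32, 471)
print(f"RR = {rr:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# RR = 0.872, 95% CI (0.540, 1.408)
```

The interval comfortably spans 1, consistent with the paper's finding of no significant difference between the two regimens.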
Subject Characteristics
A total of 2422 subjects were screened for eligibility, and 1884 subjects were randomized. After randomization, there were 408 subjects each in the daily PrEP and event-driven groups in the random allocation sites; in the free-choice sites, there were 167 subjects in the daily PrEP group and 253 in the event-driven group; 648 subjects in the blank control group were not included in our analysis. Among those excluded before randomization, 376 were HIV-positive, 72 were Hepatitis C positive, 60 declined to participate, 30 were age <18 or >65 years old. Among those excluded after randomization, in the random allocation sites, 60 were asexual or did not provide information of sexual behavior within 96 weeks, and 241 refused to undergo follow-up examinations; in the free-choice sites, 49 were asexual or did not provide information, and 30 refused to undergo follow-up examinations. A total of 856 valid subjects were selected after screening. Figure 1 presents the subject flowchart per the CONSORT guidelines. In addition, after excluding subjects who were HIV-positive (N = 39) and who also refused to undergo follow-up examinations (N = 241) in the random allocation sites, 268 of the 536 HIV-negative subjects were selected by systematic sampling to undergo serum TDF concentration testing. During the study, no serious adverse events or unintended effects occurred in either group.
The overall average age of the subjects was 30.44 years (median age 29 years); the Han nationality accounted for 91.4% of the study subjects; 74.4% held urban household registrations; 39.5% were educated to the university level or above; 11.2% were educated to the junior high school level or below; the majority of the subjects were employed, accounting for 79.0% of the study population; the average monthly income was below 1000 yuan (ca. 142 USD) for 14.6% of the subjects, 1000-3000 yuan for 35.9%, and 3000-5000 yuan for 36.9%. These basic demographic and behavioral characteristics did not differ statistically between the two groups (p > 0.05), and were similar between the two groups in the random allocation and free-choice sites (Table 1). Meanwhile, the number of doses that ought to have been taken was 18,606 in the daily PrEP group and 8166 in the event-driven group during the full observation period. The average number of follow-up visits was 3.865 in the daily PrEP group and 4.093 in the event-driven group. Among the baseline subject characteristics, age <30 years, Han nationality, and lack of access to free HIV consultation and testing were significantly more frequent in the 380 excluded subjects than in the 856 valid subjects.
Sero-Conversion in Subjects
Positive sero-conversion was observed in 30 of 385 subjects in the daily PrEP group and in 32 of 471 subjects in the event-driven group. The overall HIV incidence was 6.60 cases/100 person-years in the daily PrEP group and 5.565 cases/100 person-years in the event-driven group. The overall RR with 95% CI was 0.844 (0.492-1.449). On performing the log-rank test using the life-table method, the log-rank statistic was found to be 0.168 (p > 0.05). In addition, Table 2 shows the HIV incidence stratified by medication adherence and research centers; when stratified by medication adherence, 22 subjects were excluded from analysis for lacking a medication adherence rate; among those excluded, 18 were HIV-negative (4 in the event-driven and 14 in the daily PrEP groups) and 4 were HIV-positive in the daily PrEP group. HIV incidence did not differ significantly between the two groups when stratified by medication adherence and site (Table 2 and Figure 2). Meanwhile, the overall HIV incidence in the blank control group was 6.175 cases/100 person-years. The overall HIV incidence did not differ significantly between the three groups (p > 0.05); however, when medication adherence was ≥80%, HIV incidence was significantly lower in both PrEP groups than in the blank control group (p < 0.05).
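The incidence figures above are rates per 100 person-years. A minimal sketch of the calculation follows; the person-time total is back-calculated from the reported rate (30 events at 6.60/100 person-years imply roughly 454.5 person-years) and is therefore only illustrative:

```python
# Incidence rate per 100 person-years = events / person-years * 100.
def incidence_per_100py(events, person_years):
    """Return the incidence rate per 100 person-years of follow-up."""
    return events / person_years * 100

# Illustrative check: 30 sero-conversions over ~454.5 person-years of
# follow-up (back-calculated, not reported directly by the study).
print(round(incidence_per_100py(30, 454.5), 2))  # 6.6
```

This recovers the daily PrEP group's reported rate of 6.60 cases/100 person-years.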
Serum TDF Concentration
We quantified 268 serum samples from HIV-negative subjects and 39 serum samples from HIV-positive subjects in random allocation sites (151 and 156 subjects in the daily PrEP and event-driven groups, respectively), where 46 subjects were excluded in the analysis (9 lacked their medication adherence rate and 37 were asexual or did not provide information of sexual behavior), and we finally evaluated 261 serum samples for TDF concentrations (124 and 137 subjects in the daily PrEP and event-driven groups, respectively). When the medication adherence rate was <80%, the overall median (P25-P75) serum concentrations were 0.404 (0.237-0.661) mg/L and 0.472 (0.255-0.987) mg/L in the daily PrEP and event-driven groups, respectively. When the medication adherence rate was ≥80%, the overall median (P25-P75) serum concentrations were 0.458 (0.272-0.625) mg/L and 0.429 (0.278-0.648) mg/L in the daily PrEP and event-driven groups, respectively. Serum TDF concentrations did not differ significantly between the two groups on stratification by medication adherence (p > 0.05) ( Table 3).
Results of the χ² tests revealed that men who considered that the medication kept them safe, had a doctor-diagnosed STD history, were less worried about side effects or the ineffectiveness of the medicine, or were less worried about others being aware of their use of this medication were more likely to have high adherence than other men (p < 0.05) (Table 4).
Multivariate Logistic Regression Analysis
Medication adherence rates computed with first follow-up visit data were taken as the dependent variable, while the independent variables were as follows: medication regimen; demographic characteristics, including age, household registration, education, employment, marital status and monthly income; behavioral characteristics; psychological characteristics of taking medication; and sexual behavior characteristics, including sexual partners, number of episodes of sexual behavior and frequency of condom use. The variables with p < 0.15 in Table 4, as well as the medication regimen, demographic characteristics and sexual behavior characteristics, were included as independent variables in the stepwise logistic regression model. The results showed that subjects were more likely to have high adherence if they were in the event-driven PrEP group (OR = 2.152, 95% CI: 1.566-2.957), had fewer male sexual partners in the last two weeks (OR = 0.685, 95% CI: 0.563-0.834), were older (OR = 1.022 per one-year increase in age, 95% CI: 1.002-1.043), considered that medication kept them safe, and were less worried about others knowing they took medicine. Subjects who disagreed with the corresponding items were taken as reference categories. For the item "medicine keeps me safer, and keeps me away from AIDS", each level of agreement was statistically significant compared with complete disagreement: adherence was better in people who agreed with the item than in those who completely disagreed with it. For the item "I am worried that others know I am taking medication", adherence was lower in people who were relatively or very worried than in those who were not worried at all. Meanwhile, according to the sub-analysis, in the random allocation sites, subjects who were older (OR = 1.044 per one-year increase in age, 95% CI: 1.018-1.070) or had fewer male sexual partners in the last two weeks (OR = 0.660, 95% CI: 0.522-0.835) were more likely to have high adherence.
In the free-choice sites, subjects who were in the event-driven PrEP group (OR = 13.137, 95% CI: 6.664-26.614), considered that medication kept them safe, and were less worried about others knowing they took medicine were more likely to have high adherence (Table 5).
Discussion
As early as 2012, the United States (U.S.) Food and Drug Administration (FDA) approved daily TDF/FTC combined therapy for the prevention of new HIV infections [10]. Although the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) still recommend daily PrEP medication for people at high risk of AIDS, the latest guidelines released by both the French and the European Clinical Societies for AIDS recommend TDF/FTC combined therapy before and after sexual behavior [11]. However, there are few studies on the use of TDF monotherapy in HIV prevention by on-demand and daily regimens. This study is based on the PrEP medication regimen of the MSM population in Western China, and primarily evaluated the effectiveness of and adherence to medication regimens. The results showed that there was no statistical difference in TDF serum concentration between the two groups after stratification according to adherence, and HIV incidence was not statistically different between the groups during the 96-week clinical intervention. The factors influencing the adherence of this group included medication regimen, age, the number of sexual partners in the last two weeks, the individual's cognition of drug effectiveness and worry that others knew that they were taking medication.
In terms of effectiveness, the overall HIV incidence was 6.60 cases/100 person-years in the daily PrEP group, while it was 5.57 cases/100 person-years in the event-driven group. When adherence was high, the HIV incidence in the daily PrEP group dropped to 2.72 cases/100 person-years and to 1.817 cases/100 person-years in the event-driven group. There was no statistical difference in HIV incidence between the two groups when stratified by sites or adherence. Moreover, according to serum samples collected from centers in Chongqing and Sichuan, there was no statistically significant difference in the mean TDF serum concentration between the two groups.
It can thus be concluded that on-demand TDF has the same effect as daily oral TDF in preventing new HIV infections in the Chinese MSM population. Additionally, because subjects in the event-driven PrEP group take medicine based on their sexual activity, the number of tablets actually used under the event-driven regimen is potentially smaller than under the daily TDF regimen. This may reduce the economic burden on those taking the drug and improve cost-effectiveness [12]. Although the Ipergay study in France confirmed the effectiveness of on-demand PrEP [13], it also indicated that the efficacy of on-demand PrEP in preventing new HIV infections depends upon the accuracy of an individual's prediction of their future sexual behavior. When the accuracy of this prediction is low, the preventive efficacy of on-demand PrEP is also limited [14]; this reliance on predicting future sexual behavior is a practical limitation of the event-driven regimen.
In accordance with results obtained from research conducted worldwide, the level of adherence in the two groups determines the effectiveness of prevention, and subjects in high-adherence groups have a lower attack rate [15]. Therefore, strengthening the drug adherence of the MSM population and achieving precise interventions is important and worthy of attention. In this study, cross-sectional data from the first follow-up of each subject were used for analysis, with the adherence level as the dependent variable. The results showed that the adherence of the subjects in the event-driven group was better than that in the daily PrEP group (OR = 2.152). Subjects in the daily TDF group who did not have daily sexual behavior may have assumed there was no need to take the medicine on days without sexual activity, which may have increased the rate of missed doses. Subjects in the event-driven group only needed to take the medicine before and after sexual activity, and the likelihood of missing doses was thus relatively low in this group. The TDF serum concentration of the daily TDF group should theoretically have been higher than that of the event-driven group, whose medication regimen was based on sexual behavior. However, the results of this study showed no statistically significant difference in serum TDF concentration between the two groups, which indirectly indicates that the adherence of the event-driven group was higher. In the subgroup analysis of subjects in the random allocation sites, MSM participants in the event-driven group showed higher adherence, but the difference was not statistically significant, possibly because of the small sample size; furthermore, despite the lack of differences in adherence between the two regimens, we may still recommend event-driven PrEP owing to its similar effectiveness and lower cost compared with daily PrEP.
Subjects who were one year older on average had higher adherence (OR = 1.022), consistent with several existing international studies. This was especially true for antiretroviral therapy, where younger HIV-infected individuals showed lower adherence than older HIV-infected individuals. A possible reason is that older people have a more stable life and higher positive expectations of the drugs. More attention ought to be paid to younger MSM in further large-scale intervention studies [16][17][18]. A greater likelihood of high adherence was associated with a lower number of male sexual partners in the past two weeks (OR = 0.685). This may be because, when there were fewer partners, individual self-safety cognition and risk perception were high. Conversely, an increase in the number of sexual partners, indicating greater randomness in life, may have decreased adherence. Studies have shown that adherence is generally low when the number of sexual partners exceeds four, so the intervention should also be strengthened in multi-partner populations; subjects with more than four sexual partners should be a focus of intervention [19]. When individuals are more worried about others knowing that they are taking medicines, and additionally think that the medicines do not work, the likelihood of missing doses increases. This is also in line with the theory of knowledge, belief and practice. An unfavorable opinion of PrEP and fear of the stigma caused by others discovering medicine use can lead to low adherence as subjects seek to avoid social discrimination. For this reason, researchers should work towards increasing confidence in the effectiveness of these medicines.
Additionally, disseminating proper information about the effectiveness of these regimens by peer education among MSM subjects will help to reduce their psychological burden, increase self-identity and improve adherence in follow-up trials [20].
Lastly, this clinical trial was conducted in western China because the MSM population there is concentrated and had a high HIV prevalence in past years; it can therefore be considered representative of the MSM population in China.
Limitations
This study had several limitations. First, this study was not strictly a randomized trial, though it was designed as a completely randomized clinical intervention study; it could be considered a real-world study, including sites with randomization and sites where subjects were not randomized. To obtain a larger sample size, we included data from the free-choice sites in the analysis. Second, the lack of significant differences in HIV incidence rates between the two groups did not mean that the two groups had the same incidence rates; however, as 1.612 cases/100 person-years was the maximum difference in HIV incidence rates, we may conclude that the two regimens show similar effectiveness for preventing new HIV infections. Third, the number of self-reported missed doses and serum concentration test results were used to evaluate subjects' adherence to the regimen. The advantage of self-reporting is that obtaining information is easy and convenient; however, its disadvantages include susceptibility to subjective influences, such as intentional underreporting or recall bias. The advantage of serum concentration testing is that it objectively reflects drug intake in the subjects, but drug concentrations are influenced by factors such as drug half-life; meanwhile, we compared serum TDF concentrations stratified by medication adherence in the two groups, and neither showed a significant difference (p > 0.05). Thus, serum levels may not accurately reflect long-term drug intake in the subjects. In subsequent clinical trials, for better accuracy, drug concentrations can be measured in hair samples or peripheral blood cells, or electronic bottle caps can be used as adherence indicators [21].
Conclusions
Our study on the implementation of PrEP therapy in a Chinese MSM population showed that, compared to daily TDF use, an event-driven TDF-only regimen (on-demand PrEP) was associated with good effectiveness and adherence. For promoting PrEP in the future, event-driven PrEP can be an economical and effective option.
"year": 2019,
"sha1": "06f515387a60780bc04ae734ae4a5b7bd10ae07b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/16/24/4996/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "711123062d8d071836b7ee3cbdba951ef83812ab",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
From Practice to Performance: The Experience of Hotels in Adopting Strategic Human Resource Management Practices in the Upper West Region of Ghana
The study examined hotels' experiences with strategic human resource management practices and performance. The study adopted the best-fit approach and contingency approach. The study sampled 50 managers and manageresses using convenience sampling approaches
INTRODUCTION
Human resource management strategies include practices that guarantee employees' knowledge, skills, and capacities contribute to corporate results (Huselid et al., 1997). According to the theoretical literature, human resource management promotes productivity by improving employees' skills and motivation (Huselid, 1995). Employees are essential to hotels. Employees' market worth increasingly depends on intangible assets (Lawler, 2005). There are many areas in which human resource management is critical, and one of the most essential is newly hired employees, because people make a significant difference by offering high-quality service. Human resource systems can contribute to long-term competitive advantage by facilitating the development of firm-specific competencies that result in complex social relationships rooted in a firm's history and culture. Human resource management is one of the most crucial functions for any hotel workforce. Proper human resource management can make the difference between a well-run hotel and one that is not. The human resources manager has almost complete control over the overall atmosphere and presence of the hotel. This emphasises the significance of human resource management in hotels. The workers hired in a hotel have a significant impact on the quality of service and the overall mood of the hotel. This means that it is critical to hire upbeat, devoted employees for each role. It is the human resources manager's responsibility to ensure that qualified personnel are hired to work in the hotel. In many situations, hotel workers are merely there because they have nothing better to do. Employee retention is another major issue in the hotel service industry. Because so many employees do not see hotel labour as a long-term career objective, many of them only work in hotels for a brief period. Other employees may be let go due to poor work ethics or other concerns. However, there are several things a hotel's human resources management may do to reduce the desire and likelihood of staff leaving rapidly. Human resource management is critical in the hotel industry. Managers can provide substantial training and reward programs to encourage employees to stay at the hotel longer. Employees will stay considerably longer if there is a clear progression strategy for advancing to higher service levels. Another major challenge for the hotel sector is employee advancement and promotion. In this area, the significance of human resource management for hotels is demonstrated. Hotels can offer opportunities for advancement or training for staff to improve in their careers and compete for key positions in the facility.

Email: editor@ijfmr.com | IJFMR230610129 | Volume 5, Issue 6, November-December 2023
It is easy to implement services of this nature, and the expense is negligible compared to the expense and time necessary to constantly find new employees to replace the ones who leave shortly after being hired. The impact of human resource management practices on organisational performance has been an essential area of research in the past two decades, showing a positive relationship between HR practices and organisational performance (Quresh et al., 2007). Related data reveal a considerable association between HR practices and hotel financial success in Mexico and China. Chang et al. (2011) investigated hotel performance in hotel and restaurant firms through incremental and radical innovation; their findings from 196 hotels and restaurants reveal that employing and educating multi-skilled core customer-contact staff has a significant and favourable impact on both incremental and radical innovation. Researchers have utilised a variety of variables to assess organisational success. However, opinions differ regarding the causes that lead to improved performance, and the effects of human resource management practices and their relationship to performance are not widely researched, particularly among Ghanaian hotels, including those in Ghana's Upper West Region. As a result, this study investigated hotels' experiences with strategic human resource management methods and their impact on hotels in Ghana's Upper West Region.
LITERATURE REVIEW 2.1 Strategic Human Resource Management Practices
Organisations can improve performance by implementing strategic human resource management and worker administration techniques. According to SHRM experts, an organisation's representatives and their methods of conducting business have a significant impact on the organisation's success (Katou & Budhwar, 2006). In line with current conceptual frameworks of the interactive effects of SHRM practices and competitive strategies on firm performance, organisations that can effectively influence the practices of their employees through HRM frameworks will be able to improve their performance (Huselid, 1995). Strategic HRM practices typically focus on integrating human resource strategy with business strategy (Armstrong, 2007). Based on a survey of the literature, businesses that use or adopt best practices in HRM do better than those that do not.
Training and Development System
Businesses can improve the quality of their workforce through training and development. To meet the problems that come with professions and workplaces in the twenty-first century, employees must continuously update their knowledge, skills, and work practices, and businesses should invest extensively in human capital development (Huselid, 1995). Aswathappa (2008) concluded that investing in staff training has a positive impact on organisational performance.
Recruitment and Selection
According to Pfeffer (1994), a thorough, reliable, and sophisticated selection system is essential to recruitment and selection, since it helps identify a qualified candidate with performance potential. A stringent hiring procedure encourages elitism, elevates performance expectations, and conveys that employees are essential to the company (Pfeffer, 1994). A mismatch between the individual and the job can limit performance, but a well-designed selection process can ensure a better fit between the person's skills and the organisation's needs. Furthermore, according to Terpstra and Rozell (1993), selection has been associated with higher company performance.

A valid recruitment and selection procedure enables an organisation to assign qualified employees to the most appropriate positions, ensuring organisational effectiveness (Terpstra and Rozell, 1993). Recruitment and selection refer to the processes of attracting and choosing people for employment in the sector. Those in charge of hiring new employees should be educated in the recruitment and selection processes used by the staffing department of a larger organisation (Terpstra and Rozell, 1993).
Performance Appraisal System
Organisations can utilise appraisal tools to track the development of desired employee attitudes and behaviours (Sujová et al., 2014). This appraisal-based data can inform hiring and training methods to find and develop individuals who exhibit the required behaviours and attitudes. However, talented personnel's performance will be limited unless they are motivated to do their jobs (Sujová et al., 2014). Performance evaluation is the analysis of an employee's current and historical performance against their performance standards. Its success depends on how well it is implemented (Sujová et al., 2014), as well as on how enthusiastic and well-versed in the performance review system the employees are. It is essential because it helps managers make sound administrative choices regarding employee payoffs, incentive pay increases, fringe benefits, and promotions (Sujová et al., 2014). According to Sujová et al. (2014), a performance evaluation is a systematic and routine process that assesses an employee's job performance and productivity against established organisational goals. A large body of evidence shows that base pay has a considerable impact on business performance (Olagbemiro, 2021). HR divisions and organisations invest a great deal of time and money in putting together benefits packages that can be used to hire new employees and keep existing ones (Olagbemiro, 2021). The research of Maina (2011) revealed that the vast majority of businesses rated as providing the best services have policies for adequate compensation, career advancement, a flexible work environment, and employee recognition. If effectively adopted in any competitive system, good employee relations, delegation, consultation, and autonomy in decision-making are some of the variables that enhance organisational performance. It should be highlighted, nonetheless, that the study was restricted to strategic HRM practices (Maina, 2011).
Employee Participation
The foundation of HRM is the notion of treating employees as the company's most valuable asset (Verma, 2000).Therefore, it is evident that encouraging and supporting more employee influence and participation is essential to good HR practice in businesses.Studies have shown that employee involvement is positively related to an employee's performance, happiness, and productivity (Verma, 2000).
High Performance
High performance refers to the development of several interrelated HRM practices that, when combined, improve organisational performance (Espeland and Stevens, 1998). Espeland and Stevens (1998) also report that high-performance work involves the advancement of several interconnected procedures that, when combined, affect the firm's performance through its employees in areas such as efficiency, quality, levels of customer service, development, and benefits, as well as the delivery of increased shareholder value. The study of Muhammed and Abdullah (2016) likewise indicates that supporting the economy's development and advancement needs will be more practicable when the human asset is purposefully built and equipped with the fundamental skills that help the organisation accomplish its genuine potential.
Teamwork
Effective teamwork in the workplace is crucial for various reasons, but one of the most important is that it creates progress (Shaw et al., 1998). When a group works well together, a positive outcome can be expected from its efforts: when working as a team, a comprehensive set of people contribute unique ideas and solutions to problems (Shaw et al., 1998). A group that works well together is also willing to encourage one another as members complete their tasks and achieve their goals. According to Shaw (1981), a group is a collection of people who work together: people who have a high level of trust in one another and who collaborate to achieve a goal or complete a task. It could comprise similar individuals operating in parallel, with the group benefits usually centred on reducing costs by sharing data and assets (Garrick & Clegg, 2000). In this vein, Kleiman (1997) identified collaboration as one of the HRM practices that improve a firm's competitive advantage.
Promotion
The model for rating employees through systematic performance appraisal aids in identifying the best-qualified applicant by considering both quantitative and qualitative aspects of the incumbent's performance (Kalyani & Chong, 2018). Employees are promoted along certain professional tracks (Kalyani & Chong, 2018), because today's employees design and manage their career pathways with one or more companies. Merit-based promotions are necessary to reward employees' performance and productivity, and HR managers must be more involved in order to eliminate subjectivity in the promotion review process (Kalyani & Chong, 2018). In contrast to developing countries such as Ghana, the developed world places a high value on SHRM in hotels. According to a review of the literature, only a few internationally published studies provide information on SHRM practices in developing countries (Ananthram & Nankervis, 2013). This study focuses on the HRM processes of recruitment, selection, training, and development, as well as remuneration, job design, and appraisal. These actions require firms to acquire and develop critical human resource competencies to achieve corporate objectives (Olli, 2018), and in the literature on hotel HRM they are often considered crucial.
Best-Practice Approach
To gain a competitive edge, an organisation must create a human resource management system that achieves both horizontal and vertical integration. Management of human resources is therefore the hotel industry's most essential and delicate concern (Kusluvan et al., 2010). In contrast to the 'best practice' approach, Meshoulam (1988) argues that human resource (HR) strategy is more effective when it is integrated with the unique organisational and environmental context. According to Boxall and Purcell (2000), the universalistic perspective places a strong emphasis on "best practices", with the implication that organisations will prosper if they identify and use "best practices" in personnel management. In other words, regardless of the firm, strategy, or environment, some human resource practices are always superior to others (Rose & Kumar, 2006), and all organisations should implement them (Delery & Doty, 1996). From this angle, a company must imitate and put these universal best practices into place to have efficient human resource processes (Miles & Snow, 1984). Numerous empirical findings support the view that human capital is the most valuable organisational resource and the essential factor in obtaining exceptional performance. As a result, human capital provides organisations with a key source of long-term competitive advantage (Huselid, 1995).
According to Lee (2021), HR procedures must achieve 'external fit' or 'vertical fit'. Resilient organisations are aware of their external environment and plan their HR requirements in a way that takes into account the HR implications of a changing external environment, as well as the ability to modify their strategy or solve problems that may arise as a result of environmental changes (Bach, 2001). In SHRM, internal fit requires a consistent approach to HR policy that is not overly reliant on a single element, such as training, but instead blends HR rules into a unified set of policies and processes (Bach, 2001). Start-up businesses tend to prefer more informal HRM styles; elaborating further, Boxall and Purcell (2000) noted that as organisations grow and hire more personnel, more formalised HRM styles become more common. They also support "internal fit", which they define as the requirement for individual HR policies to "fit with and support one another" or, as they prefer to call it, "horizontal fit" (Boxall and Purcell, 2000).
Contingency Approach
The most essential "best-fit" model, however, is one in which the organisation's competitive strategy determines external fit rather than its stage of development (Altarawneh, 2016). The basic parameters for strategic HRM in this model include aligning HR strategy, plans, and policies with the demands of the enterprise (Altarawneh & Aldehayyat, 2011). Schuler and Jackson (1987) stated that human resource practices and procedures should be developed to support Porter's generic strategies. Boxall and Purcell (2000) added that HR procedures complement the organisation's pursuit of cost leadership, focus, and differentiation strategies to help improve organisational performance.
Study Area
According to the Ghana Statistical Service's 2021 National Population and Housing Census, the Upper West Region has a total population of 901,502 persons (GSS, 2021 Census). The Upper West Region of Ghana is located in the northwest of the country, bordering the Upper East Region to the east, the Northern Region to the south, Côte d'Ivoire to the west, and Burkina Faso to the north. Wa is the Upper West's principal town and regional centre. The region encompasses 18,476 square kilometres, approximately 12.7% of Ghana's total land area, and lies in the Guinea Savannah belt. Drought-resistant trees include shea, baobab, dawadawa, and neem; these trees provide wood for building houses and fuel for domestic use. The Upper West Region's climate follows the trend of the five northern regions. There is only one rainy season, from April to September, with annual precipitation of about 115 cm. A lengthy dry season follows, beginning cold and overcast with the harmattan in early November and lasting until early March, when it is ended by the advent of the early rains in April. The average monthly temperature ranges from 21 to 32 degrees Celsius. Temperatures peak at 40 degrees Celsius in March, just before the rainy season begins, and fall to 20 degrees Celsius in December, during the harmattan, which is caused by the trade winds from the north. The Wa-Lawra plains, located west of the city of Wa and near Lawra, have a relatively flat surface. The land generally lies between 275 and 300 metres above sea level, except for the area east of Wa, where it climbs above 300 metres; travelling east, the ground drops to about 150 metres above sea level. The soil types in the area are diverse. They include groundwater laterites, tropical brown earths, terrace soils found along river and stream sides, and Savannah ochrosols. Many grains, legumes, tubers, and cotton can thrive in these soils; tobacco is one crop commonly produced on terrace soils. Chieftaincy is a prestigious institution that plays a significant role in community mobilisation. In Sissala, the chiefs are called Koro (e.g., Tumu Koro), while in the other districts they are called Na (e.g., Wa Na). There are 21 traditional paramountcies, including two in Jirapa-Lambussie, three in Lawra, seven in Nadowli, five in Sissala, and four in Wa. The Mole-Dagbon and Grusi are two broad generic categories that encompass the majority of the ethnic groups in the area. Dagaare, Sissali, Wale, and Lobi are the primary languages spoken in the area. Except for the Lobi, who follow a matrilineal system of inheritance like the Akan in southern Ghana, inheritance is patrilineal. The extended family system shares resources in polygamous marriages, and there is often male dominance and a low status for women in the area. The three main religions are African traditional religion, Christianity, and Islam; rural areas tend to be more dominated by traditional life and beliefs than urban areas. The Damba festival takes place in Wa, the Dagaabas celebrate Dembenti, Kobine is held in Lawra, and Kakube is held in Nandom. The Wa Na's Palace and the Dondoli Sudanic-style (Larabanga) Mosque, the Jirapa Na's Palace, Nandom's Gothic church made entirely of stone, and Wechiau's hippo sanctuary are just a few of the region's tourist attractions. The Gwollu Slave Defence Wall, slave site caves, and George Ekem Ferguson's tomb are additional attractions.
Research Design
The mixed research approach was introduced in the middle to late 1980s (Creswell, 2016). Within a single study, this methodology uses both quantitative and qualitative data (Classen et al., 2007). Because one data source might not be sufficient, a secondary method may be required to supplement a primary method, and preliminary results may need further explanation; the approach focuses on gathering, analysing, and combining both kinds of data to provide a better understanding of research problems than either approach alone. Along with investigating factors that operate at a community or public level, it also offers a practical way of examining the ideals and beliefs of a population. Convergent parallel design, explanatory sequential design, and exploratory sequential design are the critical designs in mixed-methods research (Creswell, 2016). The pragmatic approach mixes qualitative and quantitative methodologies, since it identifies mixed methods as the most appropriate way (Akotia et al., 2016).
Convergent Parallel Design
The research employed a convergent parallel design, which involves carrying out both the quantitative and qualitative components at the same time during the research process, assigning equal weight to each strategy, analysing the two separately, and merging the results (Mkuna, 2021). The convergent parallel design provides a comprehensive analysis of the research problem by merging or converging quantitative and qualitative data. This method involves collecting both types of data at the same time, giving each method equal weight, maintaining the independence of the data analyses, combining the results during the overall interpretation, and searching for convergence, divergence, contradictions, or relationships between the two sources of data (Retailleau et al., 2019).
[Figure: Convergent parallel design. Quantitative data collection and analysis and qualitative data collection and analysis proceed in parallel; the quantitative and qualitative results are merged for comparison, and their convergence or divergence is then interpreted or explained.]

Data were collected and analysed in two independent strands of quantitative and qualitative research. For the quantitative strand, non-probability-sampled hotel managers in the Upper West were requested to answer surveys. The qualitative strand assessed the opinions of (key informant) senior staff members from the Upper West Region's hotel business. The study used side-by-side comparison to combine the quantitative and qualitative data and converge or merge the data (Creswell, 2016). The quantitative statistical data and the qualitative findings support or contradict the statistical findings of research questions 1, 2, and 3, which are as follows: (1) What are the hotel industry's SHRM practices? (2) What is the relationship between SHRM and the performance of the hotel industry? (3) How does SHRM affect hotel sector performance? Because researchers present the results in an argumentative mode, first presenting one set of findings and then the other, the method is referred to as "side-by-side" comparison (Salim, 2019).
Sampling and Sample Size Determination
The study had a sample frame of 57 respondents, comprising managers and manageresses in the Upper West Region and institutions with stakes in the sector's management. The total number of managers and manageresses in the region's chosen hotels was fifty (50), and 7 institutions whose job descriptions relate to the hospitality sector were purposively selected for the study.
Data Collection Tools, Source and Analysis
The study gathered information from both primary and secondary sources. An interview guide was used as the data-collection method to acquire primary data from the study's sample population. Simultaneously, a key informant interview guide was used to collect data from stakeholders carefully selected based on their knowledge of the subject under inquiry. The argument for utilising an interview schedule as an instrument is its growing importance compared to other instruments, such as questionnaires, which present numerous obstacles, including retrieval issues. The key informant interviews were utilised to acquire important information about strategic human resource management techniques and their effects on hotel performance. Secondary sources of information were obtained from documentary reviews, periodicals, books, journals, newspapers, theses and dissertations, conference proceedings, reports, and the internet, among others. Information was also sought from governmental and non-governmental groups. As part of the approach, ethical issues, data analysis, and presentation were prioritised.
RESULT 4.1 Socio-demographic Characteristics
Table 3 presents the variables, responses, facility grades, totals, and percentages. The sex distribution shows that male respondents across the various hotel facility grades constitute 74% of the total, while 26% are female. This shows that, even though Ghana's female population is larger than the male population, men dominate managerial roles in hotels of the Upper West Region. In the age-group category, 6% belonged to the 20-29 age group, 54% to 30-39, and 8% to 40-49, while 32% were aged 50 and above. This shows that a considerable number of youth manage these businesses in the Upper West Region. Concerning education, 28% obtained a Higher National Diploma (HND) as their highest qualification, while 36% obtained a first degree as their highest academic qualification. The data further revealed that 22% are master's degree holders, and 12% obtained a PhD as their highest qualification. The smallest category of respondents (2%) have an SHS certificate or below as their highest academic qualification. The data suggest that hotel managers are generally well educated across the Ghana Tourism Authority hotel facility grades in the Upper West Region. Regarding industry experience, 36% indicated they had 1-5 years of experience in their field, while 64% of respondents have 6 or more years of experience. Concerning respondents' positions in the hotels, 68% are managers and 22% are manageresses, while 6% are assistant managers and 4% are founders of the business establishments (hotels). Details are provided in Table 3.
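The percentage distributions reported above follow from a simple calculation over the sample of 50 hotel managers. The sketch below, which is illustrative rather than part of the paper's own analysis, assumes raw age-group counts back-computed from the quoted percentages and shows the consistency check.

```python
# Illustrative sketch (not from the paper's analysis scripts): converting
# respondent counts (n = 50 hotel managers) into the age-group percentages
# quoted in Table 3. The raw counts below are back-computed from the
# percentages in the text and are therefore assumptions.
n = 50
age_counts = {"20-29": 3, "30-39": 27, "40-49": 4, "50 and above": 16}

# Each percentage is the category count divided by the sample size.
percentages = {group: 100 * count / n for group, count in age_counts.items()}
print(percentages)  # {'20-29': 6.0, '30-39': 54.0, '40-49': 8.0, '50 and above': 32.0}

# The counts must exhaust the sample and the percentages must sum to 100.
assert sum(age_counts.values()) == n
assert abs(sum(percentages.values()) - 100) < 1e-9
```

The same check applies to each results table that follows: the reported category counts should sum to 50 and the percentages to 100.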
Strategic Human Resource Management Practices and Performance of Hotels 4.2.1 Strategic Human Resource Management Practice and Productivity
From the management perspective, the data on whether implementing SHRM improved hotel productivity revealed that 28 respondents (56%) strongly agreed that the implementation of SHRM increases productivity in the various hotels, while 19 respondents (38%) agreed that implementing SHRM improved productivity at their hotels. About 3 respondents (6%) strongly disagreed that implementing SHRM improves productivity as a strategic human resource management practice for hotel performance. The data suggest that, from the management responses, implementing SHRM improved productivity to a large extent and is regarded as a critical strategy to derive the best from employees and develop the hotel business in the Upper West Region. Details are provided in Fig. 3.
Strategic Human Resource Management and Retention of Key Staff
From the management perspective, the data on whether SHRM ensures the retention of core talented employees revealed that 25 respondents (50%) strongly agreed, while 22 respondents (44%) agreed that SHRM ensures the retention of core talented employees at their hotels. About 3 respondents (6%) disagreed that SHRM ensures the retention of core talented employees as a strategic human resource management practice for hotel performance. The data suggest that, from the management responses, SHRM ensures the retention of core talented employees to a large extent and is regarded as a critical strategy for retaining core talent and developing the hotel business in the Upper West Region. Details are provided in Fig. 4.
A key informant believed that for a hotel to develop, retention of key staff is essential, and offered the following on its incorporation into strategic human resource management practices:
Training and personal development of employees are essential for them to cope with emerging ways of doing things in the industry.Organisations should also know the personal development plan of their staff to enable them to align them there to get the best from them.Retaining staff with excellent qualities and potential in the industry keeps every big organisation ahead.Taking good care of your best employees can make them loyal to the company (key informant, Wa, 2022).
Fig. 4: Strategic Human Resource Management and Retention of Key Staff
Source: Field Survey (January, 2023)
Employee Recruitment Modalities and Hotel Performance
From the management perspective, the data on criteria for recruitment revealed that about 24 respondents (48%) place maximum interest in the organisation's criteria for the recruitment process in the various hotels, indicating strong agreement. At the same time, 15 respondents (30%) agreed that criteria guide recruitment at their hotels. Four (4) respondents (8%) were doubtful, indicating they were not sure, and three (3) respondents (6%) were not committed to the criteria for recruitment at their hotels, indicating disagreement. A further four respondents (8%) strongly disagreed with paying much attention to criteria for the recruitment of employees. The data suggest a solid commitment to criteria for the recruitment of employees as a strategic human resource management practice in the hotel industry of the Upper West. Details are provided in Table 4. Source: Field Survey (January, 2023)
Customer Provision of Feedback and Hotel Productivity
From the management perspective, the data on the impact of feedback about service and employees revealed that 18 respondents (36%) place maximum interest in such feedback in the various hotels, indicating strong agreement, while 28 respondents (56%) agreed that feedback about service and employees has an impact at their hotels. About 4 respondents (8%) did not agree that feedback about service and employees is a strategic objective for performance and the development of strategic human resource management to improve hotel performance. The data suggest that, from the management responses, feedback about service and employees is, to a large extent, a strategic objective that influences the performance and development of hotels in the Upper West Region. Details are provided in Table 5.
A key informant had this to say on feedback: It is used to improve service delivery and customer care; every organisation must have a feedback system or a way for customers to report their concerns and challenges as suggestions. This brings concerns to the right table and makes addressing them easy. This generally improves how things are from the perspective of the customers. This customer or guest satisfaction is the ultimate assurance that will guarantee they come back to purchase your product; if they feel their concerns are not adhered to, they will not come back, and your business will suffer (Key Informant, 2022). Source: Field Survey (January, 2023)
Employee Evaluation and Performance of Hotels
From the management perspective, the data on whether employee evaluation improves hotel performance revealed that 25 respondents (50%) strongly agreed that evaluation is done to make decisions on job rescheduling in the various hotels, while 15 respondents (30%) agreed that evaluation informs decisions on job rescheduling at their hotels. About 10 respondents (20%) were doubtful, indicating they were not sure. The data suggest that evaluation to inform decisions on job rescheduling is a strategic objective critical to strategic human resource management and the development of the hotel business in the Upper West Region. Details are provided in Table 6. Data were also sought on employee evaluation among hotels; a key informant believed that: Hotels that practice employee evaluation are likely to shape their employees into the best hoteliers in the industry, thereby growing the business and improving performance. Those implementing SHRM achieve the best returns in the competitive industry (Key Informant, 2022).
Another informant added that;
Every employee in any organisation must be evaluated on the job they have taken up.This evaluation must lead or contribute to the improvement of the performance of that organisation.The hotel industry is not different; employees will always have it at the back of their minds that they are evaluated in any other thing they are engaged in at the workplace; hence, they will always put in their best, and in the end, the organisation's performance is improved (Key informant, 2022).Source: Field Survey (January, 2023)
Strategic Human Resource Management and Achievement of Hotels Strategic Objective
From the management perspective, the data on whether training employees aids the hotel in achieving its strategic objective revealed that 30 respondents (60%) strongly agreed that training employees aids the hotel in achieving its strategic objective, while 9 respondents (18%) agreed that training employees aids their hotels in this way. About 4 respondents (8%) were doubtful, indicating they were unsure, and the remaining 7 respondents (14%) disagreed that training employees aids the hotel in achieving its strategic objective and developing its core talents. The data suggest that, from the management responses, training employees helped the hotels achieve their strategic objectives and develop core talents to a significant extent, supporting the development of the hotel business in the Upper West Region. Details are provided in Table 7. Source: Field Survey (January, 2023)
Hotel Review of Previous Human Resource Management and Quality Practices
Data were sought on hotel reviews of previous human resource management practices. The results show that 40 respondents (80%), the majority, answered in the affirmative that management reviews previous human resource practices, while 6 respondents (12%) were undecided and 4 respondents (8%) answered in the negative, implying no practice of reviewing previous human-resource-related issues. The data suggest that hotel management prioritises reviewing previous human resource management practices for better learning outcomes, and that such review is a critical strategic objective, from the management responses, for developing the hotel business in the Upper West Region. Details are provided in Table 8. Source: Field Survey (January, 2023)
Customer Expression of Satisfaction
The data on customer expression of satisfaction, gauged from a management perspective, revealed that about 31 respondents (62%) place maximum interest in customer satisfaction, indicating 'strongly agree' in their responses. A further 17 respondents (34%) were also of the view that customer expression of satisfaction is a significant issue, as they agreed, while the smallest group, 2 respondents (4%), were doubtful and indicated 'not sure'. The data suggest customer expression of satisfaction is important for continued patronage of hotel services in the Upper West Region. Details are provided in Table 9.
Influence of Job Design on Harnessing Employees' Full Potentials and Abilities
The data on whether jobs are designed to use employees' potentials and abilities, gauged from a management perspective, revealed that 17 respondents (34%) indicated 'always', showing that hotels place maximum interest in designing jobs around employee potentials and abilities. A further 16 respondents (32%) indicated 'often', agreeing that jobs are designed to use employee potential and abilities at their hotels, and 6 respondents (12%) indicated they did not know. The remaining 11 respondents indicated only 'sometimes', not agreeing that jobs are designed to make use of employee potentials and abilities as a strategic objective for performance and for the development of strategic human resource management to improve hotel performance. The data suggest jobs are designed to use employee potential and abilities as a strategic objective for developing the hotel business in the Upper West Region; details are provided in Fig. 4. The key informant had this to say on how job description influences performance output:
An excellent job of matching the experiences of your employees to their strong potential and abilities can make them do exceptionally well for the organisation. A good job design can give your business a greater advantage over your competitors by attracting the top talents into your organisation. Putting the right people in the right place lets you know they will deliver good work. Apart from being good, the person must work comfortably in the role (Key Informant, 2022).
Source: Field Survey (January, 2023)
Measure of Mean and Standard Deviation on the Effects of SHRM on Hotel Performance
Statistical analysis of the means and standard deviations regarding the effects of strategic human resource management on hotel performance in the Upper West Region revealed positive means ranging from 3.78 to 4.58, with standard deviations between 0.575 and 1.245. The statement "Customer Expression of Satisfaction" had the highest mean (4.58), followed by two statements, "Implementation of Strategic Human Resource Management Improves Your Hotel Performance" and "Strategic Human Resource Management Ensures Retention of Core Talented Employees" (both 4.38). In contrast, "Job design leads to the harnessing of employees' full potentials and abilities" had the lowest mean (3.78). The data show no significant differences, as the mean and standard deviation values are all greater than zero.
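The mean and standard-deviation summary above is ordinary descriptive statistics over 5-point Likert scores. A minimal sketch using Python's statistics module; the scores below are illustrative placeholders, not the study's raw survey data:

```python
import statistics

# Illustrative 5-point Likert scores (1 = strongly disagree ... 5 = strongly agree).
# These are made-up values, not the survey's actual responses.
item_scores = {
    "Customer expression of satisfaction": [5, 5, 4, 5, 4],
    "Job design harnesses full potential": [4, 3, 5, 2, 4],
}

def summarise(scores):
    """Return (mean, sample standard deviation) for one Likert item."""
    return (round(statistics.mean(scores), 2),
            round(statistics.stdev(scores), 3))

summary = {item: summarise(s) for item, s in item_scores.items()}
```

Ranking items by their mean, as the paragraph above does, is then a simple sort over `summary`.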
Strategic Human Resource Management Practices and Performance of Hotels
The data suggest that implementing SHRM increases productivity, which the management responses regard to a large extent as a critical strategy to derive the best from employees and develop the hotel business in the Upper West Region. The data on whether implementing SHRM improved the productivity of the hotel, gauged from the management perspective, revealed that 28 respondents (56%) strongly agreed, showing that hotels place maximum interest in implementing SHRM to increase productivity, while 19 respondents (38%) agreed that implementation of SHRM improved productivity at their hotels. In total, about 96% of respondents affirmed that implementing SHRM improved hotel productivity in the region. The findings align with Taggar et al. (2008), who highlighted recruitment and selection, training and development, performance appraisal, rewards and compensation, and career development as SHRM dimensions contributing to organisational growth. The findings also confirm the work of Adresi and Darun (2017), who defined SHRM as the development and implementation of human resource programmes to address business difficulties and recurring issues in an organisation, which is the focus of the future-oriented approach known as strategic human resource management. The findings are also in line with Schuler and Jackson (2014), who note that several experts and practitioners have underscored the contribution of SHRM to personnel management, as it provides a long-term competitive advantage. They further align with Ziyae (2016), who believes there is a strong link between SHRM and corporate entrepreneurship, and with the assertion that, through SHRM practices, a hotel's potential to enhance sales, profitability, and market share or market penetration is established (Abbas and Hussien, 2021). The findings are also in line with Ziyae (2016), who noted that SHRM addresses
empowering, administrative, and motivational issues that are critical to organisational development.
The data suggest SHRM ensures the retention of core talented employees; the management responses regard this to a large extent as a critical strategy for retaining core talent to drive the development of the hotel business in the Upper West Region. In all, 94% of respondents indicated that SHRM leads to retaining talented employees. The data on whether SHRM ensures retention of core talented employees, gauged from a management perspective, revealed that 25 respondents (50%) strongly agreed that the hotel places maximum interest in retaining core talent, while 44% of respondents also agreed that SHRM ensures the retention of core talented employees at their hotels. As one study (2012) noted, when an organisation loses employees, its ability to compete in expertise, experience, and corporate memory is inevitably harmed. Retaining highly qualified personnel is one of the factors that encourages employees' internal insights into alternative methods of reaching organisational goals, which increases the productivity of any firm. The findings align with the view that an organisation's vision, mission, goals, and objectives must be aligned perfectly with its human resources (Kuipers & Giurge, 2016). They further confirm that this alignment is necessary to improve the performance of the hotel industry, because human-resource-related issues are common in developing nations, including low income levels, lax presentation standards for valuables, low incentive levels, poor employment scales, a lack of adequate compensation for workers' toil, and subpar supervisor management and employee motivation (Shabbir, 2014).
The data suggest a solid commitment to criteria for the recruitment of employees as a strategic human resource management practice in the hotel industry of the Upper West Region. The data on the management perspective of recruitment criteria revealed that about 24 respondents (48%) place maximum interest in the organisation's criteria for the recruitment process in the various hotels.
While these respondents strongly agreed, 15 respondents (30%) also indicated 'agree' that recruitment criteria are applied at their hotels. The findings confirm the work of Mohamed et al. (2013), who indicated that in SMEs hiring and training strategies had a considerable impact on employee turnover. They also confirm the work of Pfeffer (1994), who believed that a thorough, valid, and complex selection system aids in identifying a suitable applicant with performance potential, which is critical in the recruitment and selection process, and that a strict selection process fosters elitism, raises performance standards, and sends a message about the significance of employees to the organisation. The findings support Terpstra and Rozell (1993), who said selection and recruitment are linked to improved business performance: performance levels may suffer from a mismatch between the person and the job, whereas an intelligent selection procedure can ensure a better fit between a person's abilities and the organisation's requirements. The finding also affirms Abbas and Hussien's (2021) study, which shows that hotel managers practise good SHRM to enhance corporate performance.
The data suggest that feedback about service and employees is, from the management responses, to a large extent a strategic objective that influences the performance and development of hotels in the Upper West Region. The data on the impact of feedback about service and employees, gauged from a management perspective, revealed that 18 respondents (36%) strongly agreed that hotels place maximum interest in such feedback, while 28 respondents (56%) also agreed that feedback about service and employees matters at their hotels. The result is consistent with Gjerald and Furunes (2020).
The data suggest that evaluation to make decisions on job rescheduling is a strategic objective critical to the strategic human resource management and development of the hotel business in the Upper West Region. The data on whether evaluation of employees improves hotel performance, gauged from a management perspective, revealed that 25 respondents (50%) strongly agreed that hotels place maximum interest in evaluation for job-rescheduling decisions, while 15 respondents (30%) agreed that such evaluation takes place at their hotels. The results are consistent with Sujová et al. (2014), who stated that evaluation is essential because it helps managers make wise administrative choices regarding employee promotions, fringe benefits, payoffs, and incentive pay increases. The results support the findings of Hassan et al. (2013), who also noted that one of the factors driving organisational performance is the ongoing growth of staff skills and knowledge. The finding also confirms the work of a (2019) study revealing that SHRM's primary obligations are to ensure that suitable employees have the necessary abilities and experience to accomplish tasks and responsibilities effectively.
The data suggest that by training employees the hotel achieved its strategic objective, from the management responses to a large extent, for the hotel business in the Upper West Region. The data on whether training employees aids the hotel in achieving its strategic objective, gauged from the management perspective, revealed that 30 respondents (60%) strongly agreed that hotels place maximum interest in training, while nine respondents (18%) also agreed that training employees aids the hotel in achieving its strategic objective.
CONCLUSION AND RECOMMENDATION
The study concludes that human resource management practices have a positive effect on hotel performance in terms of productivity, recruitment and training, retention, customer satisfaction, job descriptions, feedback, and performance evaluation. None of the management-practice variables was found to be insignificant or without effect on performance. It is therefore recommended that hotels enhance the implementation of strategic human resource management to guarantee quality performance.
Authors Contribution
Haq Mohammed Issah and Ibrahim Kaleem conceived and designed the study, collected field data, analysed the results, and wrote the manuscript. Tahiru Lukman contributed to the manuscript text, supported data analysis, and proofread the manuscript.
Businesses can affect employee motivation in several different ways, according to Boudreau et al. (1999). Through performance-based compensation, employees can be rewarded for achieving the objectives and goals set forth by the organisation. A sizable body of evidence is presented by Boudreau et al. (1999) that incentive-based compensation affects business performance. Representatives receive a variety of remuneration and benefits as a result of their work (Kee et al., 2015).
(Timo & Davidson, 2005), with considerable sums of money spent each year on hiring and training new workers (Georgenson, 1982). "Strategic managers" work in a strategic unit or level of a hotel. These managers include General Managers and first-line managers who are either Heads of Department or at supervisory level (Guest, 2021). Mensah (2015) identified managers, HR specialists, and chief executive officers (CEOs)/general managers as critical actors in implementing the SHRM plan. Mensah (2015) added that the managers' ability to demonstrate exceptional line-of-sight and a firm understanding of a company's strategic goals depended on the SHRM plan (Lepak & Boswell, 2012).
Fig. 1: Map of the Upper West Region. Source: Ghana Statistical Service (2021). At the lineage and settlement levels, the inhabitants of the Upper West Region are arranged under chiefs. Chieftaincy is a prestigious institution that plays a significant role in community mobilisation. In Sissala, the chiefs are called Koro (e.g., Tumu Koro), while in the other districts they are called Na (e.g., Wa Na). There are 21 traditional paramountcies, including two in Jirapa-Lambussie, three in Lawra, seven in Nadowli, five in Sissala, and four in Wa. The Mole Dagbon and Grusi are two broad generic categories that encompass the majority of the ethnic groups in the area. Dagaare, Sissali, Wale, and Lobi are the primary languages spoken in the area. Except for the Lobi, who follow a matrilineal system of inheritance like the Akan in southern Ghana, inheritance is patrilineal. The extended family system shares resources in polygamous marriages. There is often male dominance and a low status for women in the area. The three main religions are African traditional religion, Christianity, and Islam. Rural areas tend to be more dominated by traditional life and beliefs than urban areas. The Damba festival takes place in Wa, the Dagaabas celebrate Dembenti, Kobine is held in Lawra, and Kakube is held in Nandom. The Wa Na's Palace and Dondoli Sudamic (Larabanga) Mosque, Jirapa Na's Palace, Nandom's Gothic art church made entirely of stone, and Wechiau's hippo sanctuary are just a few of the region's tourist attractions. The Gwollu Slave Defense Wall, slave site caves, and George Ekem Ferguson's tomb are additional attractions.
Fig. 4: Influence of Job Design on Harnessing Employees' Full Potentials and Abilities. The results support Afsal et al.'s (2013) work, which gave an overview of the value of strategic human resource management to organisational performance and suggested that human resource planning is one of the HR competitive strategies that boost organisational productivity. The results support the work of Afsal et al. (2013), who emphasised that businesses should always look for individuals with the ideal combination of knowledge and abilities. The results further support the claim made by Datta et al. (2003) that HRM techniques like employee trust, organisational commitment, job satisfaction, labour absenteeism, and service quality help to improve organisational performance, including turnover rate. They also accord with Mutua et al.'s ( ) argument that adaptability to contingencies requires flexible abilities and behaviours, and that organisational performance is improved by flexibility in adjusting hospitality services to customer expectations. The results support the following claim: innovative work behaviour is defined as employees' development, processing, and application of novel ideas regarding goods, processes, procedures, technologies, or combinations to enhance organisational functioning (Bos-Nehles et al., 2017). The results also support the findings of Bani-Melhem et al. (2018), who reported that customers have become more demanding regarding service quality in the hospitality sector; customers must receive satisfactory services to increase their loyalty and the hotel's reputation. The findings align with the observation that employees in the hospitality sector are encouraged to engage in innovative services (Karatepe et al., 2020).
The findings are consistent with those made by Afsal et al. (2013), who offered an overview of the significance of strategic human resource management to organisational performance and suggested that human resource planning is one of the HR competitive strategies that boost organisational productivity. The finding further aligns with Afsal et al. (2013), who emphasise that businesses should always look for individuals with the ideal combination of knowledge and abilities. The results support Aswathappa's (2013) conclusion that staff training investments have a beneficial impact on organisational performance. The measure operationalises the potential for a hotel to enhance its reputation and increase employee and client happiness because of SHRM activities. The finding reaffirms the work of Wuen et al. (2020), who found that training and development sessions, as well as staff engagement sessions, had a considerably favourable impact on SME performance. The findings also bring to significance the study of Muhammed and Abdullah et al.
(2016), which indicates that supporting the economy's development and advancement needs will be more practicable when the human asset is purposefully built and equipped with the fundamental skills that will help the organisation accomplish its genuine potential. The data suggest customer expression of satisfaction is important for continued patronage of hotel services in the Upper West Region. The data on customer expression of satisfaction, gauged from a management perspective, revealed that about 31 respondents (62%) place maximum interest in customer satisfaction, indicating 'strongly agree' in their responses, while 17 respondents (34%) agreed that customer expression of satisfaction is a significant issue. The findings accord with Datta et al.'s (2003) argument that HRM strategies such as employee trust, organisational commitment, job happiness, labour absenteeism, and service quality enhance organisational performance, including turnover rate. The findings also confirm Zhang and Mao (2012): because of the intense competition among hotels, most hotel managers have recognised the importance of developing a distinct image. In positioning systems, a well-expressed image is critical. Hotels employ environment-based strategies and distinctive HRM techniques to uphold their brand positioning and increase their core competitiveness. Excellent service qualities have been recognised as necessary in defining the core characteristics of a hotel's image by studies on image formation. The data suggest jobs are designed to use employee potential and abilities as a strategic objective and for the development of the hotel business in the Upper West Region. The data on jobs designed to use employee potentials and abilities, gauged from a management perspective, revealed that 17 respondents (34%) indicated that hotels place maximum interest in designing jobs around employee potential and abilities in the
various hotels, as they always indicate in offering their response, while 16 respondents (32%) often observed that jobs are designed to use employee potential and abilities at their hotels. The finding is in line with Hassan et al. (2013), who added that the continuous development of employee skills and knowledge is one of the drivers of organisational performance. The results are consistent with the viewpoint of Gjerald and Furunes (2020), who emphasise the necessity of flexible skills and behaviours for adaptability to contingencies, and of flexibility in tailoring hospitality services to customer expectations, to improve organisational performance.
current employees by offering proper training and development. Investing in problem-solving, cooperation, and interpersonal relations training does pay off at the corporate level, according to research (Barak et al., 1999). Employees' immediate and potential skills, knowledge, and capacities are all improved through training and development (Aswathappa, 2008).
Table 6: Employee Evaluation and Performance of Hotels. Variable: Evaluation of Employees Improves the Performance of Your Hotel.
Email: editor@ijfmr.com | IJFMR230610129 | Volume 5, Issue 6, November-December 2023
"year": 2023,
"sha1": "a1869a0cc54ec034fb37e7f9cb42bd295462855d",
"oa_license": "CCBYSA",
"oa_url": "https://www.ijfmr.com/papers/2023/6/10129.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "20aafd16da450af4dab841f626ce970e9be8e532",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Inhaled dry powder alginate oligosaccharide in cystic fibrosis: a randomised, double-blind, placebo-controlled, crossover phase 2b study
Background: OligoG is a low molecular-weight alginate oligosaccharide that improves the viscoelastic properties of cystic fibrosis (CF) mucus and disrupts biofilms, thereby potentiating the activity of antimicrobial agents. The efficacy of inhaled OligoG was evaluated in adult patients with CF.
Methods: A randomised, double-blind, placebo-controlled multicentre crossover study was used to demonstrate safety and efficacy of inhaled dry powder OligoG. Subjects were randomly allocated to receive OligoG 1050 mg per day (10 capsules three times daily) or matching placebo for 28 days, with 28-day washout periods following each treatment period. The primary end-point was the absolute change in percentage predicted forced expiratory volume in 1 s (FEV1) at the end of the 28-day treatment. The intention-to-treat (ITT) population (n=65) was defined as those randomised to treatment with at least one administration of study medication and a post-dosing evaluation.
Results: In this study, 90 adult subjects were screened and 65 were randomised. A statistically significant improvement in FEV1 was not observed in the ITT population. Adverse events included nasopharyngitis, cough and pulmonary exacerbation. The number and proportion of patients with adverse events and serious adverse events were similar between the OligoG and placebo groups.
Conclusions: Inhalation of dry powder OligoG over 28 days was safe in adult CF subjects. A statistically significant improvement in FEV1 was not reached, and the planned analyses did not indicate a significant treatment benefit with OligoG compared to placebo. Post hoc exploratory analyses showed subgroup results indicating that further studies of OligoG in this patient population are justified.
Introduction
Despite recent advances in the treatment of cystic fibrosis (CF) with drugs that directly improve cystic fibrosis transmembrane conductance regulator (CFTR) function, there is still an unmet medical need for new therapeutics that enhance the clearance of airway secretions, decrease chronic infection and inflammation, and reduce the treatment burden currently experienced by people with CF [1,2]. Existing mucolytic treatments either hydrate CF sputum, e.g. hypertonic saline, or degrade macromolecules that contribute to its abnormal viscoelastic properties, e.g. dornase alfa. OligoG represents a potential alternative way to improve airway clearance in CF, with additional potential to combat bacterial infections. It is a low molecular weight alginate oligosaccharide derived from the stem of the brown seaweed Laminaria hyperborea. OligoG has been shown in previous in vitro studies to alter the viscoelastic properties of mucin/alginate gels, mucin/DNA gels and CF sputum [3-5]. Extensional and shear rheology analyses showed that OligoG treatment resulted in marked, significant reductions in both the elastic response (G′) and the viscous response (G″) of CF sputum compared to controls [4]. In addition, potentiation of rhDNase I (a mucolytic used in the treatment of CF) was observed when it was used in conjunction with OligoG, with statistically significant differences in both G′ and G″ [4]. Fourier-transform infrared spectroscopy analysis of CF sputum confirmed an interaction with mucin glycans that supports the changes in viscoelastic properties of CF sputum in the presence of OligoG [5]. Results of sputum rheology in a clinical phase 2A study of nebulised OligoG (www.clinicaltrialsregister.eu EudraCT number: 2010-023090-19), as well as ex vivo studies [5,6], indicated that OligoG normalises mucus biophysical properties (i.e. viscosity) without degradation or cleavage of polymers within CF mucus.
Furthermore, studies in a CF mouse model demonstrated that treatment with OligoG significantly reduced the accumulation of mucin, normalising the mucosal phenotype and improving long-term survival [7]. These observations and biophysical changes suggest that mucociliary function and mucus clearance could be improved by OligoG, which in turn might lead to better lung function and reduced frequency of pulmonary exacerbations.
In addition to its mucoactive properties, in vitro studies have shown that OligoG is able to potentiate antibiotics against a wide range of multidrug-resistant bacteria [8], probably by disrupting bacterial biofilms [9]. Disruption of biofilm by OligoG was observed in a dose-dependent manner over 24 h, with up to a 2.5-log reduction in Pseudomonas aeruginosa in infected mouse lungs [10]. The same study also demonstrated that the presence of OligoG significantly reduced the minimum biofilm eradication concentration of colistin from 512 µg·mL−1 to 4 µg·mL−1 after 8 h. A later study confirmed these findings while also demonstrating disruption of P. aeruginosa microcolony formation [11], to which increased tolerance to antibiotics has been attributed. OligoG also reduces the expression of both the las and rhl components of P. aeruginosa quorum sensing, a key mechanism influencing virulence factors and bacterial communication in biofilm development [12,13]. In addition to reducing bacterial virulence, OligoG has been shown to inhibit hyphal formation in Candida albicans and to reduce the invasion and growth of fungal pathogens such as C. albicans and Aspergillus spp. [14,15].
Based on these pre-clinical data and the improved sputum rheology observed in the previous phase 2A study (EudraCT number: 2010-023090-19) [16], it was expected that an increased dose of OligoG combined with an improved lung distribution pattern could be achieved with a dry powder formulation [4,17] which would provide a more effective delivery mechanism without imposing a higher treatment burden on the patients. At the time of the study, pre-clinical rodent toxicity studies were limited to 4-week repeat-dose inhalation exposure. These studies confirmed an exemplary safety profile, but restricted the duration of exposure in this clinical trial to a 28-day treatment regimen.
The primary aim of the study was to demonstrate superiority of OligoG compared to placebo, determined by the absolute change in forced expiratory volume in 1 s (FEV1) at the end of 28 days' treatment.
Methods
We conducted a phase 2B randomised, double-blind, placebo-controlled, multicentre crossover study to assess the efficacy and safety of inhaled alginate oligosaccharide (OligoG) in subjects with CF. The study was performed at 18 centres in Denmark, Germany, Norway, Sweden and the UK between December 30, 2014 (first patient first visit) and December 16, 2016 (last patient last visit). The study was conducted in full accordance with the Declaration of Helsinki 1964 (amended by the 64th WMA General Assembly, Fortaleza, Brazil, October 2013) and International Conference on Harmonisation guidelines for good clinical practice, and according to applicable laws and regulations for clinical research in the involved countries. A clinical trial application was submitted to national competent authorities and ethics committees before commencement of the trial, as applicable according to local regulations.
Study participants
Inclusion criteria were adults (aged ⩾18 years) with a confirmed diagnosis of CF, including typical clinical features and a sweat chloride level of ⩾60 mmol·L−1 and/or confirmation of two CF-causing CFTR mutations. Patients must have had evidence of P. aeruginosa lung infection in their medical history (based on positive sputum or cough swabs over the past 12 months; supplementary table S1) and an FEV1 of 40-100% of the predicted normal value according to the Global Lung Function Initiative normative equations at screening [18]. A subset of patients with FEV1 >60% predicted were included in a substudy (at qualified study sites) that utilised the multiple-breath nitrogen washout technique to measure the lung clearance index (LCI). Females of child-bearing potential and sexually active males were required to use contraception, as defined per protocol, throughout the study. Eligible patients were requested, when possible, to remain on their stable therapy, including physiotherapy, for the duration of the study, with no change in the 14 days prior to baseline measurements. Inhaled N-acetylcysteine was not allowed during the entire study period; inhaled mannitol or hypertonic saline were not allowed from 7 days prior to and throughout each treatment period. Concomitant antibiotics were permitted, including cycled tobramycin (TOBI), colistin and/or aztreonam, but patients who had recently initiated cycled therapies must have completed at least two full cycles in the months preceding enrolment. Patients with alternating TOBI and colistin cycles were started on an "off-TOBI" period at day 0 (visit 2); patients with alternating colistin or TOBI and aztreonam cycles were started on an "off-TOBI" or an "off-colistin" period at day 0 (visit 2). Patients on cycled aztreonam were started concurrently with an "on-aztreonam" cycle.
Concomitant use of all other marketed antibiotic agents was permitted, providing patients were willing to remain on the same regimens within the 28 days immediately prior to day 0 (visit 2) and for the entire duration of the study (until the follow-up visit, day 112).
Patients could not have experienced any pulmonary exacerbations within the 28 days prior to screening, nor have had any positive microbiological finding of Burkholderia spp. or a history of allergic bronchopulmonary aspergillosis within 12 months before enrolment. Further exclusion criteria included known lactose intolerance or hypersensitivity to any component of the study medication, any ongoing acute illness or hospitalisation between screening and first study drug administration, and CFTR modulator therapy (ivacaftor was the approved modulator at the time the study was performed). All patients provided informed consent before any study procedures. Full inclusion/exclusion criteria are listed in supplementary table S1.
Study design and assessments
Eligible participants were randomised (1:1) to receive OligoG 1050 mg per day (10 capsules three times daily) and a matching placebo of lactose in randomised treatment sequences. Each treatment period lasted 28 days followed by a 28-day washout period. A final follow-up safety visit was scheduled 4 weeks after the final washout. The design and visit schedule are outlined in figure 1. The study medication was administered using a dry-powder monodose inhaler (MIAT, Milan, Italy). Since the selected OligoG dose was higher than those used previously, the first 12 patients followed a dose-titration scheme for the first 3 days, starting with a total of 10 capsules on day 1 (i.e. 350 mg once daily), followed by a total of 20 capsules on day 2 (i.e. 700 mg twice daily) and as final dosage a total of 30 capsules on day 3 (i.e. 1050 mg three times daily). An interim review of safety and tolerability data during this dose escalation was performed by the data and safety monitoring board (DSMB), which led to approval for ongoing enrolment at this dose without dose titration.
The randomisation list was generated by a statistician not involved in the study operations, using SAS software (version 9.3; SAS Institute, Cary, NC, USA). All study site personnel, as well as personnel involved in the monitoring or management of the study, were blinded to the individual patient treatment assignments. The study sponsor and the DSMB were blinded to individual patient treatment assignments for the duration of the entire study. If required, the DSMB would have been allowed to be unblinded in order to facilitate prompt analysis of any safety/tolerance issues that may have been raised.
Mucociliary and cough clearance
In this study, three sites with MCC capabilities (Copenhagen, Denmark; Southampton and Glasgow, UK) were selected and trained to follow a study-specific standard operating procedure similar to published methods [20,21]. MCC was measured by γ-scintigraphy after the inhalation of technetium-labelled albumin colloid particles. A cobalt-57 transmission scan was used to define lung boundaries and regions of interest. Radioactive fiducial markers placed over the spine were used to assist image alignment. Technetium-99m-labelled albumin colloid was delivered using a controlled breathing pattern (500 mL tidal volume; 500 mL·s −1 inspiratory flow rate; 30 breaths·min −1 ) with a standardised aerosol delivery set-up at each site. Serial 2-min images were acquired dynamically for 60 min, during which the subject was asked to minimise spontaneous coughing, to assess cilia-driven mucus clearance. 60 voluntary huff coughs were performed between 60 and 90 min of the scanning period to assess cough-driven mucus clearance. A static image acquired 24 h after isotope inhalation provided an additional assessment of composite whole-lung clearance. Blinded scans were analysed at a central reading site (University of North Carolina, Chapel Hill, NC, USA).
Microbiology
Standard culture microbiology to quantify viable bacterial load in sputum was performed at a central lab (Synlab, Munich, Germany).
Lung clearance index
LCI was performed at selected sites on subjects with mild to moderately impaired lung function (FEV 1 >60% pred) using the Exhalyser D nitrogen washout (Eco Medics, Duernten, Switzerland) according to standard operating procedures [22]. Traces were assessed for quality and scored by the European Cystic Fibrosis Society Clinical Trial Network LCI core facility based at the Royal Brompton Hospital (London, UK).
Sputum rheology
Rheological analyses were performed on sputum samples over a range of frequencies (0.1-10 Hz) relevant to biological processes in the airway. Analyses involved measuring the frequency-dependent complex shear modulus (G*), and subsequently calculating the loss tangent tan(δ), or phase angle, of sputum samples using the small-amplitude oscillatory shear (SAOS) technique [23]. In SAOS, the deformation of the sample is studied in terms of its rheological response to a series of imposed oscillatory stress waveforms over a range of frequencies. The results of the measurements are reported as the two components of G*: the shear elastic modulus G′ and the (viscous) loss modulus G″.
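The decomposition of G* described above can be written out directly: G′ = |G*|·cos δ and G″ = |G*|·sin δ, with tan δ = G″/G′. A minimal sketch (the numerical values are illustrative, not study measurements):

```python
import math

def decompose_modulus(g_star_mag, delta_deg):
    """Split the complex shear modulus |G*| into elastic (G') and viscous (G'') parts."""
    d = math.radians(delta_deg)
    g_prime = g_star_mag * math.cos(d)   # shear elastic (storage) modulus G'
    g_loss = g_star_mag * math.sin(d)    # viscous (loss) modulus G''
    return g_prime, g_loss

# e.g. |G*| = 5 Pa at a phase angle of 30 degrees
gp, gl = decompose_modulus(5.0, 30.0)
print(gp, gl, gl / gp)  # the last ratio is tan(delta)
```

A larger phase angle (tan δ closer to or above 1) indicates more fluid-like, less elastic sputum, which is the direction of change reported for OligoG in the post hoc rheology analyses.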
Pharmacokinetics
Whole-blood samples were collected from all subjects for the quantification of plasma OligoG concentrations. Plasma concentrations of OligoG (DPn 10) were analysed by a liquid chromatography-tandem mass spectrometry method (Vitas, Oslo, Norway), using OligoG-C13 as an internal standard. OligoG was extracted from plasma using protein precipitation following enzymatic digestion. Analysis was performed by liquid chromatography (Agilent 1200 LC Systems; Santa Clara, CA, USA) with tandem mass spectrometric detection (Agilent 6460 Triple Quad LC-MS/MS detector).
Statistical analysis
For determination of sample size, it was assumed that the FEV 1 change from baseline to end of treatment would have a mean±SD value of 0.1±0.2 L during OligoG treatment and 0.0±0.2 L during placebo treatment. In addition, it was assumed that the correlation between the changes from baseline in the two treatment periods would be 0. With 66 subjects enrolled (33 in each treatment sequence), a two-sample t-test would have 80% power to detect a difference in FEV 1 between the treatments using a 5% level of significance. The FEV 1 values at the end of each treatment period were examined using a mixed model. It was expected that the model would have similar, but slightly higher power to detect a treatment difference than the two-sample t-test.
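The stated power assumptions can be checked numerically. The following sketch (not the study's analysis code) computes the power of a t-test on within-subject differences under the protocol assumptions: mean FEV1 change 0.1 L versus 0.0 L, SD 0.2 L per period, zero correlation between periods, 66 subjects, two-sided α=0.05:

```python
import numpy as np
from scipy import stats

def paired_t_power(mean_diff, sd_diff, n, alpha=0.05):
    """Power of a two-sided one-sample t-test on n paired differences."""
    df = n - 1
    ncp = mean_diff / (sd_diff / np.sqrt(n))        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Probability the noncentral t statistic falls outside +/- t_crit
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# With zero correlation, the within-subject difference of changes has
# SD = sqrt(0.2**2 + 0.2**2)
sd_diff = np.sqrt(0.2**2 + 0.2**2)
power = paired_t_power(0.1, sd_diff, 66)
print(f"power = {power:.2f}")
```

Under these assumptions the computed power is close to the 80% stated in the protocol.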
For the FEV 1 (the primary variable of the study) absolute values at the end of each treatment period, a linear model was made using SAS PROC MIXED, with treatment, treatment sequence and treatment period as fixed effects, patient as random effect and the baseline value in each treatment period as a covariate. Patients with no measurements post-baseline in one or both treatment periods were excluded from the model. Week 0 was defined as baseline for period 1; week 8 was defined as baseline for period 2. If the week 0 value was missing, the screening visit value was used as baseline, if available. Country and country×treatment effects were also included in the model if statistically significant on a 10% level. If model assumptions were not met, transformations of data or alternative analyses were attempted. If a significant sequence effect was found, this was interpreted as an indication of a carry-over effect and a separate analysis was performed using only the data from the first treatment period. Additionally, models were made for the percentage of predicted values. Effects were estimated with 95% confidence intervals; p-values <0.05 were considered statistically significant.
Subjects were included in the intention-to-treat population (ITT) if they were randomised to treatment, received at least one administration of study medication and had at least one post-dosing evaluation. The primary population for all analyses (including main safety analyses) is the ITT population. A total of 65 patients were included in the ITT population, while 25 out of the 90 patients enrolled were not randomised because they did not fulfil all inclusion criteria.
Initial analyses included in the clinical trial analysis plan on the ITT and the per-protocol populations suggested that the number of OligoG doses (as observed through patient compliance) influenced the efficacy outcome. Therefore, further unplanned post hoc analyses were performed in a modified (m)ITT population, which included subjects who were compliant to the study protocol without any major violations except for dose compliance; any level of treatment compliance was accepted in the ITT population. Study drug compliance was defined as the number of study medication capsules used divided by the scheduled number of study medication capsules. The per-protocol population consisted of 47 patients. Of the 65 patients in the ITT population, 18 were excluded from the per-protocol population due to low study drug compliance and/or missing visits.
For the spirometry parameters VC, FVC, FEF 25-75% , FEV 1 /FVC and PEF, analyses were performed in the same manner as for FEV 1 . Models were made in which the end-of-treatment-period values were replaced by the week-2 and week-10 values, as defined in the trial protocol. For MCC, the average lung clearance through 60 min was analysed, similar to the primary end-point. Central and peripheral lung clearance through 60 min, and cough clearance at 60-90 min (whole lung, central lung and peripheral lung) were similarly analysed.
Supplementary analyses were performed on end-of-treatment data where no values were carried forward, i.e. for every analysis parameter, all patients with missing data at either week 0, 4, 8 or 12 were excluded. For MCC, retention/clearance and deposition parameters and time intervals not mentioned above were summarised with descriptive statistics. Lung clearance index (LCI), quality of life by CFQ-R, sputum rheology and microbiological measurements were summarised with descriptive statistics.
Retrospective analysis of pulmonary exacerbations
Patients who had completed the study per protocol and were included at sites in the UK or Germany were eligible for an additional retrospective data collection focusing on the number of pulmonary exacerbations. This retrospective assessment of pulmonary exacerbation frequencies, hospitalisations and antibiotic treatments was performed in subjects during the 6 months before and after study participation. The definition of pulmonary exacerbation used in the study was based upon criteria on the CF Foundation Therapeutics Development Network Coordinating Center (Seattle Children's Hospital, Seattle, WA, USA) [24].
For evaluation of a potential long-term benefit of OligoG treatment, the proportion ( p1) of patients with pulmonary exacerbations within 6 months after the end of treatment were compared to the proportion ( p0) within 6 months prior to study participation. The null hypothesis, p0=p1, was assessed with McNemar's test both for the two treatment sequences combined and for each treatment sequence separately. For each patient, the difference between the number of exacerbations in the 6 months after the end of randomised treatment and the number of exacerbations in the 6 months prior to study participation was calculated. The Wilcoxon signed-rank test was used to test whether the number of pulmonary exacerbations per patient before and after treatment had the same distribution. Furthermore, to evaluate the severity of pulmonary exacerbations, the proportion of patients with hospitalisations due to pulmonary exacerbations, as well as the proportion of patients that had received antibiotic treatments due to pulmonary exacerbations were compared within 6 months before and after OligoG treatment. These were compared in the same manner as the proportions of patients with exacerbations. The number of hospitalisations per patient and the number of antibiotic treatments per patient were analysed in the same manner as the number of exacerbations per patient.
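The paired tests described above can be sketched with scipy; the counts below are invented for illustration, not the study data. The exact McNemar test on before/after exacerbation status reduces to a binomial test on the discordant pairs, and `scipy.stats.wilcoxon` implements the signed-rank test used for per-patient exacerbation counts:

```python
from scipy import stats

# Discordant pairs: b = exacerbation before only, c = exacerbation after only
b, c = 12, 5
mcnemar_p = stats.binomtest(min(b, c), n=b + c, p=0.5).pvalue
print(f"exact McNemar p = {mcnemar_p:.3f}")

# Per-patient difference: exacerbations after minus before (zeros are ties)
diffs = [-1, 0, -2, 1, 0, -1, -1, 0, -3, 1, -1, 0]
res = stats.wilcoxon([d for d in diffs if d != 0])
print(f"Wilcoxon signed-rank p = {res.pvalue:.3f}")
```

Both tests condition on within-patient pairing, which is why only discordant pairs (McNemar) and non-zero differences (Wilcoxon) contribute to the test statistics.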
Results
A total of 90 patients were screened for the study. 65 patients were randomised to receive treatment (figure 2). 32 patients were randomised to receive OligoG in the first treatment period, while 33 patients received placebo first. Patient demographics are summarised in table 1. The baseline demographic characteristics were similar in the two treatment sequences.
Lung function
The primary outcome parameter was the absolute change in FEV 1.

Mucociliary and cough clearance

A total of 14 patients from three sites underwent MCC assessments. However, only 12 patients completed all planned MCC assessments. MCC data obtained at the two pre-treatment baseline visits were analysed to characterise the reproducibility of key MCC outcomes in this multisite study. Correlation between the paired baseline MCC rates (Ave90Clr) was high (R 2 =0.73; p<0.001), as was that between the central:peripheral (C/P) particle deposition ratios (R 2 =0.73; p<0.001), suggesting that these data are robust and did not suffer from carry-over effects. While no change in the Ave90Clr or cough clearance was observed after OligoG treatment, a trend towards a more peripheral deposition pattern after OligoG when compared to the preceding baseline was observed (C/P 2.46±1.15 versus 1.98±0.87; p=0.09, Wilcoxon test) that was not seen during placebo treatment (figure 4 and supplementary table S3).
Lung clearance index
A total of 11 patients with FEV 1 between 60% and 100% at screening also had LCI assessments at seven study sites. Due to tests not being performed at certain visits (6%) or failing to meet quality criteria (14% of test occasions), only three patients had LCI values available from all six visits. Subject treatment differences are summarised for the ITT and per-protocol populations, and no statistically significant treatment differences were seen (supplementary table S4). Due to the low numbers of patients who underwent LCI testing, no conclusions as to effect of treatment on LCI could be drawn.
Quality of life
No differences were seen between OligoG and placebo for eight out of the nine quality-of-life domains ( physical, role, emotion, social, body image, eating, treatment burden, health perceptions) and two of the symptom scales (weight, digestion). For the respiratory scale and vitality life domain, baseline-adjusted mean and median scores were higher with placebo than with OligoG after both 2 and 4 weeks of treatment, suggesting that study subjects did not consider OligoG treatment beneficial for these quality-of-life parameters (supplementary table S5).
Sputum rheology
For sputum rheology analysis in the ITT population, no meaningful differences in rheological properties of the sputum were observed between the two treatment groups at 0.1, 1 and 10 Hz (supplementary table S6).
Microbiology
The low sample size available for sputum culture analysis (n=6-15, dependent on treatment and visit) was the result of a delayed start in sputum collection. No effect of OligoG on bacterial density and type (mucoid P. aeruginosa, non-mucoid P. aeruginosa and Staphylococcus aureus) in sputum cultures was observed. There was a reduction in mean CFUs for total bacteria after OligoG treatment (figure 5), although this was not reflected in the mean P. aeruginosa counts compared to the lactose placebo ( figure 6). Furthermore, the placebo showed almost 1 log reduction in P. aeruginosa CFUs ( figure 6).
Safety
The numbers and proportions of patients with adverse events during the study were similar during OligoG and placebo treatment; 52 (83%) patients experienced one or more adverse events during OligoG treatment compared to 51 (84%) patients during placebo treatment. Most of the reported adverse events were probably related to the underlying CF disease. Seven (11%) patients experienced one or more serious adverse events (SAEs) during OligoG treatment, while eight (13%) patients had one or more SAEs during placebo treatment. The most frequent SAE was pulmonary exacerbations. Most events were of grade 1 (mild) severity in all treatment sequences and treatment periods. There were two grade 4 events in the study: one appendicitis and one elevated potassium. Both events occurred during the first washout period in the placebo-OligoG treatment sequence. No large treatment differences were seen for the safety and tolerability variables between OligoG and placebo, although dyspnoea was identified in 12 events for OligoG compared to four for placebo; these were of low-grade severity. The most common adverse events were nasopharyngitis, cough and pulmonary exacerbations of infective origin (table 2).
Pharmacokinetics
Plasma concentrations of OligoG were in the range of 0.5-8.98 µg·mL −1 (table 3) and did not show any signs of systemic accumulation: mean plasma concentration at 14, 28 and 56 days (28 days after completing treatment) was 1.57, 1.32 and 0.00 µg·mL −1 , respectively. Furthermore, there was no detectable OligoG in plasma at day 56 in any of the patients who received OligoG in period 1 (day 56 levels were not measured in period 2).
Patient compliance
Study drug compliance for the ITT population in terms of proportion of scheduled study drug capsules used per treatment period (i.e. number of study medication capsules used divided by the scheduled number of study medication capsules) is summarised in table 4; the majority of patients had a treatment compliance of >80%. The compliance for patients randomised to placebo in the first treatment period was higher than the compliance for patients randomised to placebo in the second treatment period, which was similar to the compliance seen in the two treatment periods using OligoG. All patients with treatment compliance <80% were excluded from the per-protocol population, but included in the mITT population (see post hoc analyses).
Retrospective study on pulmonary exacerbations
In the retrospective pulmonary exacerbations study, 17 SAEs were reported in the 6 months that followed the study treatment; all of them unrelated to, or with an unlikely relationship to the study treatment. No significant differences were found in numbers of patients with hospitalisations or antibiotic treatments due to pulmonary exacerbations preversus post-study. Thirty-four pulmonary exacerbations were reported in the pre-treatment period versus 24 in the post-treatment period of OligoG. This represents a mean reduction of 0.25 pulmonary exacerbations per patient, or a 29% reduction in pulmonary exacerbations ( p=0.06).
Post hoc analyses
Lung function

Initial analyses included in the clinical trial analysis plan on the ITT and the per-protocol populations suggested that the dosing of OligoG influenced the efficacy outcome: 45% of trial participants had taken fewer than the prescribed number of capsules per treatment period. Therefore, further unplanned post hoc analyses were performed in a mITT population (n=54, based on having had no protocol violations except dose compliance). Positive trends were shown in subgroup analyses for those with the following clinical features: 1) taking cyclic inhaled tobramycin; 2) using continuous inhaled antibiotics and <100% OligoG compliance; and 3) younger age (⩽25 years). Data at the end of treatment (day 28) and after 4 weeks of washout (day 56) were assessed to describe the sustained effect on FEV 1. OligoG treatment in the setting of concomitant cyclic inhaled tobramycin (synchronised in the off-cycle) was associated with a relative improvement in FEV 1. There was a trend towards improvement in FEV 1, which did not reach statistical significance, after OligoG treatment in those patients who took less than the full dose of the drug (<100% compliance), with 5.5% improvement in FEV 1 at day 28 ( p=0.07) and 8.5% improvement in FEV 1 at day 56 ( p=0.07). OligoG treatment in younger patients (⩽25 years) did not show significant effects, although a 4.4% increase in FEV 1 was noted at 28 days, which increased to 9.5% at 56 days.
In the ITT population there was a pronounced drop in FEV 1 at day 14, followed by recovery to baseline by day 28 ( figure 3). This was not observed in those patients who were subsequently identified to have taken less than the full dose of OligoG (<100% compliance), as determined by the proportion of study drug capsules used per treatment period (figure 7).
Rheology
Rheology measurements of viscosity, elasticity and phase angle in expectorated sputum did not show a statistically significant improvement in the ITT population (supplementary table S6). Additional post hoc analyses were performed on the ITT population for sputum rheology at 0.1 and 1 Hz, which were not planned in the study protocol but added during the reporting phase of the study. Phase angle values at 0.1 Hz and 1 Hz for the ITT population, excluding patients with one missing value per treatment period, are shown in table 7. These results show a continuous trend during OligoG treatment of increased phase angle scores, indicative of more fluid-like sputum, from day 0 to day 56. The placebo scores were more variable during the treatment period, but generally similar scores were found at day 0 and day 56, suggesting no overall effect of the placebo on sputum rheology. There was a statistically significant improvement observed at 0.1 Hz ( p=0.03) in the modified ITT population at 14 days of treatment. A similar finding was observed at 0.1 Hz for the modified ITT population on tobramycin ( p=0.05), which may reflect the combined effects of OligoG on rheology, biofilm disruption and antibiotic potentiation. This effect of OligoG treatment was even more pronounced in patients aged <25 years, who showed a marked improvement in the phase angle at 0.1 Hz ( p=0.002) at the end of treatment (28 days), indicating that sputum viscosity was reduced and the sputum was more fluid after treatment with OligoG.
Discussion
Statistically significant improvement in FEV 1 was not observed in the ITT population and this study did not meet the primary end-point. This study demonstrated that repeated inhalation of OligoG dry powder was generally safe in adult CF patients.
The relative change at day 14, followed by recovery to baseline by day 28, may be explained by the known mechanism of action of OligoG in reducing elevated mucus viscosity, especially since the drop in FEV 1 at week 2 was only a transitory phenomenon. This is thought to be related to the rapid increase in the mobility of mucus due to the calcium-binding effect of OligoG, as previously described [6], which would in turn trigger the release and possible swelling of stagnant mucus plugs. This could initially reduce pulmonary function parameters before the mucus was expectorated by the patients, and probably explains the higher frequency of dyspnoea observed with OligoG compared to placebo treatment. This points towards the need to revise the dosing regimen, a notion supported by the combined subgroup analyses, which suggest that a lower initial dose of OligoG might be less likely to lead to an initial drop in FEV 1 and therefore be more beneficial.
Although the viscosity, elasticity and phase angle in expectorated sputum did not show a statistically significant improvement in the ITT population, the improvement in the modified ITT population at 14 days of treatment and in younger patients (<25 years) at the end of treatment (28 days) supports what has previously been reported for the mechanism of action of OligoG [3][4][5][6][7].
A more peripheral deposition of radiolabelled particles following OligoG inhalation suggests that the smaller airways were more open after treatment. However, this effect was not reflected in a more effective clearance. This could be due to the fact that the MCC assay is best suited to capture effects on larger-airway cilia-driven clearance. An analysis of peripheral lung clearance, which is less confounded by deposition changes, showed a trend towards faster peripheral lung clearance with OligoG.
There was no difference in rates of pulmonary exacerbations between OligoG and placebo during this relatively short clinical trial. However, in the retrospective study, the trend towards fewer pulmonary exacerbations in the post-study period is an interesting observation, although it is far from clear whether this reduction was the direct result of OligoG or other treatment changes following the trial.
Post hoc analyses of defined subgroups were sufficiently intriguing to warrant further investigation of patients on inhaled anti-P. aeruginosa antibiotics combined with lower doses of OligoG. Nevertheless, it is important to note that recent clinical studies [32] have highlighted caution in placing too much emphasis on post hoc subgroup analyses: while the data presented in the current study might support the known mechanism of action of OligoG, further prospective clinical studies are clearly required to substantiate the potential for OligoG in CF.
One of the main limitations of the study was the placebo formulation. In order to minimise the risk of unblinding, the lactose placebo was administered in the same dose (1050 mg per day) and comparable particle size as the dry powder formulation of the active drug. Subsequent review suggests that this high dose of lactose may not have been the best choice of placebo to evaluate microbiological effects of OligoG, or the subsequent impact on lung function parameters. Indeed, the negative results from culture microbiology analysis were unexpected given the breadth of in vitro and in vivo data already highlighting the antimicrobial properties of OligoG [11][12][13][14]25]. Further investigation identified independent evidence that lactose, present at high concentrations in the placebo used in the trial, inhibits the growth of P. aeruginosa and the adhesion of other respiratory pathogens (e.g. Burkholderia spp.) to lung epithelial cells and potentiates the activity of antibiotics [26][27][28][29]. Additional studies have also identified a role for metabolites such as lactose in enhancing bacterial susceptibility to antibiotics such as tobramycin [30,31].
An additional limitation of the study was the selected dosing regimen of 10 capsules three times daily. This dose was based on conclusions from the previous phase 2A study (EudraCT Number: 2010-023090-19) [16], that indicated the dose was in the lower range of what would be expected to demonstrate efficacy: the dose was sufficient to affect the rheology of expectorated sputum, although not sufficient to result in significant changes in FEV 1 . Although providing better compliance and reduced treatment burden compared to the nebulised formulation, the inhaled dry powder dose selected for the current study clearly proved to be a treatment burden for some patients, as indicated by the number of patients that were not compliant with taking the full number of capsules/doses. Considering these observations, future studies are required to re-evaluate the impact of dose, capsule size and capsule loading.
The results did not reveal any safety concerns for adult CF patients following administration of 1050 mg OligoG per day by dry powder inhalation over a 28-day period and confirm the results of early-phase studies showing that the active drug has a favourable safety profile. Further phase 2B clinical studies in CF using lower doses of OligoG DPI are being planned (under the framework of HORIZON2020).
Clinical Profile, Outcome, and Prognostic Factors of Cortical Venous Thrombosis in a Tertiary Care Hospital, India
ABSTRACT Background: Cortical venous thrombosis (CVT) is a rare condition compared to arterial stroke and often occurs in young individuals presenting with varying clinical features. Aim: The aim was to study the clinical profile and assess the outcome and prognostic factors of CVT patients. Methodology: A case series study was done over 2 years. CVT cases confirmed by magnetic resonance imaging were included in this study. Clinical presentation and risk factors were noted; patients were then assessed at the time of discharge for their physical and mental status. The modified Rankin scale was used to group patients: scores 0–2 were considered a good outcome and 3–6 a poor outcome, respectively. Data were analyzed using the Chi-square test to assess the association between prognostic factors and outcome. Results: Out of 81 patients, more than half were in the age group of <35 years (55.6%), and the majority were females (79%). The most common symptom was headache (82.7%) and the least common was fever (14.8%). The superior sagittal sinus was most commonly involved (74.1%). Nearly half of the patients were in the puerperal period (44.1%). Patients aged more than 35 years (odds ratio [OR]: 9.1, confidence interval [CI]: 4.463–19.750), presenting with symptoms such as fever (OR: 3.442, CI: 1.088–12.140) and impaired consciousness (OR: 5.467, CI: 2.064–15.330), and having clinical signs such as coma (OR: 23.99, CI: 3.844–544.1), papilledema (OR: 25.15, CI: 7.565–101.5) and focal neurological deficit (OR: 9.366, CI: 2.693–3.41), had a statistically significantly poor outcome. Conclusion: Females formed the major bulk of patients. A higher number of patients showed poor outcome. The study showed an association of age, headache, impaired consciousness, coma, papilledema and neurological deficit with poor outcome.
Introduction
Cortical venous thrombosis (CVT) is any thrombosis that occurs in intracranial veins or sinuses. [1] Cerebral vein and sinus thrombosis is rare compared to arterial stroke and often occurs in young individuals. [2] The worldwide estimated annual incidence is 3-4 cases per 1 million population, and about 75% of adult patients are women. [3] With reference to India, there are no definitive data, as there are no multicenter hospital-based studies.
The spectrum of clinical manifestation varies; headache is the most common symptom. [4] Seizures occur in about 7%-15% of cases during presentation and in about 40% during the course of illness, and they may be focal or generalized in almost equal proportion. [5] Patients may also present with long-lasting focal neurological deficits or transient focal neurological deficits mimicking transient ischemic attacks. [6] Other common symptom is altered sensorium.
Etiological factors vary depending on geographical area and also according to the anatomical site involved. Primary or idiopathic CVT is mainly caused by a hypercoagulable state. We started our study with the following objectives: to study the clinical profile and to assess the outcome and prognostic factors of CVT patients.
Methodology
A case series study was done over 2 years in a tertiary care hospital in North India. During the study period, patients aged more than 18 years with CVT confirmed by clinical features and investigations such as contrast-enhanced CT scan of the brain or MRI with MRV were considered. Cases where the CT scan of the brain showed hemorrhage or infarct in the distribution of an arterial territory, neoplastic or granulomatous disease of the brain, and cases of eclampsia or preeclampsia were excluded from the study.
Clinical examination was done to note focal neurological deficits and mental status. A cerebrospinal fluid analysis was considered in cases with no contraindications to lumbar puncture. All patients were assessed at the time of discharge for physical and mental status to look for outcome.
Data were collected using a pretested semi-structured questionnaire. Demographic data, clinical features, and risk factors were noted. Data were entered in an Excel sheet, and appropriate descriptive statistics were used to describe demographic and clinical features. The Chi-square test was applied to find the association between prognostic factors and outcome. The modified Rankin scale was used to assess outcome: scores between 0 and 2 were considered a good outcome and 3-6 a poor outcome, respectively. To analyze the data, SPSS version 20 (IBM, New York, USA) was used. P < 0.05 was taken as significant and P < 0.01 as highly significant.
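The outcome analysis described above can be sketched as follows; the 2×2 counts (age group versus good/poor outcome by modified Rankin scale) are hypothetical, not the study data:

```python
import numpy as np
from scipy import stats

#                 poor  good   (mRS 3-6 vs 0-2)
table = np.array([[20,  16],   # age > 35 years
                  [10,  35]])  # age <= 35 years

# Chi-square test of association (Yates continuity correction by default)
chi2, p, dof, expected = stats.chi2_contingency(table)

# Odds ratio with a 95% Wald confidence interval on the log scale
a, b_, c_, d = table.ravel().astype(float)
or_ = (a * d) / (b_ * c_)
se_log = np.sqrt(1/a + 1/b_ + 1/c_ + 1/d)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 with a CI excluding 1, together with p < 0.05, is the pattern the study reports for age >35 years, fever, impaired consciousness, coma, papilledema and focal deficit.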
Results
A total of 81 patients formed our study population; more than half were in the age group of <35 years (55.6%), and the majority were females (79%). The most common presenting symptoms were headache (82.7%), seizure (63%) and focal neurological deficit (63%), and the least common was fever (14.8%) [ Table 1].
According to radiological findings, the most commonly involved sinus was the superior sagittal sinus (74.1%), followed by the right transverse sinus (42%).
Discussion
A total of 81 patients were included during study period with age range from 18 to 70 years (mean ± standard deviation: 30.2 ± 9.15).
More than half of our patients were in the age group <35 years (55.6%). This is in conformity with most of the earlier studies by Ameri and Bousser [9] (61%), Daif et al., [10] Deschiens et al., [11] Nagaraja et al., [12] and Narayan et al. [13] The present study showed a female preponderance, with a male to female ratio of 1:3.76, because of the influence of pregnancy, puerperium, and oral contraceptive pills, whereas the ratio was 1:1.29 in the study by Ameri and Bousser [9] and 1:1 in Daif et al. [10] Studies done in India showed higher proportions of female patients, such as 1:1.38 by Patil et al. [14] and 1:1.4 by Mehta et al. [15] CVT presents with a wide spectrum of symptoms and signs.
Patients presented with headache, focal deficit, seizure, papilledema, impaired consciousness, and coma, in order of decreasing frequency.
Around 63% of patients presented with focal deficits such as hemiparesis, monoparesis, and paraparesis, with cranial nerve palsy (18.5% with 7th nerve, 4.9% with 3rd nerve, and 2.5% with 4th nerve palsy). Similar results were observed by Stolz et al. [19] (56.9%), whereas in the study done by Halesh et al., [18] only 48% presented with focal deficits; this may be due to the difference in the number of patients.
Seizure was seen in 62% of cases. It is a common symptom of CVT, next to headache, and may be generalized, focal, or focal with secondary generalization. In Einhäupl et al. [16] (48%) and Barinagarrementeria et al. (60%), [20] patients presented with seizures at rates comparable to our study. Papilledema was seen in 37% of patients; reported rates vary from 27% in Einhäupl et al. [16] to 45% in Bousser et al. [2] In the present study, 54% of patients presented with altered sensorium, out of which 38% had a Glasgow coma scale (GCS) between 9 and 13 and 16% had GCS <9, i.e., were comatose. In the study conducted by Barinagarrementeria et al., [20] altered sensorium was present in 63%, which was comparable to our study. Another symptom was fever, seen in 16% of cases, which may be secondary to infections such as otitis media, meningitis, and a septic thrombotic process.
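The GCS bands used above can be expressed as a simple classifier. The cutoffs below are the conventional ones implied by the text (<9 comatose, 9-13 intermediate) and should be checked against the study's own definitions:

```python
# Sketch of the Glasgow coma scale banding implied by the reporting of
# altered sensorium. Cutoffs are the conventional ones (assumed here,
# not taken verbatim from the study).

def gcs_category(score):
    """Map a GCS score (3-15) to the banding used in the text."""
    if not 3 <= score <= 15:
        raise ValueError("GCS is defined on 3-15")
    if score < 9:
        return "comatose"             # GCS < 9
    if score <= 13:
        return "moderately impaired"  # the GCS 9-13 band in the text
    return "mild/normal"

print(gcs_category(7), gcs_category(11), gcs_category(15))
```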
Female patients contributed the majority; among females, more than half were in the puerperal period, i.e., 36 (56.25%), and only 6 (9.37%) were on oral contraceptive pills. A very similar pattern was observed by Ferro et al., [21] De Bruijn et al., [5] and Deschiens et al. [11] The reasons may be the postpartum hypercoagulable state combined with culturally practiced water restriction and a high-fat diet.
Infection was found in 14.8% of patients, all of whom had meningitis secondary to otitis media, which was comparable to Bousser et al. [2] Other risk factors for CVT, in decreasing order of frequency, were pregnancy (13.6%), hyperhomocysteinemia (3.7%), dehydration (2.5%), and neurosarcoidosis (1.2%). In the remaining 12.4% of cases, a cause could not be identified, and these were labeled idiopathic.
The superior sagittal sinus was the most commonly involved sinus, in 74.1% of the cases, either alone or with other sinuses. In the studies by Ameri and Bousser [9] (72%) and Daif et al. [10] (85%), the superior sagittal sinus was involved at rates comparable to our study. Other sinuses involved, in decreasing order of frequency, were the right transverse sinus (42%), left transverse sinus (38.3%), sigmoid sinus (34.6%), and straight sinus (22.2%).
At the time of discharge, 44.3% of patients had a poor outcome, with 27.3% of patients having dependent morbidity and 16% mortality. Patients aged more than 35 years had a poorer outcome than the younger age group (OR: 9.1, CI: 4.463-19.750). Similar results were observed in studies by Stolz et al., [19] Ferro et al., [21] and De Bruijn et al. [5] The present study showed that males had a poorer outcome than females, but this was not statistically significant (OR: 2.822, CI: 0.927-9.219). Many previous studies, such as Halesh et al. [18] and Ferro et al., [21] also showed a poorer outcome for males; however, there was not much difference in outcome in the study done by Narayan et al. [13] A history of fever (OR: 3.442, CI: 1.088-12.140) and impaired consciousness (OR: 5.467, CI: 2.064-15.330) were associated with poor outcome.
The study showed that coma (OR: 23.99, CI: 3.844-544.1), papilledema (OR: 25.15, CI: 7.565-101.5), and presentation with a focal neurological deficit (OR: 9.366, CI: 2.693-3.41) were also associated with poor outcome, and these associations were statistically significant. Similar results were observed by Ferro et al. [21] with respect to coma and hemiparesis or any deficit at the time of diagnosis.
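The odds ratios and confidence intervals quoted above follow the standard log-OR method; the sketch below illustrates the calculation on hypothetical counts, not the study's tables:

```python
# Illustrative sketch of how an odds ratio and its 95% confidence
# interval are derived from a 2x2 table via the log-OR method.
# The counts below are hypothetical, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for the table [[a, b], [c, d]] (all cells > 0)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: exposed poor/good outcome = 30/10, unexposed = 15/26.
or_, lo, hi = odds_ratio_ci(30, 10, 15, 26)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# A confidence interval excluding 1.0 indicates significance at P < 0.05.
```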
Patients who had seizure also tended toward a poor outcome, but this was not statistically significant (OR: 2.061, CI: 0.809-5.441); in the study by Stolz et al., [19] who studied 79 patients, more than two seizures despite antiepileptic treatment was associated with poor outcome. Patients with a history of headache had a comparatively better outcome, but this was also not statistically significant (OR: 1.542, CI: 0.465-5.566).
Involvement of any particular sinus was not associated with poor outcome; involvement of only the straight sinus showed a trend toward poor outcome, but this was not statistically significant (OR: 1.766, CI: 0.6046-5.283), and similar findings were seen in De Bruijn et al. [5] Compared with earlier studies, the present study shows higher rates of dependency and death. This discrepancy between studies from Western countries and ours may be attributed to the fact that our institution is a referral center receiving patients with comparatively poor prognosis. In addition, patients in this setting often seek alternative interventions, leading to delay in medical treatment.
Financial support and sponsorship
Nil.
Entomopathogenic fungi (EPF) and entomopathogenic nematodes (EPNs) are globally distributed soil organisms capable of infecting and killing a vast variety of insects. Therefore, these organisms are frequently used as biocontrol agents in insect pest management. Both EPF and EPNs share the soil environment and thus can infest and compete for the same insect host; however, natural co-infections are rarely found due to the cryptic soil environment. Our current knowledge on their interactions within hosts mainly comes from laboratory experiments. Because of the recent trend of combining biocontrol agents to increase their efficacy, many studies have focused on the co-application of different species of EPF and EPNs against various insect pests, with variable outcomes ranging from synergistic and additive effects to antagonism. In addition, the effect on the development and reproduction of each pathogen varies from normal reproduction to exclusion, and generally the outcomes of the interactions depend on pathogen and host species, pathogen dose, and the timing of infection. The present review aims to summarize the current knowledge on the interactions of entomopathogenic fungi and nematodes within an insect host and to estimate the possible effects of these interactions on natural pathogen populations and on their use in biocontrol.
Introduction
Wetlands form a significant proportion of North America's ecosystems [1] and are defined as sites adapted to a wet environment, having a water table situated near or above surface level long term as a result of poor drainage of the soil [2]. This is indicated by the vegetation growth and other biological activities taking place at these locations [2]. They fulfill many systemic roles which provide stability to the surrounding locale and are an invaluable habitat for a variety of plants, fish and waterfowl [2,3]. One example is their function as initial reservoirs for water during flooding, limiting damage to nearby locations [2,4]. In addition, their ecological capacity to act as a carbon sink potentially makes them a key player in combatting global warming and climate change [2]. They also filter and break down anthropogenic waste and pollutants such as metals, fertilizers, antibiotics and sewage [2,4,5]. In fact, these many beneficial contributions have led to the development of constructed wetlands as wastewater treatment systems, important biodiversity hot zones, educational sites and recreational areas [2].
In Manitoba, Canada, there are many wetlands, comprising approximately 43% of the total land area, attributable to its generally flat topography [1]. These include swamps, bogs, fens, marshes and prairie potholes. Marshes do not form peat, have large seasonal fluctuations, are affected by ground and surface waters, usually do not have trees or shrubs and tend to be found near shallow open waters [1]. They are the minority, comprising about 2.5% of the total provincial terrestrial land [1]. Prairie potholes, also known as sloughs, are shallow ponds containing marsh-like features, abundant in Southern Manitoba [6,7].
Growing interest in understanding these ecosystems has coincided with greater research into the microbes occupying these habitats. Previously, microorganisms were found to contribute significantly to primary productivity in biofilms [8] and shown to impact carbon uptake and accumulation [9], both critical activities which researchers have primarily attributed to plants [2]. In some studies of wetland communities, eDNA sequencing of the V4 region of the 16S rRNA gene found Proteobacteria comprised a major group [10,11]. Analysis of their environmental contributions has mainly centered around the capability of some to participate in the biogeochemical cycling of sulfur, nitrogen and phosphorus [10,12]. However, there are also Proteobacteria that participate in carbon cycling and photosynthesis, undoubtedly playing a part in primary organic productivity and sequestration in marshes. This includes two groups which primarily reside in the α- and β-Proteobacteria: purple non-sulfur bacteria (PNSB) and aerobic anoxygenic phototrophs (AAP). They utilize light energy through a cyclic pathway, where it is converted into chemical energy (ATP) through photophosphorylation with the involvement of the pigment bacteriochlorophyll a (Bchl a). The process of anoxygenic photosynthesis is nearly identical in both groups, with a few exceptions: AAP perform it only aerobically and PNSB only anaerobically; AAP use it as a supplemental source of energy (to cellular respiration), whereas PNSB have the metabolic capability to grow exclusively by photosynthesis; furthermore, AAP are unable to fix carbon, while PNSB can [13]. AAP and PNSB inhabit a variety of environments [13][14][15]; however, there are limited studies on their presence in wetlands [16][17][18]. Additionally, anoxygenic phototrophic bacteria's (AnPB) contribution was not considered in previous work, as Bchl a measurements were not undertaken [9]. Therefore, it is important to fill these gaps in knowledge to elucidate AnPB's impact on the critical roles that marshes fulfill, especially in terms of carbon sequestration and bioremediation.
As marsh waters are rich in organics and are well aerated due to enhanced algal and cyanobacterial oxygenic photosynthetic productivity, it was expected that AAP would be present in abundance and that the metabolically versatile PNSB may be as well. To gain insight into the composition of the culturable aerobic AnPB in wetlands, three Manitoban locations were sampled. These included a slough in King's Park, the constructed marshes at FortWhyte Alive, both within Winnipeg, and Oak Hammock Marsh in South-Central Manitoba. These are the initial results of a long-term study on marsh water microbiology, which is the first to focus on AnPB.
Sample Collection and Strain Cultivation
Water was obtained on 10 July 2023 from 8 different sites located in constructed Manitoban marshes: King's Park, FortWhyte Alive in Winnipeg and Oak Hammock Marsh near Stonewall. The pH was taken using a broad-range pH (2.0-10.0) paper strip. Once collected, the specimens were immediately placed on ice in the dark and kept there until the return to the laboratory. Next, 10-fold serial dilutions to 10⁻⁸ were prepared for each sample with the following solution (g/L): MgCl₂, 0.5; KH₂PO₄, 0.3; NH₄Cl, 0.3 and CaCl₂, 0.1, adjusted to a pH of 7.0 after autoclaving [19]. All the samples (10⁰-10⁻⁸) were plated onto three different media: a rich organic medium (RO), an oligotrophic medium (OM) and a potato broth medium (PM), as previously described [20]. Plates were incubated at 28 °C in the dark for two weeks and then grown at room temperature for two additional weeks. Throughout this period, they were monitored for colored colonies of interest, which were streaked to obtain pure cultures for subsequent analysis. Once a strain was pure, it was cryopreserved at −75 °C in OM with 30% glycerol.
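The dilution series above supports the standard back-calculation from plate counts to cell density; the sketch below uses hypothetical counts and volumes, not measurements from this study:

```python
# Sketch of the back-calculation from a countable plate in a ten-fold
# dilution series to cells per mL of original sample. The plate count
# and plated volume below are hypothetical.

def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml):
    """CFU/mL = colonies / (dilution factor * volume plated)."""
    dilution_factor = 10.0 ** dilution_exponent  # e.g. -6 for the 10^-6 tube
    return colonies / (dilution_factor * plated_volume_ml)

# 42 colonies on a plate spread with 0.1 mL of the 10^-6 dilution:
density = cfu_per_ml(42, -6, 0.1)  # about 4.2e8 CFU per mL of marsh water
print(density)
```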
Identification of AnPB
The colonies of interest were restreaked onto their respective isolation media. Once they were confirmed to be pure, whole-cell absorption spectra in the range of 300-1100 nm were recorded for the detection of carotenoids, Bchl a and light-harvesting (LH) complexes by identifying the corresponding peak(s). The plate-grown cells were resuspended in 0.3 mL of 20 mM Tris-HCl buffer (pH 7.8) and 0.7 mL of glycerol to minimize light scattering [21]. Subsequent testing (described below) of phototrophic anaerobic and photoautotrophic aerobic growth, alongside 16S rRNA gene sequencing, confirmed the identities of the AnPB isolates. A total of 14 strains representing the phenotypic and phylogenetic diversity were subjected to further analysis.
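As an illustration of the peak-based identification described above, the following sketch flags a Bchl a light-harvesting peak in the 840-890 nm window of a synthetic spectrum; it is a simplified stand-in, not the spectrophotometer software used in the study:

```python
# Simplified illustration of flagging a Bchl a light-harvesting peak
# (~850-880 nm) in a whole-cell absorption spectrum. The spectrum here
# is synthetic; real identification uses the recorded 300-1100 nm scan.

def find_local_maxima(wavelengths, absorbance):
    """Indices where absorbance is strictly higher than both neighbours."""
    return [i for i in range(1, len(absorbance) - 1)
            if absorbance[i] > absorbance[i - 1]
            and absorbance[i] > absorbance[i + 1]]

def has_bchl_a_peak(wavelengths, absorbance, window=(840, 890)):
    """True if any local maximum falls in the Bchl a LH window."""
    return any(window[0] <= wavelengths[i] <= window[1]
               for i in find_local_maxima(wavelengths, absorbance))

# Synthetic spectrum: carotenoid peak near 480 nm, LHI peak near 870 nm.
wl = list(range(300, 1101, 10))
ab = [0.1 + 0.5 * (abs(w - 480) < 5) + 0.3 * (abs(w - 870) < 5) for w in wl]
print(has_bchl_a_peak(wl, ab))  # True -> consistent with an AnPB isolate
```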
Morphology and Physiology of Isolates
The shape and size of the cells were assessed using phase contrast microscopy (Zeiss Axioskop 2) after 4 days of growth at 28 °C in the dark on the isolation medium. Motility was evaluated in a similar manner, except after 2 days with a hanging drop slide. A Gram stain [22] as well as a KOH test [23] was performed for all the strains.
The physiological experiments were conducted at 28 °C, at a pH of 7.0, in RO for a week in the dark, aerobically and in a shaker incubator, unless stated otherwise. The temperature growth range and optimum were determined at the following (approximate values, °C): 7, 12, 16, 20, 25, 28, 32, 37 and 41. For the pH, a range of 4.0 to 11.0 at 1.0 increments was studied. The utilization of individual carbon sources was evaluated in RO initially prepared without organics and then, before inoculation, supplemented with 0.5% of the following: Na-acetate, Na-butyrate, Na-citrate, ethanol, Na-formate, fructose, glucose, Na-glutamate, lactose, malic acid, Na-pyruvate and Na-succinate. Photoheterotrophic anaerobic growth was assessed in purple non-sulfur medium (PNSM) [24]. The liquid cultures in PNSM, as well as a modified version which substituted L-cysteine and L-methionine for 1.0 mM of Na₂S, were incubated in filled screw-capped tubes under constant light. Aerobic photoautotrophy was evaluated using liquid basal organic-free RO supplemented with 1.5 g/L of NaHCO₃ as a carbon source and 0.5 g/L of Na₂S₂O₃ as an electron donor [25] and grown under constant illumination provided by an incandescent light bulb. To account for the organics contained within the inoculum liquid culture, 2 additional transfers of the cells into fresh basal RO were conducted to ensure the growth was truly photoautotrophic. The ability to ferment glucose, fructose and sucrose was evaluated as described [19]. Oxidase, catalase, aerobic nitrate reduction and the hydrolysis of Tween 20, 40, 60 and 80, starch, gelatin and agar were determined [19,26]. Antibiotic susceptibility was assessed with diffusion disks of the following (µg): ampicillin (10), chloramphenicol (30), erythromycin (15), imipenem (10), kanamycin (30), penicillin G (10 IU), polymyxin B (300 IU), streptomycin (10) and tetracycline (30). The strains were deemed resistant to an antibiotic if no zone of clearing was observed.
A phylogenetic tree based on the 16S rRNA gene was constructed using MEGA v11.0.13 software [29], with a neighbor-joining alignment and 1000 bootstrap replicates. Its evolutionary history was inferred using the Maximum Likelihood method. The model chosen for the tree, the Tamura 3-parameter model [30], was selected using the MEGA 'find best DNA model' tool. A total of 62 nucleotide sequences and 1582 positions were present in the final dataset.
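The bootstrap replicates underlying the tree's support values are generated by resampling alignment columns with replacement. The following minimal sketch (toy sequences only; MEGA performs this internally for its 1000 replicates) illustrates the procedure:

```python
# Minimal sketch of how one bootstrap pseudo-alignment is built for tree
# support values: alignment columns are resampled with replacement.
# The sequences are toy data labeled with strain names from this study.
import random

def bootstrap_replicate(alignment, rng):
    """Resample alignment columns with replacement; returns a new alignment."""
    length = len(next(iter(alignment.values())))
    cols = [rng.randrange(length) for _ in range(length)]
    return {name: "".join(seq[c] for c in cols)
            for name, seq in alignment.items()}

aln = {"KP4": "ACGTACGTAC", "FW5": "ACGTTCGTAA", "OHM24": "ACCTACGGAC"}
rep = bootstrap_replicate(aln, random.Random(42))
# Each replicate keeps the alignment length and taxon set but reshuffles
# which columns contribute; a tree is then rebuilt from each replicate.
print(rep)
```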
Site Description and the Isolation and Detection of AnPB
The sampling sites are shown in Figure 1. King's Park, Site 1, is a recreational area in the South with one slough, located right beside the Red River. As marshes are typically affected by ground and surface waters [1], it is very likely that the activity along this portion of the river also impacts the slough. Site 1 was at the edge of a pond with plentiful aquatic vegetation. The water was clear, and a sample was obtained just below the surface to avoid collecting plant matter. Sites 2-4 were at FortWhyte Alive. This is a protected environment comprising forests, prairie grassland, constructed lakes and marshes, serving as a recreational and educational area [31]. In recent years, the waters have had rising phosphorus and nitrogen levels, causing eutrophication [32]. Site 2 was abundant in greenery and had floating algal-cyanobacterial mats. Site 3 was copious in macrophytes, nearly covering the entire surface. The samples were taken just below a large algal bloom and hydrophytes, if present, at each respective location. At Site 4, a thin olive-green bacterial mat from the top layer of the sediment was collected. It had rocks and more turbid water present. Sites 5-7 were at Oak Hammock Marsh in South-Central Manitoba. Formerly part of an extensive wetland called St. Andrew's Bog, this 36 km² area constitutes the reconstructed remnants left after considerable drainage of the fertile land for agriculture [33]. Located directly beside a walkway, Site 5 had abundant reeds and floating plants at the surface. The water was slightly brown and turbid. Site 6 had an abundance of dense grass-like macrophytes. A brownish, purple sulfur bacterial mat layer on the subaqueous soil was found at Site 7. This was identified based on the smell of sulfide coming from the sample. As sulfide reacts with oxygen, it was presumed this site was anaerobic. It was agitated during collection, and as such, water directly above was acquired as well. This sample came from the deepest zone below the surface of all the sites. In general, prairie marshes are shallow [8] with anaerobic sediment at the bottom [2]; therefore, they will usually have a relatively steep oxygen gradient. As such, there is potential to find AnPB which display aerotolerant pigment production and have the capacity to conduct anoxygenic photosynthesis both aerobically and anaerobically, like the transitional Charonomicrobium ambiphototrophicum EG17 [25].
Surprisingly, in the three studied habitats, there was quite poor microbial mat development, suggesting the bacterial communities within the water remained mostly suspended, attached to sediments or surrounding floating plants. The sampling took place during the afternoon on a sunny day; however, the vegetation covering the surface of some sites (1, 3 and 6) may have affected the amount of light that penetrated and was available for photosynthesis. The ambient temperature (°C) was 16.5 for King's Park, 17.6 for FortWhyte and 13.3 for Oak Hammock Marsh. The pH for each site, in order from 1 to 7, was approximately: 6.0-7.0, 9.0-10.0, 7.0, 7.0, 8.0, 9.0-10.0 and 8.0. As these were pH paper estimations, the values reflect the range of the sites and are not exact. However, most of them fall within the expected range, as marshes are known to be relatively neutral [2]. Future experimentation will require an accurate pH meter to obtain precise values.
Colored colonies were present in each medium tested from all the sites, with more appearing throughout the duration of incubation. AnPB were identified based on a Bchl a peak in the whole-cell absorption spectra [13]. In total, 102, or 43.4%, of the 235 pigmented isolates tested were AnPB (Table 1): 62.1% were on RO, followed by OM (19.6%) and PM (18.3%). The majority of the selected colonies had orange or yellow hues. From the total, 14 were selected to represent the diversity at the sites (Table 2). Four of the strains were PNSB, and all were isolated on PM. This does not necessarily mean colonies did not develop on RO and OM, but they were likely not isolated from these plates because PNSB usually do not actively synthesize photosynthetic pigments aerobically and produce pale colors due to limited carotenoids [14]. However, on PM, the PNSB colonies were colored and produced pigment-protein complexes (Figure 2). When these strains were plated on RO, the complexion was muted or non-colored (not shown), although they also displayed their photosynthetic apparatus (Figure 2). Therefore, based on appearance, such colonies on RO and OM were not chosen. The other 10 strains studied were AAP, confirmed according to their physiological activity. In general, this group constituted the majority of the AnPB obtained. AAP were present in all the samples regardless of depth, indicating the waters were well aerated. Support comes from the fact that these places have plentiful vegetation, algae and cyanobacteria, and as such, a significant amount of oxygen is produced from their oxygenic photosynthetic activity [2]. It is especially interesting in the case where a purple sulfur bacteria mat was observed (Site 7). This site should be anoxic due to the presence of sulfide (identified by scent), as it reacts with the surrounding oxygen; therefore, the majority of this community probably comprises anaerobes [14]. A possible explanation for the isolation of AAP from this site
is that they were situated in the aerobic water just above the mat. In such a case, the presence of AAP nearby is proof of a steep oxygen gradient.
The presence of AnPB in the marshes aligns with previous works that identified their residence in wetland-like environments using infrared epifluorescence microscopy [16,17,34] or sampling of PNSB from soils [18]. Nonetheless, our paper is the first describing the isolation of PNSB and AAP from constructed marshes and sloughs. Obtaining pure cultures is especially important, as it allows the roles attributed to microbes to be studied directly and may help to indicate other activities the bacterial community contributes to.
Here, it was used to accurately identify AnPB and distinguish between PNSB and AAP, a task difficult to perform through environmental sequencing or microscopy. Typically, sequencing of the pufM gene is used to indicate AAP presence in aerobic environments [35]; however, this gene is also present in PNSB, and if they are growing aerobically in the areas measured, they will also be counted. Epifluorescence microscopy [36] uses infrared lighting to distinguish AAP cells from others, but issues remain, as PNSB can also be detected using this approach. As a result, neither method precisely differentiates between the two.
Spectral Analysis
All the PNSB cultures (KP4, FW5, OHM24 and FW36) produced photosynthetic pigment-protein complexes anaerobically as well as aerobically (Figure 2). The Bchl a and carotenoid levels were higher under anoxic growth in PNSM, as expected, since light harvesting is usually conducted photoheterotrophically or photoautotrophically, where oxygen is absent [14]. Interestingly, for each isolate, the relative level of expression of LHI, LHII and carotenoids varied between the aerobic dark and anaerobic light conditions, as well as between the two media (RO and PM) in the presence of oxygen. This characteristic corresponded well with the visual difference in pigmentation in all the strains investigated, supporting the conclusion that there was varied expression of carotenoids (400-600 nm) and Bchl a (LH peak(s) at 850-880 nm). In general, on PM, PNSB had greater primary accessory pigment concentrations, as reported earlier [13], and greater LHII in comparison to RO (Figure 2). Both aerobic and anaerobic photosynthetic complex expression by the same species has been seen before, although it is not common [13]. The abovementioned strain EG17, a γ-Proteobacterium isolated from a hypersaline spring, East German Creek, Manitoba, is to date the sole known strain which synthesizes Bchl a regardless of the presence of oxygen [25]. However, there are a few AAP closely related to the PNSB Rhodobacter [37,38], suggesting the possibility that such expression of pigments may occur in some as yet taxonomically undefined strains. These absorption spectra do not definitively show that the PNSB used light energy aerobically, but this would be worthwhile to investigate in KP4 and FW5 (related to species currently or formerly placed in the Rhodobacter genus [39]). FW36 interestingly showed significantly more LHI in oxic conditions than the others investigated. A close relative, Rubrivivax gelatinosus, has previously been shown to produce photosynthetic pigments in semi-aerobic conditions [40], and the observations in FW36 could potentially be explained by the center of colonies having less oxygen, allowing for greater expression. OHM24 synthesizing its pigments aerobically and anaerobically was the least surprising finding, considering its relation to Rhodopseudomonas sulfidophila, which has also displayed this characteristic [41]. As these strains, which reside in different subphyla of Proteobacteria, showed aerobic photosynthetic pigment-protein complex expression, it would be worthwhile to investigate whether that is the case for some other PNSB.
The AAP isolates in general displayed the typical spectral features also found in the most closely related genera (Figure 3). They all had an abundance of carotenoids relative to low Bchl a, which is a common attribute of the group. The majority of accessory pigments have been found to protect cells from photooxidation, and only a few support light absorbance when directly incorporated into LH complexes [13]. This was similar to the PNSB strains when grown aerobically but was vastly different from those under anoxic growth, where carotenoids and Bchl a were expressed in relatively equal proportions (Figure 2). FW153 displayed the most red-shifted Bchl a peak (872 nm) of the strains but was still within the known range [42,43]. FW159 and FW176 had the most defined LHI peaks among the isolates. FW250 and OHM48 were the only AAP with LHII produced. This has been seen previously in Polymorphobacter [44] and not in Erythrobacter, their respective closely related genera.
With the wide phylogenetic diversity of anoxygenic phototrophs isolated comprising part of the microbial community in marshes, they probably occupy an important ecological niche, although this has not yet been well investigated. Unfortunately, no studies have shown AnPB's contribution to overall photosynthesis in wetlands, as they tend to focus on Cyanobacteria and chlorophyll a measurement [9]. Some approaches utilize the pufM gene, which codes for a part of the reaction center in both groups of AnPB, as a genetic indicator of AAP in aerobic environments [35]. This is an issue, as it may also capture PNSB in the oxygenated portion of the water. Therefore, it is likely such numbers are overestimated. Another strategy of AAP detection and enumeration using epifluorescence microscopy and infrared lighting [36] may also be fallible. We identified a set of PNSB occupying a wide breadth of phylogenetic diversity, including different subphyla, that express their photosynthetic pigments aerobically. There is a high probability, especially in nutrient-rich locations such as marshes and high-peat-content wetlands, that some PNSB will express Bchl a aerobically, leading them to also be detected with infrared light. Again, this would misrepresent the pervasiveness of AAP. A method that can effectively differentiate between the two with the utmost accuracy has not yet been designed, and as such, these possibilities should, at the very least, be acknowledged. Studies on the Bchl a prevalence and photosynthetic activity of AnPB communities would be insightful for understanding their contribution to the overall primary productivity in wetlands.
Phenotypic Features of the Strains
All the isolates had a gram-negative cell wall. They grew at and near a neutral pH (Table 3), as expected, since most of the sites had a pH near 7.0-8.0. No strain could survive at a pH of 5.0 or lower, and OHM14 was the only one to not grow at a pH of 6.0. The cultures from Sites 3 and 7 (FW199, OHM172, FW153, OHM176), which were alkaline (pH 9.0 to 10.0), grew at a pH of 9.0, except for OHM176. This is likely because paper tests are not very accurate, so the actual pH could differ from the paper-based estimate, as marshes are typically neutral [2], making Sites 3 and 7 unusual. The optimal growth for the group was at either a pH of 6.0 or 7.0. Interestingly, FW199, FW36 and FW5 were able to grow in significantly alkaline conditions. Aside from these instances, their growth pH range reflects the sites and what was expected from the isolates' phylotype.
In general, the AnPB had broad temperature growth ranges, although the best for each one was at 32 or 37 °C (Table 3). FW5 and OHM16 grew at all the temperatures tested. The thermotolerance of all the representatives may be attributed to the climate of Manitoba experiencing some of the coldest and hottest temperatures in Canada annually. This is credited to its flat topography and lack of mountains, which usually act as temperature stabilizers.
As expected of the PNSB, FW5, KP4, FW36 and OHM24 grew anaerobically as photoheterotrophs. The AAP could not. All the strains were incapable of aerobic photoautotrophic growth. These two factors, in conjunction with the production of Bchl a (Figure 3), brought us to the conclusion that the other 10 AnPB were indeed AAP.
Most of the isolates were not motile after 2 days of growth (Table 3). The strains' morphology (Figure 4) was coccoid (FW5, OHM14), ovoid (FW250, KP4) or rod-shaped (FW199, OHM172, KP164, OHM176, FW153, OHM48, OHM16, FW159, OHM24, FW36), with FW153 having tapered ends (Figure 4F). FW5 was coccoidal, although its close relative Cereibacter azotoformans was characterized as ranging from ovoid to rod-shaped. KP164 had light-refractile circular globules inside it, varying from 1 to 5 per cell (Figure 4E). This could possibly be an accumulation of polyhydroxyalkanoate, which has been previously shown in some AAP depending on the conditions [45].

Most of the strains could use at least one carbon source (Table 4), except for OHM14 and FW250. Both were able to grow on RO and OM, suggesting there are essential growth components in these complex media. They were all incapable of growing with ethanol or Na-formate. Phylogenetically close groups showed similar trends. The Erythrobacteraceae members (OHM16, OHM48, FW159, FW172) all used Na-butyrate and Na-glutamate but not Na-citrate, fructose, lactose or malic acid. FW199, KP164 and FW153 of Sphingomonadaceae did not grow with Na-citrate, malic acid and Na-succinate as the sole carbon sources but could use glucose. Paracoccaceae KP4 and FW5 were the most versatile, utilizing Na-acetate, Na-butyrate, fructose, glucose, Na-glutamate, Na-pyruvate and Na-succinate but not lactose or malic acid. Interestingly, this also applied to FW36 but not the remaining PNSB, OHM24. FW199 was the only one able to assimilate lactose, and OHM24 was the sole strain that metabolized malic acid. Fermentation did not occur with the sugars tested. KP4 produced an acid when grown with fructose. This is typical, as only a few AnPB can ferment, and some make acids as a result of metabolizing sugars. As for enzymes, all the strains were oxidase-positive. FW199, OHM172, KP164, OHM176, OHM48 and OHM16 were catalase-positive; FW199, KP164, OHM48, OHM16, FW159 and FW36 could break down starch; KP164, FW250, OHM176, OHM16, FW159, OHM14, FW36 and KP4 hydrolyzed gelatin; and FW153, FW36 and FW5 reduced nitrate into nitrite aerobically. None of them hydrolyzed agar. Strong lipolytic activity was found in the Erythrobacteraceae members, FW199, KP164, OHM172 and FW36. The rich variety of organic carbon types used or broken down by the AnPB (amino acids, mono-, di- and polysaccharides, organic acids, lipids) indicates their great contribution to carbon cycling and their important role in sequestering accumulated organics in marshes. Further physiological examination of pure cultures could lead to the discovery of additional influences AnPB have within wetland communities.
The AnPB had varying degrees of antibiotic resistance (Table 5); however, some general trends existed among the group. All the AnPB were susceptible to imipenem and resistant to nalidixic acid. Most were susceptible to kanamycin as well, with the exception of FW250. The PNSB isolates FW5, KP4 and FW36 had sensitivity to all the antibiotics tested, except nalidixic acid. OHM24 and FW250 resisted the highest number of antibiotics. In marshes, this is of interest, as they are recreational areas with increased human activity and therefore have greater exposure to anthropogenic waste, potentially including antibiotics. Some wetlands have also been constructed as wastewater treatment locations [2] and have been shown to break down antibiotics, primarily through the function of microbes, like some Proteobacteria, which use them as carbon sources [46][47][48]. Sequencing has indicated a decrease in resistance genes [46,49]. Additionally, pathogenic bacteria have been known to be dismantled through processes such as antibiotic secretion from macrophytes [50]. In contrast, it was revealed that resistance can accumulate in such
16S rRNA Gene-Based Phylogenetics
The results from 16S rRNA partial gene sequencing (1300-1400 bp) show most of the strains are related to AAP or PNSB (Table 2). All the isolates belong to α-Proteobacteria, except for a β-proteobacterium, FW36. This matches the phylogenetic placement of most known AAP and PNSB [14,51].
The majority of the isolates (KP164, FW250, OHM176, OHM48, OHM16, FW159, OHM24, FW36, FW5 and KP4) likely represent new strains within most of the related species because of their very high 16S rRNA gene similarity. FW199, OHM172, FW153, and OHM14 may potentially be new species; however, they have yet to be taxonomically described as such, and DNA-DNA hybridization of the complete genome would be required to support such a conclusion, as well as finding other phenotypic distinguishing features. Four of the isolates belong to the Erythrobacteraceae family. OHM16, OHM48 and FW159 are all members of Erythrobacter, and OHM172 is from Novosphingobium, known AAP genera [52]. KP164, FW199 and FW153 are from the family Sphingomonadaceae and the genera Blastomonas, Sphingomonas and Rhizorhabdus, respectively. Novosphingobium and Sphingomonas representatives have been shown to degrade aromatic compounds and other xenobiotics, as well as tolerate and accumulate heavy metals [53][54][55][56]. Therefore, they may play a significant role in bioremediation and the degradation of anthropogenic waste in marshes. FW153's most closely related genus has no AAP, and none have been shown to produce Bchl a.
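Strain-versus-species calls like those above rest on pairwise 16S rRNA gene identity. As an illustration only (the sequences and the ~98.7% species cutoff below are common-practice assumptions, not values from this study), percent identity over an alignment can be computed as:

```python
# Illustrative sketch: pairwise percent identity between two aligned 16S rRNA
# gene fragments. Sequences here are hypothetical; gaps count as mismatches.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

# Hypothetical aligned fragments (25 bp, one mismatch)
isolate = "AGCTGGCGGCGTGCCTAATACATGC"
reference = "AGCTGGCGGCGTGCCTAACACATGC"

identity = percent_identity(isolate, reference)
print(f"identity = {identity:.1f}%")  # 24/25 matches = 96.0%
```

In practice, full-length alignments and thresholds from the taxonomic literature would be used; values below the species cutoff motivate the DNA-DNA hybridization follow-up mentioned above.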
Prior reports indicated Rhizorhabdus did not synthesize carotenoids [57]; however, the newest member described, Rhizorhabdus phycosphaerae, FW153's closest relative, proved otherwise [58]. Identification of the photosynthetic gene cluster in Rhizorhabdus spp. may aid in evaluating whether this genus has other AAP members or whether its features are unique to FW153. All the other strains were related to published AnPB. FW250 is the sole representative of Polymorphobacter, a tentative genus in Sphingosinicellaceae [59]. FW5 and KP4, in Paracoccaceae, are closely related to Cereibacter azotoformans (previously known as Rhodobacter azotoformans [39]) and Rhodobacter capsulatus, respectively. Both species have denitrification capabilities that were investigated for wastewater treatment [60,61]. OHM176 is associated with the Brevundimonas genus, found in Caulobacteraceae. They are resistant to high levels of heavy metals [19,62,63], possibly contributing to filtering and treating metal waste known to occur in wetlands [4]. Nitrobacteraceae is represented by the Rhodopseudomonas relative, OHM24. The most distant (based on 16S rRNA gene phylogeny, Figure 5) from all other α-Proteobacteria is OHM14, connected to Roseomonas (synonym, Falsiroseomonas) in Acetobacteraceae. FW36 was the sole strain found from the β-Proteobacteria subphylum. It is closely related to Rubrivivax gelatinosus of the Comamonadaceae family. Alongside contributing to photosynthetic productivity and carbon cycling, AnPB also participate in other activities, such as degrading anthropogenic pollutants and heavy metal oxides, as mentioned in the specific examples. These have been broadly advertised as important roles AAP and PNSB play in bioremediation [13,64].
Conclusions
The cultivated isolates revealed a diverse and readily available community of AnPB, indicating they are important microbial contributors to life in marsh ecosystems. This is likely because such places are highly enriched in organics, have a neutral pH and are aerobic due to oxygenic photosynthetic activity. Furthermore, the waters there are relatively shallow and have limited peat accumulation, making sunlight accessible in excess. Although these conditions better support the growth of AAP than PNSB, the metabolic flexibility of the latter has also made them possible to culture. This is especially important to consider, as many AAP studies do not factor in PNSB contributing to their relative abundance detected using infrared epifluorescence microscopy and pufM sequencing, leading to misjudgment of their actual abundance. However, they are indeed present, can synthesize photosynthetic pigments aerobically and therefore must be accounted for as influencing such measurements. This is one example of how cultivation is important for the precise analysis and comprehension of microbial contributions, as proof of activity can be directly assessed, and it can appropriately complement sequencing and microscopy techniques. There is a need for a better understanding of AnPB's participation in the total primary productivity in these habitats to accurately evaluate their ecological role. Although the AnPB here represent a wide breadth of the potential community in Manitoban marshes, there are many more, which simply remain unculturable. These may include AnPB that express photosynthetic apparatus regardless of oxygen's availability. Since there is a steep oxygen gradient in marshes, from well-aerated shallow waters to anaerobic bottom sediments, such flexibility in using anoxygenic PS would be advantageous and could exist here. Nonetheless, the phylogenetic diversity of the isolates and their known physiology provide some context to the functions they possibly perform. While some insights into the AnPB community have been gained, more work is necessary to better elucidate their contribution to the essential activities wetlands perform for the biosphere.
Figure 2.
Figure 2. Whole-cell absorption spectra of PNSB grown at various conditions. OHM24 (A), KP4 (B), FW5 (C), FW36 (D). Spectra were taken after 4 days of growth at 28 °C in the following conditions: anaerobically in illuminated PNSM (purple) and aerobically in the dark on PM (dark blue) and RO (green). Bchl a peaks are indicated.
Figure 5.
Figure 5. Phylogenetic tree of representative strains from Manitoba marshes and most related based on 16S rRNA gene sequences. Version with highest log likelihood (−11,688.47) is presented. Branch lengths measured as the number of substitutions per site. The percentages of trees clustered together in the associated taxa are shown next to the branches. Accession numbers for sequences used are included in parentheses.
Table 1.
Total number of isolates from Manitoba marshes.
Table 2.
Representative AnPB used in this study.
Table 4.
Organic carbon sources utilized by representative strains 1.
Dot1l Aggravates Keratitis Induced by Herpes Simplex Virus Type 1 in Mice via p38 MAPK-Mediated Oxidative Stress
Background: Disruptor of telomeric silencing 1-like (Dot1l), a well-known methyltransferase, plays a vital role in many biological processes. However, its role in herpes simplex virus type 1- (HSV-1-) induced keratitis remains unclear. Methods: In vitro and in vivo models were used to investigate the role of Dot1l in HSV-1-induced keratitis. The corneas of C57BL/6 mice were infected with HSV-1 for different numbers of days, with or without a Dot1l inhibitor, to demonstrate the regulation of Dot1l in herpes simplex keratitis (HSK). Human corneal epithelial (HCE) cells were cultured and infected with HSV-1 to identify the molecular mechanisms involved. Results: We found that Dot1l was positively related to HSK. Inhibition of Dot1l with EPZ004777 (EPZ) alleviated corneal injury, including oxidative stress and inflammation, in vivo. Similarly, inhibition of Dot1l with either EPZ or small interfering RNA (siRNA) suppressed HSV-1-induced oxidative stress and inflammation in HCE cells. Moreover, the expression of p38 MAPK was elevated after HSV-1 infection in HCE cells, and inhibition of Dot1l reduced this HSV-1-induced increase in p38 MAPK both in vivo and in vitro. Conclusion: Our results demonstrated that inhibition of Dot1l alleviated corneal oxidative stress and inflammation by inhibiting ROS production through the p38 MAPK pathway in HSK. These findings indicate that Dot1l might be a valuable therapeutic target for HSK.
Introduction
Herpes simplex virus type 1 (HSV-1) is a highly prevalent virus [1] in the population. In humans, HSV-1 infection leads to encephalitis, paronychia, gingivitis, and blinding keratitis [2]. Eye disease caused by HSV-1 infection usually presents as herpes simplex keratitis (HSK), which accounts for 50-80% of ocular herpes [3]. HSK threatens ocular health; without adequate treatment, it may lead to progressive corneal opacity and poor eyesight [4]. Previous studies have shown that HSK is the leading cause of infectious blindness in the developed world [2]. However, effective treatment for HSK is still limited.
The overproduction of reactive oxygen species (ROS) leads to oxidative stress, which is recognized as one of the most important factors in the pathogenesis of corneal diseases [5]. Under physiological conditions, ROS are generated mainly through the mitochondrial electron transport chain and play a key role in activating cellular factors or survival signaling. However, the overproduction of ROS may cause oxidative damage to the cell, including oxidative damage to DNA, intracellular oxidative modification of proteins, and lipid peroxidation of the membrane [6,7]. Although oxidative stress participates widely in ocular diseases, its role in the progression of HSK remains unknown. In this study, we focused on the effect of oxidative stress in the pathogenesis of HSK and its possible mechanism.
Disruptor of telomeric silencing 1-like (Dot1l) specifically catalyzes the methylation of histone H3 on Lys79 (H3K79) at target genes and has been found to be related to oxidative stress. In addition, Dot1l has been reported to be involved in many biological processes, such as the DNA damage response, cell cycle progression, somatic reprogramming, transcriptional regulation, and embryonic development [8,9]. As a conserved protein, Dot1l is widely expressed across species [10]. However, the role of Dot1l in HSK remains unclear. In the present study, we examined the role of Dot1l in HSK. We also investigated the potential mechanisms involved in the Dot1l-mediated generation of ROS.
Materials and Methods
2.1. Animals. All C57BL/6 mice (male; weight, 60-80 g; age, 6-8 weeks) were provided by the Center of Experimental Animals of the Medical College, Wuhan University. This project was approved by the committee on experimental animals of Wuhan University, and the procedures were carried out in accordance with routine animal-care guidelines. All procedures complied with the Guidelines for the Care and Use of Laboratory Animals. Before surgical procedures, mice were anesthetized intraperitoneally with sodium pentobarbital (50 mg/kg) and then placed on a homeothermic table to maintain core body temperature at 37°C.
Virus. The HSV-1 KOS strain had a titer of 2 × 10^7 pfu/ml before use, based on a previous study [11,12]. The virus was propagated in Vero cells. Mice were anesthetized intraperitoneally, and the corneal epithelium was scratched with the back of the blade of a No. 5 surgical blade. Subsequently, 5 μl of a solution containing HSV-1 (KOS strain; 10^5 plaque-forming units (pfu)) was spotted and retained for 10 s on the cornea, and the eyelids were closed and massaged for 30 s to allow the virus suspension to contact the cornea sufficiently. After the procedure, 0.5% gentamicin eye drops were used to avoid bacterial infection.
Experimental Design and Groups.
Mice were infected with HSV-1 to establish a corneal HSV-1 infection model. Then, they were sacrificed prior to corneal infection or at 1, 3, and 7 days postinfection (dpi). The inoculated eyes (five mice in each group) were enucleated and immediately frozen in liquid nitrogen or 4% formalin fixative for the following experiments.
All mice were divided into different groups (n = 5): a control group, HSV-1 infection groups at different time points (1, 3, and 7 days), Dot1l inhibitor groups (10 mg/kg, 50 mg/kg), and a dimethyl sulfoxide (DMSO) group. In the HSV-1 group, only the right eyes were scratched and then infected with HSV-1 for different numbers of days. In the Dot1l inhibitor group, after the right eyes were scratched and infected, the mice were administered EPZ004777 dissolved in DMSO via subconjunctival injection once daily. In the DMSO group, after infection with HSV-1, the mice were administered an equal amount of DMSO as the vehicle control. The concentration of DMSO in the Dot1l inhibitor and DMSO groups was 0.1%. Corneal opacity was scored based on the opaque area of the cornea [13]: 0, the corneal stroma was clear and transparent; 1, mild corneal haze; 2, moderate corneal opacity with the iris visible; 3, severe corneal opacity in which the pupil position was indistinct; and 4, severe corneal opacity with the intraocular structure invisible.
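The 0-4 grading scale above can be captured in a small lookup table, for example to average scores across a group of eyes. This is a minimal illustrative sketch; the group data below are hypothetical, not measurements from the study.

```python
# Minimal sketch of the 0-4 corneal opacity grading described above,
# with a helper to average scores across a group of eyes.

OPACITY_GRADES = {
    0: "clear and transparent stroma",
    1: "mild corneal haze",
    2: "moderate opacity, iris visible",
    3: "severe opacity, pupil position indistinct",
    4: "severe opacity, intraocular structure invisible",
}

def mean_opacity(scores):
    """Mean opacity score for a group; scores must be integers 0-4."""
    if any(s not in OPACITY_GRADES for s in scores):
        raise ValueError("scores must be integers 0-4")
    return sum(scores) / len(scores)

group_7dpi = [3, 4, 3, 2, 3]  # hypothetical scores for five eyes at 7 dpi
print(mean_opacity(group_7dpi))  # 3.0
```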
Small Interfering RNA (siRNA) Transfection. HCE cells were transfected for 48 h with either small interfering RNA against the target gene or nontargeting siRNAs (Santa Cruz, CA, USA) at a concentration of 100 nM using Lipofectamine 3000 reagent; the nontargeting siRNAs served as a negative control (NC). The effects of siRNA were assessed using western blot or RT-PCR.
2.6. Histological Examinations. After the tissues were fixed in 4% paraformaldehyde, they were embedded in paraffin and incised with an average thickness of 4 μm. Then, the sections were deparaffinized, hydrated, and stained with hematoxylin and eosin (H&E) in order to assess histopathological corneal injury. Morphological assessments were observed by two experienced pathologists who were unaware of the experimental design.
Immunofluorescence Staining.
After the tissues were fixed in 4% paraformaldehyde, they were embedded in paraffin and incised with an average thickness of 4 μm. For immunofluorescence staining, the sections were incubated with diluted CD31 primary antibody (BD, New Jersey, USA) overnight at 4°C. After washing with PBS, fluorescence-conjugated secondary antibody was added and incubated at 37°C for 2 h. Then, DAPI (Invitrogen, United Kingdom) was added for 5 min to visualize the nuclei. Finally, the signal was observed under the fluorescence microscope (Olympus, Japan).
2.8. RT-PCR. RNAiso Plus (TaKaRa Biotech, Dalian, China) was used to extract total RNA from frozen corneal tissues according to the instructions provided by the manufacturer. Subsequently, the PrimeScript™ RT Reagent Kit (TaKaRa Biotech) was used for reverse transcription into cDNA. In all PCR experiments, the expression of GAPDH was used as the internal reference. The qRT-PCR analysis was performed using the ABI ViiA7DX System (Foster City, CA, USA). The qRT-PCR primers for the specific target genes (listed below) were designed and synthesized by TaKaRa Biotech. Routine qRT-PCR for the target genes and GAPDH was performed as follows: 94°C for 3 min, followed by 30 cycles (25 cycles for GAPDH) at 94°C for 30 s, 55°C for 30 s, and 72°C for 1 min.

Figure 1: Dot1l was elevated in HSK progression. The corneal images were taken using a slit lamp (a) and calculated for opacity score (b) after HSV-1 infection for 1, 3, and 7 days. The levels of Dot1l protein (c) were detected after HSV-1 infection for 1, 3, and 7 days, and quantification of Dot1l expression (d) was determined as fold change relative to the 0 d group. SOD activity (e), MDA content (f), and H2O2 production (g) were also detected after HSV-1 infection for 1, 3, and 7 days. Data were expressed as means ± SD (n = 5). * P < 0.05 versus 0 d group. Experiments were repeated 3 times.

2.9. Western Blotting. Proteins from corneal tissue were extracted and quantified using the bicinchoninic acid method. Then, equal amounts of protein were separated on 10% sodium dodecyl sulfate polyacrylamide gels and transferred to a nitrocellulose membrane. Primary antibodies against Dot1l (ab64077), catalase (ab209211), SOD1 (ab51254), SOD2 (ab68155), p38 (ab31828), p-p38 (ab178867), and β-actin (ab8226) were purchased from Abcam (dilution 1:1000). β-Actin was used as a loading control to ensure equal loading.
Subsequently, the membranes were washed twice with PBS and then incubated with goat anti-rabbit or goat anti-mouse horseradish peroxidase-conjugated immunoglobulin G secondary antibody.

Figure 2: Dot1l inhibition prevented keratitis induced by HSV-1 infection. The corneal images were taken using a slit lamp (a) and calculated for opacity score (b) at HSV-1 7 dpi in mice, with or without Dot1l inhibitor EPZ004777 (10 and 50 mg/kg) treatment. H&E staining (×400) and immunofluorescence staining (×400) for CD31 were also performed (a). The mRNA levels of proinflammatory factors (c-g), including IL-1β, MMP-1, MMP-2, IL-6, and MMP-9, were detected at HSV-1 7 dpi in mice, with or without Dot1l inhibitor EPZ004777 (10 and 50 mg/kg) treatment. Data were expressed as means ± SD (n = 5). * P < 0.05 versus control group; # P < 0.05 versus DMSO group. Experiments were repeated 3 times.

Oxidative Medicine and Cellular Longevity

2.13. Statistical Analysis. Data are presented as mean ± standard error of the mean (SEM). The means of the different groups were compared using one-way analysis of variance (ANOVA) and the Student-Newman-Keuls test. Differences were considered statistically significant when P < 0.05.
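The group comparison described in the statistical analysis is a one-way ANOVA (followed here by the Student-Newman-Keuls test). A hand-computed F statistic for small groups can be sketched as follows; the measurements below are hypothetical, not the paper's data.

```python
# Illustrative one-way ANOVA F statistic, computed by hand for small groups.
# (The post hoc Student-Newman-Keuls step is not shown.)

def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control = [1.0, 1.1, 0.9]    # hypothetical relative expression values
infected = [2.0, 2.2, 1.8]
print(round(one_way_anova_F([control, infected]), 2))  # 60.0
```

The F value is then compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the P value; in practice this is done with a statistics package rather than by hand.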
Clinical Course of Keratitis and Oxidative Stress after Corneal HSV-1 Infection. Corneal morphology images indicated that C57BL/6 mice were susceptible to HSV-1 infection and developed typical keratitis at 7 dpi, which led to the most serious corneal opacity (Figures 1(a) and 1(b)). To explore the role of Dot1l in HSK, its expression was measured at 0, 1, 3, and 7 dpi in mice. Compared with day 0, HSV-1-infected corneas displayed obviously elevated Dot1l expression at 1, 3, and 7 dpi, with the highest expression at 7 dpi (Figures 1(c) and 1(d)). We also found that oxidative stress was related to HSK: SOD activity (Figure 1(e)) continued to decrease, while MDA content (Figure 1(f)) and H2O2 production (Figure 1(g)) continued to increase as HSK progressed. Overall, these results indicated that Dot1l expression and oxidative stress were related to the progression of HSK.
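Fold-change quantification relative to the 0 d group, as in Figure 1(d), is conventionally computed with the 2^-ΔΔCt method when based on qRT-PCR with GAPDH as the internal reference. The paper does not state its exact formula, so the sketch and Ct values below are illustrative assumptions only.

```python
# Illustrative 2^-ddCt relative expression, normalized to a reference
# gene (e.g. GAPDH) and to a control condition (e.g. the 0 d group).

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression via 2^-ddCt."""
    d_ct_sample = ct_target - ct_ref           # dCt in the treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt in the control sample
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: Dot1l vs. GAPDH at 7 dpi vs. 0 d
fc = fold_change(ct_target=24.0, ct_ref=18.0, ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
print(fc)  # 4.0: ddCt = (24-18) - (26-18) = -2, so 2^2 = 4-fold increase
```

This makes explicit why a lower Ct (earlier amplification) corresponds to higher expression: each unit decrease in ΔΔCt doubles the computed fold change.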
Dot1l Inhibition Attenuated Oxidative Stress Induced by HSV-1 Keratitis. WB results showed that HSV-1 keratitis stimulated Dot1l expression, which was inhibited by EPZ004777 at different concentrations (Figures 3(a) and 3(b)). Next, we investigated the relationship between Dot1l and the oxidative stress induced by HSV-1 keratitis. The decreased SOD level (Figure 3(c)) and the increased MDA content (Figure 3(d)) and H2O2 production (Figure 3(e)) induced by HSV-1 keratitis could be reversed by the Dot1l inhibitor. WB results also indicated that catalase, SOD1, and SOD2 expression was increased after HSV-1-induced keratitis, and the inhibition of Dot1l decreased their expression (Figures 3(f)-3(i)). Overall, these results indicated that Dot1l inhibition might reduce oxidative stress induced by HSV-1 keratitis.
Oxidative Stress Induced by HSV-1 Infection Depends on Dot1l in HCE Cells. First, we determined whether different HSV-1 infection times affected Dot1l expression in HCE cells. Its expression in the HSV-1 infection groups (3, 6, and 12 h) was significantly elevated compared with that observed in the control group, with the most obvious effect at 12 h post-HSV-1 infection (Figures 4(a) and 4(b)). We also found that SOD activity (Figure 4 …

3.6. Dot1l Regulated the Activation of p38 MAPK in HSV-1-Infected HCE Cells. It was reported that p38 MAPK was activated in experimental keratitis in vivo and in vitro. In this study, we found that phosphorylated p38 (p-p38) was obviously elevated after HSV-1 infection (Figures 6(a) and 6(b)); however, the total p38 level was not changed among the groups. Interestingly, the increased p-p38 level was largely suppressed by knockdown of Dot1l compared with the si-NC group. In addition, the mRNA levels of proinflammatory factors, including IL-1β, MMP-1, MMP-2, IL-6, and MMP-9 (Figures 6(c)-6(g)), were elevated after HSV-1 infection in HCE cells, and si-Dot1l alleviated their mRNA levels. These results indicated that p38 MAPK was activated by HSV-1 infection, and the inhibition of Dot1l could suppress p38 MAPK activation in HCE cells.
Inhibition of Dot1l Attenuated p38 MAPK Activation Induced by HSV-1 Keratitis. The effects of Dot1l inhibition on p38 MAPK activation observed in vitro also needed to be verified in vivo. As shown, p-p38 MAPK was increased at 7 dpi; however, its expression was inhibited by treatment with the Dot1l inhibitor (Figures 8(a) and 8(b)).
Discussion
In the present study, we focused on the effect of Dot1l in the HSK model and investigated the underlying mechanism. We found that Dot1l played an important role in HSK in mice. The results showed that the inhibition of Dot1l could alleviate corneal injury induced by HSV-1 infection in mice. Besides, oxidative stress induced by HSV-1 infection relied on Dot1l in HCE cells, and inhibition of Dot1l using siRNA blocked the inflammation and oxidative stress induced by HSV-1 infection in HCE cells. Furthermore, we also found that ROS generation was modulated by Dot1l through p38 MAPK activation. Therefore, our findings demonstrated that Dot1l might be a therapeutic target for HSK, while EPZ004777 might be an effective therapeutic agent for corneal injury induced by HSV-1 infection.
Corneal lesions caused by HSV-1 involve the direct effect of the virus and the immunoinflammatory response triggered by virus particles [15]. The HSV-1 infection model has been studied in different animals to understand the pathogenesis, biology, and immune response, and the pattern of infection differs with animal species, age, and genotype and with viral serotype and strain [16]. In this study, we observed that C57BL/6 mice were susceptible to HSV-1 infection and developed stromal keratitis typical of the human disease at 7 dpi, which led to the most serious corneal opacity. We therefore chose 7 days as the observation time point in the following experiments. Morphological results of H&E staining showed that the HSV-1-infected cornea had extensive pathologic vessel growth and CD31-positive cells at 7 dpi. These results were consistent with a previous study, suggesting that HSV-1 infection induced keratitis in mice [17].
Dot1l, as a histone methyltransferase, is correlated with mammalian development. A previous study showed that Dot1l was overexpressed in prostate cancer and associated with poor outcomes; chemical or genetic inhibition of Dot1l impaired the viability of androgen receptor-positive prostate cancer cells [18]. Other studies found that Dot1l epigenetically promoted the transcription of c-Myc via H3K79me2, while silencing or inhibition of Dot1l induced cell cycle arrest in colorectal cancer cells, suggesting that Dot1l inhibitors might be potential drugs for the treatment of colorectal cancer [19]. Until now, Dot1l has been widely studied in cancer pathogenesis and development; however, its role and function in HSK remain unknown. In this study, Dot1l inhibition reduced the elevated new vessel growth, CD31 expression, and mRNA levels of proinflammatory factors induced by HSV-1 infection, suggesting a vital role of Dot1l in the regulation of HSK progression.
Although oxidative stress plays a key role in the regulation of many biological processes, including intracellular signaling [20], it can also induce serious cellular damage under adverse conditions. An imbalance between free-radical-generating and radical-scavenging systems can lead to oxidative stress, which is associated with both noninfective and infective diseases. A previous study demonstrated that ROS-induced oxidative injury is involved in the pathogenesis of fungal keratitis in mice [21]. However, whether oxidative stress participates in HSK was still unknown. In this study, the results showed that inhibition of Dot1l decreased the elevated catalase, SOD1, and SOD2 expression induced by HSV-1, as well as MDA content, H2O2 production, and the proinflammatory cytokines. In response to si-Dot1l in HCE cells, the decreased SOD level and the increased MDA content, ROS, and H2O2 production induced by HSV-1 were reversed in HSV-1-infected HCE cells. These results indicated that inhibition of Dot1l can alleviate HSK through the regulation of oxidative stress.
Oxidative stress can also activate MAPK signaling pathways, as elevated ROS can selectively activate ERKs, JNKs, or p38 MAPKs [22]. A previous study showed that blockade of p38 MAPK activity may decrease ROS-mediated injury in the pathogenesis of fungal keratitis [21]. However, whether p38 MAPK participates in HSK was still unknown. In this study, we found that p38 MAPK was activated by HSV-1 infection, and that inhibition of Dot1l suppressed p38 MAPK activation in HCE cells. Moreover, using a p38 MAPK inhibitor, we further demonstrated that the expression of catalase, SOD1, and SOD2 and the mRNA levels of proinflammatory factors, including IL-1β, MMP-1, MMP-2, IL-6, and MMP-9, were reduced in response to the inhibition of p38 MAPK activation. In addition, the results also showed that the activation of p38 MAPK was inhibited by treatment with the Dot1l inhibitor in mouse corneas infected with HSV-1.
Conclusion
In summary, we revealed that inhibition of Dot1l protected the cornea against HSK and prevented corneal injury by modulating p38 MAPK-mediated ROS production. Overall, these results indicate that Dot1l might be a novel therapeutic target for HSK.

Figure 8: EPZ004777 alleviated p38 MAPK activation induced by HSV-1. Western blot was performed for the activation of p38 MAPK (a) after treatment with EPZ004777 (10 and 50 mg/kg) at 7 days of HSV-1 infection, with quantification of expression (b) as fold change relative to the control group. Data are expressed as means ± SD (n = 5). *P < 0.05 versus control; #P < 0.05 versus DMSO. Experiments were repeated 3 times.
Data Availability
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
"year": 2021,
"sha1": "e914aa0b8a2f53ba755c95745aa8858fd3d7db7b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/omcl/2021/6612689.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e914aa0b8a2f53ba755c95745aa8858fd3d7db7b",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Stock Pledge of Ultimate Owner Information Transparency and Construction Projects Cost
In recent years, with the rapid development of China's capital market, stock pledges by controlling shareholders have become frequent and have received increasing attention. Corporate information disclosure is the main way for investors to learn about an enterprise, and a company's information transparency also affects its construction project cost. On this basis, this paper selects all A-share construction enterprises from 2006 to 2016 as a sample to study the impact of the ultimate owner's stock pledge and information transparency on construction project costs. First, the paper examines the impact of the stock pledge on project cost. Then, following Bhattacharya's research, earnings aggressiveness and earnings smoothing are used as measures of transparency, and the relationship between transparency and construction project cost is examined with transparency as a moderating variable. The research finds that stock pledges by the ultimate owner constrain construction project expenditure and that transparency is positively correlated with construction project cost. The research also finds that improving a firm's transparency can alleviate the cost constraints caused by the pledge.
Introduction
With the rapid development since the reform and opening up (nearly 40 years), China has entered an important stage of economic transformation and industrial upgrading. The outline of the national science and technology development plan clearly states that scientific and technological progress will become a new driving force for China's economic growth. After China's entry into the World Trade Organization (WTO), economic integration has promoted China's foreign trade and brought competitive pressure on Chinese enterprises in the global market. The key to the core competitiveness of an enterprise is forming a sustained comparative advantage; the essence of enterprise core competitiveness is therefore expenditure. Against this background, this paper studies how to encourage enterprises to carry out expenditure activities.
Existing research shows that the pledge of shares by major shareholders aggravates the agency problem within the company, and the larger the pledge, the more serious the agency problem. An equity pledge refers to shareholders pledging the company shares they hold as collateral. The capital risk brought by an equity pledge can seriously affect management style: after pledging equity, controlling shareholders face both the original financing constraints and the risk of a stock price decline caused by the pledge. We therefore ask whether equity pledges affect enterprise expenditure activities. On the other hand, corporate information disclosure is an important means for investors to understand corporate business conditions. A good level of information disclosure can send positive signals about the enterprise's operation to investors and encourage them to increase investment. Therefore, whether improving information transparency can alleviate, to some extent, the inhibiting effect of financing constraints on enterprise expenditure activities is also one of the questions studied in this paper.
On this basis, this paper selects A-share listed companies in China as research samples for empirical tests, explores the financing channels of enterprise expenditure activities, and examines the relationship between equity pledges and enterprise expenditure, as well as the impact of accounting information transparency on enterprise expenditure behavior. This paper investigates the economic consequences of equity pledges in the field of enterprise expenditure activities, enriches the theoretical basis for equity pledges, and provides empirical evidence on enterprise expenditure, which is of great significance for Chinese companies in the period of social and economic transition seeking to improve their expenditure.
Equity pledges and enterprise expenditure
Equity pledges affect enterprise expenditure mainly by aggravating agency problems and expanding information asymmetry, which affects enterprises' investment in research and development and further affects the development of enterprise expenditure activities. Enterprise expenditure activity is a complex systemic project with high risk, long horizons, high adjustment costs, and high information asymmetry. The factors influencing enterprise expenditure performance depend on internal expenditure activities and on the interaction between those activities and the expenditure environment (J. Hinlooper, 1998). R&D input is one of the most important factors affecting a company's expenditure performance, and the company's R&D activities are the main carrier of expenditure.
This paper analyses the impact mechanism of equity pledges from the perspective of the characteristics of enterprise expenditure activities. First, enterprise expenditure needs continuous cash flow support and is high-risk and vulnerable to internal and external conditions. When the controlling shareholder carries out a stock pledge repurchase and the capital is used for high-risk projects, a failed investment may expose the listed company's stock price to the risk of collapse, and the controlling shareholder faces the crisis of losing control of the company. Equity pledges therefore intensify the risk aversion of controlling shareholders and reduce the company's willingness to invest in research and development. Second, the expenditure process is very long. Although technological expenditure can yield competitive advantages, under equity pledge financing controlling shareholders tend to pursue short-term performance, so the enterprise's willingness to enhance long-term competitiveness declines, and its expenditure performance declines accordingly. Third, small shareholders' funds are one of the main sources of enterprise expenditure capital. When the controlling shareholder pledges equity, it often uses earnings management and manipulated information disclosure to prop up the stock price; some investors recognize this and reduce their investment in the listed company, which creates financing constraints and leaves enterprise expenditure activities short of funds. From the above, we propose hypothesis 1 of this paper.
H1a: stock pledges by major shareholders inhibit enterprise expenditure; H1b: stock pledges by major shareholders promote enterprise expenditure.
Information transparency on enterprise expenditure under equity pledge
Many scholars believe that the stock pledge behavior of controlling shareholders aggravates tunneling by major shareholders and the second type of agency problem. This paper analyses the impact mechanism on enterprise expenditure activities from two perspectives. First, there is a substantial literature on the second type of agency problem: enterprise size, ownership structure, compensation incentive mechanisms, and other factors affect the agency problems between large and small shareholders. Financial information disclosure can alleviate agency problems within the company by reducing information asymmetry between major shareholders and small and medium-sized shareholders, and the level of corporate information disclosure affects the effectiveness of other corporate governance mechanisms (Armstrong, 2010). For example, recent studies have shown that corporate information transparency largely reflects the agency problem between large and small shareholders (Li Wenjing and Kong Dongmin, 2013). Bushman (2001), Armstrong (2010), and others conducted empirical studies on the relationship between information transparency and corporate governance mechanisms; their results show that information transparency can indeed affect the governance effect of external directors, manager evaluation, equity structure, and financing constraints.
Bider (2007) and Willd (2010) found that the more frequently a firm discloses financial information, the deeper outside investors' understanding of it becomes, which can effectively reduce internal-external information asymmetry. Shareholders then feel freer to put money into the enterprise and are willing to invest more, reducing agency costs and the cost of equity financing, alleviating the principal-agent problem under equity pledges, and thus moderating the effect of the pledge.
In addition, Madhavan (1996) examined the relationship between share prices and information transparency and found that the higher the frequency of corporate information disclosure, the higher investors' trust in the company; investors' positive expectations of returns support the stock price. This alleviates the controlling shareholder's short-term risk aversion and self-interested behavior and promotes long-term R&D investment, thereby improving enterprise expenditure performance. On this basis, we propose hypothesis 2 of this paper.
H2: improving information transparency can help alleviate the restriction of equity pledge on enterprise expenditure.
Explained variable
This paper uses the number of patents of listed companies to measure enterprise expenditure performance (He and Tian, 2013). Considering the lag of expenditure activities, this paper uses both the number of patents in the current period and the number of patents one period ahead to measure the company's expenditure performance; the data come from the CSMAR database.
Explanatory variables
In this paper, the sum of the absolute values of manipulative accruals in the current and previous two periods is used as an alternative indicator of information transparency (Hutton et al., 2009; Kim and Zhang, 2014b; Pan Yue et al., 2011). The smaller this three-period sum of absolute manipulative accruals, the lower the degree of information asymmetry and the higher the information transparency.
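The transparency proxy described above can be sketched in pandas as a per-firm rolling three-period sum of absolute discretionary accruals. This is a minimal illustration, assuming a firm-year panel in which discretionary (manipulative) accruals have already been estimated; all column names are illustrative, not from the paper:

```python
import pandas as pd

def transparency_proxy(panel: pd.DataFrame) -> pd.DataFrame:
    """Opacity = sum of |discretionary accruals| over the current and two
    prior years, per firm; a higher value implies lower transparency."""
    panel = panel.sort_values(["firm", "year"]).copy()
    panel["abs_da"] = panel["disc_accruals"].abs()
    # rolling 3-year sum within each firm (periods t, t-1, t-2)
    panel["opacity"] = (
        panel.groupby("firm")["abs_da"]
        .rolling(window=3, min_periods=3)
        .sum()
        .reset_index(level=0, drop=True)
    )
    return panel

demo = pd.DataFrame({
    "firm": ["A"] * 4,
    "year": [2013, 2014, 2015, 2016],
    "disc_accruals": [0.05, -0.02, 0.04, -0.01],
})
result = transparency_proxy(demo)
# e.g. opacity for 2015 = 0.05 + 0.02 + 0.04 = 0.11
```

The first two years of each firm are left missing (`min_periods=3`), matching the requirement that three consecutive periods of accruals be observed.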
Pledgerate: the cumulative pledge rate of major shareholders in the year, calculated as the ratio of the number of shares pledged by major shareholders to the total number of shares they hold.
Control variables
• Asset-liability Ratio (Lev): companies with a high asset-liability ratio have a higher probability of financial difficulties.
• Investment Opportunity (TobinQ): this paper uses Tobin's Q as an alternative indicator of business investment opportunities.
• Growth Ability (Growth): this paper uses the growth rate of operating revenue to measure the growth ability of enterprises: operating income growth rate = Δ operating income / previous operating income.
• Enterprise Size (LnSize): this paper takes enterprise size as a control variable to eliminate the impact of firm size on expenditure performance; large enterprises with a high position in the asset market have a larger competitive advantage.
• Cash Holdings (Cash): this paper uses the cash stock at the beginning of the period divided by total assets at the beginning of the previous period to calculate the cash holding rate.
• Enterprise Age (Age): the age of the listed company is used, calculated by subtracting the listing year from the current year of the sample.
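The variable definitions above translate directly into column arithmetic. The following pandas sketch constructs the pledge rate and the control variables from raw firm-year fields; every input column name here is an illustrative assumption, not the CSMAR field name:

```python
import numpy as np
import pandas as pd

def build_controls(df: pd.DataFrame) -> pd.DataFrame:
    """Construct Pledgerate and the control variables defined above."""
    out = df.copy()
    out["pledgerate"] = out["shares_pledged"] / out["shares_held"]   # pledged / held
    out["lev"] = out["total_debt"] / out["total_assets"]             # asset-liability ratio
    out["growth"] = out["revenue"].pct_change()                      # Δ revenue / prior revenue
    out["lnsize"] = np.log(out["total_assets"])                      # enterprise size
    out["cash_hold"] = out["cash_begin"] / out["assets_begin_prev"]  # cash holding rate
    out["age"] = out["year"] - out["listing_year"]                   # listing age
    return out

demo = pd.DataFrame({
    "year": [2015, 2016],
    "listing_year": [2010, 2010],
    "total_debt": [60.0, 70.0],
    "total_assets": [100.0, 140.0],
    "revenue": [50.0, 60.0],
    "cash_begin": [10.0, 12.0],
    "assets_begin_prev": [80.0, 100.0],
    "shares_pledged": [23.0, 30.0],
    "shares_held": [100.0, 100.0],
})
ctrl = build_controls(demo)
```

Note that `pct_change` as written assumes a single-firm frame sorted by year; in a multi-firm panel the growth rate would be computed within a `groupby("firm")`.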
Empirical Model
In order to test the impact of equity pledges on enterprise expenditure and the moderating effect of information transparency, this paper designs the following model for data regression:

App = α0 + α1·Pledgerate + α2·Tran + α3·Pledgerate*Tran + Σ Controls + ε,

where the α are the coefficients, App represents enterprise expenditure achievements, Pledgerate represents the cumulative equity pledge rate, Pledgerate*Tran is the cross product of Pledgerate and Tran, and the rest are control variables, the same as in models (1) and (2). Descriptive statistics were obtained with statistical software. As shown in Table 1, among all sample companies the maximum number of patent applications is 4236, the minimum is 0, and the standard deviation is 190.53, indicating large differences in expenditure achievements across enterprises. The mean and standard deviation of the equity pledge rate are 23% and 0.36, indicating that controlling shareholders of listed companies pledge about 23% of their equity on average. The values of the other variables are also within reasonable ranges. First, Table 2 shows that in model (1), among the control variables, all except the asset-liability ratio and the cash flow of operating activities significantly influence enterprise expenditure achievements at the 1% level, and the asset-liability ratio is significant at the 5% level. Second, in model (2), the regression coefficient of Pledgerate is significantly negative, indicating that as the intensity of the controlling shareholder's stock pledge increases, the enterprise's expenditure achievements decrease. In other words, the controlling shareholder's stock pledge and enterprise expenditure are negatively correlated, supporting hypothesis H1. Finally, the coefficient of the cross-product term Pledgerate*Tran is significantly positive, indicating that the lower the information transparency, the stronger the inhibitory effect of the equity pledge on enterprise expenditure.
On the other hand, improving information transparency helps ease the inhibitory effect of the equity pledge on expenditure, consistent with hypothesis H2.
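The moderation regression described above (main effect plus interaction term) can be sketched with ordinary least squares on simulated data. The following is a minimal NumPy illustration; all coefficients, variable ranges, and the sign convention (Tran coded here as "opacity", so higher values mean lower transparency) are assumptions for the demonstration, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
pledgerate = rng.uniform(0, 1, n)
opacity = rng.uniform(0, 0.3, n)      # higher value = lower transparency
lnsize = rng.normal(8, 1, n)

# Simulated data-generating process (illustrative coefficients):
# pledging depresses patent output, more strongly when opacity is high.
app = (5.0 - 2.0 * pledgerate - 4.0 * pledgerate * opacity
       + 0.5 * lnsize + rng.normal(0, 0.5, n))

# Model: App = a0 + a1*Pledgerate + a2*Tran + a3*Pledgerate*Tran + controls
X = np.column_stack([np.ones(n), pledgerate, opacity,
                     pledgerate * opacity, lnsize])
beta, *_ = np.linalg.lstsq(X, app, rcond=None)
b_pledge, b_interact = beta[1], beta[3]   # main effect and interaction
```

With this coding, a negative `b_pledge` mirrors H1a, and the interaction coefficient captures how the pledge effect varies with transparency; the sign reported in the paper depends on whether Tran measures transparency or opacity.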
Robustness analysis
In order to ensure more robust and reliable research results, we perform the following robustness test in addition to the main test: when the natural logarithm of the number of patent applications plus one, ln(APP+1), is used to measure enterprise expenditure performance, the conclusions remain unchanged.
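The ln(APP+1) transformation used in the robustness test maps the right-skewed patent counts onto a compressed scale while keeping zero-patent firms in the sample. A one-line NumPy sketch (the sample counts are illustrative):

```python
import numpy as np

patents = np.array([0, 3, 42, 4236])   # illustrative raw patent counts (App)
ln_app = np.log1p(patents)             # ln(APP + 1), well defined at zero

# log1p keeps zero-patent firms usable: ln(0 + 1) = 0,
# whereas ln(0) would be undefined and drop those observations.
```

`np.log1p` is also numerically more accurate than `np.log(x + 1)` for very small counts, though for integer patent data either form gives the same regression variable.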
Conclusions
• There is a significant negative correlation between stock pledges by controlling shareholders and enterprise expenditure performance.
Although an equity pledge can temporarily solve the problem of capital shortage, it aggravates the agency conflict between the company's large and small shareholders. Fearing the transfer of control, controlling shareholders focus on the preparation of false financial statements rather than on the long-term strategic development of the enterprise. In expenditure decisions, controlling shareholders reduce their investment in expenditure research and development, and the enterprise's expenditure performance declines accordingly.
• The improvement of information transparency helps alleviate the inhibition of enterprises' expenditure activities that follows an equity pledge.
Controlling shareholders worry about investors' doubts regarding the enterprise's operation after the pledge of shares, so they choose to whitewash financial statements and hinder real information disclosure. For companies with lower information transparency, investors reduce their investment, constraining enterprise expenditure activities.
Recommendations
• For controlling shareholders, an equity pledge expands financing channels and can relieve financing constraints. However, the possible negative impact of the equity pledge must be recognized. Controlling shareholders should strive to improve business performance and increase the company's core competitiveness in order to raise the share price, rather than rely on information disclosure management.
• Regulatory authorities should improve the legal system for equity pledges. Current laws and regulations contain no specific provisions on the pledgee, the pledged shares, pledge information disclosure, and other aspects.
• As for enterprises, corporate governance must be improved. In addition to the provisions of the company's articles of association, the responsibility of the pledging party must also be strengthened through the construction of laws and regulations.
"year": 2019,
"sha1": "626cc89e38d540e823bc3f34d60d447ccbf202e0",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/688/5/055027",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dd838fe73f5fa4b866760d59031c87b57890fed5",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
Legal Regulation Of Social Networks: Necessity And Prospects (On The Example Of The Republic Of Uzbekistan)
This article analyzes the scientific and practical issues of the regulation of social networks, in particular the need for legal regulation of social networks in the context of the digital development of the Republic of Uzbekistan. The scientific novelty of the research consists in the fact that the article is the first to explore the formation of a legal basis for social networks, the regulation of relations in social networks, and the protection of the rights, freedoms, and interests of legal entities and of state information security. The study is considered important from the point of view of a fundamental study of legal relations in social networks in Uzbekistan. The practical significance of this article lies in the possibility of using the results obtained in the course of the study in the implementation of the tasks provided for in the Laws of the Republic of Uzbekistan «On Informatization», «On the Principles and Guarantees of Freedom of Information», and «On Guarantees and Freedom of Access to Information».
Introduction
Today, in the context of globalization, the virtualization of relations and the digitalization of almost all forms of activity have become an irreversible and integral element of our life. Virtual space, especially social networks, has expanded the possibilities of relationships between members of society and created a new form of public relations on the World Wide Web. Today, it is impossible to imagine everyday life without social networks.
According to data from the analytical companies We Are Social and Hootsuite, the audience of social networks reached 3.96 billion people, or 51% of the world's population, in the first half of 2020 [1]. Over the past year, the number of social media users grew by 10.5%, with about a million people registering on various platforms every day; 99% of them access social networks via smartphones. According to official data from the Ministry for the Development of Information Technologies and Communications, as of December 2020 more than 22.5 million Uzbek citizens had Internet access [10]. According to Datareportal, at the beginning of 2020 the number of social network users was 3.2 million people [2].
From ordinary citizen-users to government agencies, social networks are becoming the most popular platform for interaction. The increasing influence of social networks on people's consciousness indicates, on the one hand, their convenience and efficiency for members of society, but on the other hand significantly increases the risk of their negative use, turning them into a «tool» for intruders and other destabilizing elements. In turn, the absence of any regulatory mechanisms, in particular legal norms regarding social networks, requires timely and adequate response measures on the part of the state. The ongoing COVID-19 coronavirus pandemic makes it even clearer that delay in taking measures to regulate social networks can lead to various problems of a political, social, and economic nature.
Methods

The study used various methods from the theory and practice of jurisprudence. In particular: the method of analysis, which is the conditional consideration of social networks as objects of legal research by separating into parts the relations subject to protection, the participants in those relations, and the status of the social networks themselves and of their users (bloggers), highlighting their main or essential features, on the basis of which a specific proposal is developed; the method of synthesis, which is the study of the various components of relations that arise in social networks by combining their parts; and the critical-legal method of analogy, which tests the theoretical provisions on the legal status of social networks, freedom of access to information, and the liberalization of the information environment against the criteria of fairness, objectivity, and sufficiency, as well as the need to prevent excessive freedom that violates the rights and legitimate interests of others.
The critical analysis of the legislation made it possible to draw the following conclusions about the priorities for improving legislation on the legal aspects of social networks: an inventory of national legislation for implementation, bringing it into line with international standards and best foreign practices; determination of the legal status of social networks and their subjects and of the categories of legal relations requiring legal regulation, by preparing proposals for the development of legal norms and for amendments and additions to certain legislative acts of the Republic of Uzbekistan; creation of a conceptual framework for the regulation of social networks, ensuring information security and developing standards for the regulation and filtering of content in social networks; and elimination of legal gaps and conflicts in legislation that hinder the effective functioning of the system for protecting the rights and legitimate interests of individuals, society, and the state in social networks, ensuring the effectiveness of measures of responsibility for offenses in social networks.
Discussion

The implementation of measures on the legal regulation of social networks corresponds to the goals of the Laws of the Republic of Uzbekistan «On Informatization», «On Principles and Guarantees of Freedom of Information», and «On Guarantees and Freedom of Access to Information», the Strategy of Action on the five priority areas of development of the Republic of Uzbekistan in 2017-2021, and the objectives of the Comprehensive Program for the Further Development of National Content on the Internet for the period 2019-2021 [3], which call for the continuous study of trends in the development of the global and national segments of the Internet, the formation of proposals for improving the mechanisms for developing national Internet content, and the improvement of the regulatory framework.
Meanwhile, the needs of the development of science, education, and the digital economy demonstrate the importance of developing national content in domestic social networks, expanding its presence in foreign information resources of the Internet, and improving the activities of social networks and messengers in Uzbekistan.
At the same time, the legal regulation of social networks will also solve the problems of ensuring information security in the context of globalization. A timely and proper response to offenses, threats, and the dissemination of false information in social networks will ensure the creation of the necessary conditions and guarantees for free access to information, the protection of privacy, and protection from illegal informational and psychological influence.
Preventing illegal informational and psychological influence on public consciousness and its manipulation, and creating a system to counteract information expansion aimed at deforming national identity, separating society from historical and national traditions and customs, destabilizing the socio-political situation, and violating interethnic and interfaith harmony, require the early adoption of a legal framework for social networks in the Republic of Uzbekistan. The existence of a regulatory framework provides an opportunity to monitor content in social networks and to regulate and filter false information.
Regulating the legal aspects of social networks in the context of globalization and virtualization means creating an effective and adequate mechanism for the legal regulation of social networks and the relations emerging in them, forming a conceptually new approach to virtual relations, in particular to social networks as a new phenomenon of modern digital development, and searching for evidence-based and practice-oriented proposals for solving problems in the regulation of social networks and the activities of their users.
At the same time, an analysis of the current legislation shows that there are no norms regulating the national segment of the Internet and social networks, and the existing general norms are insufficient for optimally solving problems in social networks: the regulation of civil relations; the protection of the individual, society, and the state; and the assurance of information security. Having analyzed the current legislation of the Republic of Uzbekistan, and drawing on the practical experience of developed countries [4], we can conclude that today's legislation does not reflect the features of virtual legal relations and modern cyber law. Moreover, the status of social networks, their users, and their owners is not defined, legal relations in social networks are not regulated, and there is no practical mechanism for protecting privacy, personal data, the business reputation of companies, or state security. Nor is the procedure for resolving disputes, including international commercial disputes, regulated [5]. Thus, a scientific study of the legal aspects of social networks is necessary.
It is necessary to take into account that theoretical support should clearly define the general principles of the legal regulation of social networks and the conceptual framework for ensuring information security, which should then be implemented in practice in the development of draft laws. In our opinion, in this regard it is necessary to take into account the experience of national authors such as S. Gulyamov [6], I. Rustambekov [7], and A. Rasulev [8].
It should be noted that legal regulation should take into account the entire national legal system in order to create a complete and fully harmonized set of rules that ensure both proper regulation and freedom of access to information.
Research and Analysis

The legal aspects of social networks are a complex of relations whose regulation is complex and multifaceted, and their scientific research is relevant.
First, the legislation of the Republic of Uzbekistan does not determine the legal status of social networks, their owners, or their users. The issues of regulating virtual space are reflected in general terms only in the norms of the Laws of the Republic of Uzbekistan «On Informatization», «On the Principles and Guarantees of Freedom of Information», and «On Guarantees and Freedom of Access to Information» [3]. However, these legal acts do not provide mechanisms for regulating social networks and their subjects (the texts of the laws do not contain the concept of «social networks»). A clear definition of the legal status of social networks will make it possible to distinguish them from the mass media, from electronic platforms for certain types of activities (e-commerce, provision of services), and from interaction between various entities (circulation through social networks). Secondly, social networks have become a widely popular platform for carrying out various types of activities. Social networks have become an excellent environment for trade and commercial activities (various Telegram bots that accept orders for the delivery of food and other goods). Many business structures (entrepreneurs, companies) actively conduct their activities through social networks, in particular in the Telegram messenger (for example, the food ordering services EVOS and Street 77 Burger) and on Instagram.
In turn, for government agencies and organizations, the work in social networks has become the most important indicator of their openness and transparency. Social networks have become a convenient platform for dialogue with the population, working with appeals of individuals and legal entities. The information policy of state bodies and organizations is now evaluated not by the availability of information in the official websites, but by the speed of the information provided in social networks. By itself, the presence of an official page of state bodies in social networks already indicates that the social network has ceased to be a platform for personal correspondence and communication between people. Today, the social network is an important platform for society and the state. Modern users look at the social network as a platform for obtaining official information.
According to the UReport survey, the most popular sources of information about changes in legislation and new laws used by the survey participants today are the Telegram channels of the Ministry of Justice (34%) and the Internet media (22%). At the same time, social networks are the most popular source among all age groups of respondents [11].
The presence of various types of relations on the Internet allows us to distinguish both civil legal relations (the provision of services and the delivery of goods through social networks) and public relations (appeals to state bodies, information policy, etc.). Without any doubt, the unsettled nature of these relations entails: the lack of proper control over traditional relations (for example, taxation of the provision of services and delivery of goods through social networks is problematic, because it is difficult to maintain quantitative and qualitative accounting of the taxable object in this format of activity); and the chaotic nature of relationships and the formation of a sense of irresponsibility and impunity (the lack of legal norms leads to problems in bringing offenders to justice), as a result of which the rights and freedoms of the individual are violated in the virtual space.
Third, social networks, unfortunately, have become a place of violation of the rights and interests of the individual, society and the state. Increasingly, offenses and other illegal actions are committed in social networks. It is not for nothing that the world community is interested in creating an effective mechanism to counter threats in social networks. The conditions of the pandemic showed an increase in various financial frauds and fraudulent operations.
According to gazeta.ru, during the coronavirus pandemic the number of calls from fraudsters in Russia increased by 200%. In the first half of 2020, fraudsters stole 4 billion rubles from the accounts of bank customers [12].
The virtual nature of relations in social networks expands the range of violations, often offenses in social networks are committed on the territory of several States, which indicates the need to take measures to respond to these offenses.
One of the key problems is the regulation of the activities of bloggers. According to the Law of the Republic of Uzbekistan «On Informatization», a blogger is defined as an individual who places publicly available information of a socio-political, socio-economic and other nature on their website and/or website page on the world information network, including for discussion by information users. As can be seen from the definition, there are no specific requirements for a blogger, which indicates that anyone can become a blogger. The Law «On Informatization» prohibits a blogger from calling for a violent change in the existing constitutional order or the territorial integrity of the Republic of Uzbekistan; promoting war, violence and terrorism, as well as ideas of religious extremism, separatism and fundamentalism; disclosing information that constitutes state secrets or other secrets protected by law; spreading information that incites national, racial, ethnic or religious hostility, as well as discrediting the honor and dignity or business reputation of citizens or allowing interference in their private lives; promoting narcotic drugs, psychotropic substances and precursors, or pornographic materials; and performing other actions that entail criminal and other liability in accordance with the law. However, the legislation contains no effective measures for bringing bloggers to justice or for preventive control over such actions. This, in turn, entails the expansion of such violations.

According to a survey conducted by UReport, almost half (49%) of respondents believe that there is a problem with fake news in Uzbekistan. Respondents suggested strengthening state control/verification/supervision over social networks and blocking access to sources that publish fake news, especially in social networks [11].
At the same time, there are problems with Internet harassment in social networks, which negatively affect the consciousness of people, in particular young people. Dubbed «cyberbullying», this negative phenomenon has an impact on the minds of young people. Although the Law of the Republic of Uzbekistan «On the protection of children from information harmful to their health» provides for the protection of children from negative information, the provisions of the Law do not provide for the specifics of protecting children in the virtual space, filtering information in social networks. Cyberbullying is characterized by systematic and purposeful behavior, and the inability to identify the perpetrators. Also, the consequences of cyberbullying are expressed in a violation of the mental state and psychological balance, sometimes suicide or an attempt on it.
According to a Mail.ru study, more than 70% of Russian schoolchildren have been bullied, harassed or insulted on the Internet, while a total of 58% of Russians have experienced online bullying. At the same time, about 76% of teenagers do not tell their parents about bullying [13].
The next important aspect is expressed in the regulation of the protection of privacy and personal data, preventing the dissemination of false and offensive information in social networks. Today, you can find many examples of illegal use of personal data of celebrities in their accounts and pages on social networks.
The State Unitary Enterprise «Center for Cybersecurity» noted that there are 40,640,449 freely available lines in the Telegram messenger around the world, of which 50,062 lines belong to the numbers of Uzbek mobile operators. A preliminary study showed a match of more than 50% of the compromised data, which indicates the relevance of the threat to the Uzbek segment of Telegram messenger users.
The urgency of this problem is particularly evident in the current global pandemic of the COVID-19 coronavirus infection caused by the SARS-CoV-2 coronavirus. In this regard, a quick and adequate response to these threats and the adoption of preventive measures in the form of legal norms regulating this area are of key importance.
Conclusion

In connection with the above, the following important issues should be highlighted in the legal regulation of social networks: the legal status of social networks and their owners and users; regulation of relations arising in social networks; and problems of protecting the rights and interests of individuals, society and the state in social networks, including countering offenses.
Solving the problems of regulating the activities of social networks is of both theoretical and practical importance. The presence of complex problems indicates the importance of theoretical research of the essence of social networks within the framework of a complex system of cyber law, since traditional branches of law (civil law, administrative law) are not fully able to ensure the effectiveness of regulation and proper protection of the rights and freedoms of the individual, society and the state. While the legislation regulates some aspects of these problems in various legal acts, the lack of a comprehensive approach to social networks leads to the emergence of multi-vector problems.
It is necessary to take into account the fact that information and communication technologies are already developing rapidly in Uzbekistan, and digitalization is becoming the ultimate goal of reforming various state spheres (judicial proceedings, public services). Therefore, the delay in regulating social networks will hinder the development of the digital economy.
Therefore, social networks should be studied through the prism of legal norms in the context of modern realities.
Thus, an effective legal basis for regulating relations in social networks is an objective necessity that determines the state policy in the information sphere in the coming decades. What follows is the need to regulate the legal aspects of social networks in the context of globalization and virtualization: determining the legal status of social networks, regulating legal relations in social networks, protecting the subjects of virtual relations.
As part of the legislative changes, it will be necessary to conduct research in the following main areas. First, a comparative legal analysis and study of the best foreign experience in the implementation of the legal mechanism for regulating social networks. Secondly, the inventory of the legislation of the Republic of Uzbekistan, the development of conceptual proposals for the development of norms, as well as amendments and additions to the current legislation of the Republic of Uzbekistan.
Characterization and potential applications of silver nanoparticles: an insight on different mechanisms
In the 21st century, great interest is devoted to the biomedical application of various nanoparticles, particularly as a means of improving the effectiveness of therapy for different diseases. Silver nanoparticles (AgNPs) are among the most studied types of nanoparticles. Due to the wide spectrum of their action, silver nanoparticles may be used both to influence pathogenic microorganisms and to improve the treatment of cancer. The basic physicochemical characteristics and stabilizing agents play an important modifying role in the pharmacokinetics and pharmacodynamics of nanoparticles, determining the severity of the caused effect and their potential toxicity. This review summarizes the main physicochemical properties of AgNPs and their impact on the biological effects. Additionally, biochemical and pathophysiological mechanisms of silver nanoparticle activity against various microorganisms and tumor cells are considered. Finally, we address the problems associated with determining the optimal characteristics of nanoparticles in order to increase their efficiency and reduce their toxicity for the macroorganism.
Introduction
Currently, research on the biomedical properties of various nanoparticles, synthesized from gold or silver, attracts considerable interest. Despite the lack of data on the mechanisms of silver toxicity in vivo, nanoparticles obtained from it are widely used as antibacterial substances in the medical, cosmetic, food and textile industries. Silver nanoparticles (AgNPs) have unique binding properties, including intrinsic antimicrobial activity and potentially low toxicity. Silver, and in particular its ions, has the strongest antibacterial effect among metals; AgNPs may be used in wound dressings, local drug delivery systems, orthopedic and orthodontic materials, antiseptic solutions, catheters, bandages, tissue scaffolds and protective clothing [1][2][3]. Thus, AgNPs can be an effective alternative to local antibacterial drugs because of their special physicochemical properties and wide spectrum of action against various Gram-negative (Escherichia coli, Neisseria gonorrhea, P. aeruginosa) and Gram-positive (S. aureus, including MRSA) bacteria, as well as intracellular microorganisms (Chlamydia trachomatis) and different viruses [3][4][5]. AgNPs may also be considered as a means to modernize treatment regimens for mycobacterial infections such as tuberculosis [6]. The systematic review by Fakhruddin et al. demonstrates evidence of antimicrobial properties of silver nanoparticles against cariogenic flora in vitro and of prevention of dentin destruction [7].
Silver nanoparticles may interact with bacterial membranes and penetrate into the cell, subsequently disrupting vital functions and causing structural changes that lead to the destruction and death of pathogenic microorganisms [15]. The key factor is the activation of oxidative stress processes leading to dysfunction of cellular structures such as DNA (a genotoxic effect). Disturbance of protein structure occurs due to the formation of Ag-S complexes, which leads to malfunction of membrane pumps and the respiratory chain. Activation of lipid peroxidation processes is another important factor of cellular dysfunction [16].
Data sources
A literature review on the properties of silver nanoparticles was performed using the following databases: MEDLINE (PubMed interface), SCOPUS, Cochrane Library, Google Scholar. The published data on the in vitro and in vivo studies were accessed between January 1990 and December 2021. Backward and forward reference searching was applied to find the most relevant articles.
Inclusion and exclusion criteria
The current review includes predominantly full-text articles presented in the English language and focused on the chemical, physical and biological properties of silver nanoparticles. The selected articles were original studies (observational and experimental), systematic reviews, narrative reviews and meta-analyses. Studies were excluded if they were not consistent with the research objectives and the actual purposes of the current review. All studies were carried out in vitro (cell lines) or on mice/rats due to ethical issues.
The key research objectives were: 1. Studying different chemical and physical properties of AgNPs, which can change biological effects.
2. Uncovering some important biological and biochemical mechanisms of AgNPs effects and toxicity against bacteria, viruses, fungi, protozoa, and cancer.
3. Elucidating potential harm and toxicity of AgNPs for a macroorganism and a host.

Figure 1. Characteristics of silver nanoparticles and different ways of AgNPs synthesis. Adapted from refs [17,18].
The current review was prepared according to a scale for the quality assessment of narrative review articles (SANRA) [25].

Results and discussion

Size

In general, smaller AgNPs exhibit higher antibacterial activity [27]. This rule is explained by the fact that small AgNPs have a larger specific surface area and more surface-active centers, resulting in an increase in the Ag+ release rate and, therefore, toxicity [28,29]. Dobias et al. demonstrated that in natural ponds, the dissolution of 5 nm AgNPs occurred faster than that of 10 nm silver nanoparticles, and smaller nanoparticles released more silver ions than 50 nm AgNPs [30]. Morones et al. showed that only nanoparticles with a diameter of 10 nm or less could penetrate through the bacterial cell membrane [31]. In addition, applying nanoparticles with a size of 8.3±1.9 nm increases the damage to cellular DNA by influencing nucleotide excision repair [32]. Conversely, Bélteky et al. noticed that although nanoparticles with the smallest diameter have the greatest toxicity, an increase in nanoparticle size promotes colloidal stability and also provides greater resistance to environmental conditions and aggregation [33].
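The inverse relationship between particle diameter and ion-releasing surface can be made concrete with a back-of-the-envelope calculation. The sketch below is an illustration rather than a calculation from the cited studies; it assumes ideal spherical particles and a textbook bulk silver density of 10.49 g/cm³.

```python
# Specific surface area (area per unit mass) of a spherical particle:
# SSA = surface / (density * volume) = 6 / (rho * d),
# so halving the diameter doubles the surface available for Ag+ release.

RHO_AG = 10.49e3  # bulk silver density, kg/m^3 (assumed textbook value)

def specific_surface_area(d_nm: float) -> float:
    """Return SSA in m^2/g for a spherical AgNP of diameter d_nm."""
    d_m = d_nm * 1e-9
    ssa_m2_per_kg = 6.0 / (RHO_AG * d_m)
    return ssa_m2_per_kg / 1000.0  # convert to m^2/g

for d in (5, 10, 50):
    print(f"{d:>2} nm: {specific_surface_area(d):6.1f} m^2/g")
```

For the sizes discussed above, 5 nm spheres expose roughly 114 m²/g versus about 11 m²/g for 50 nm spheres, an order of magnitude more surface on which Ag+ can form.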
Shape
Another property that determines the antibacterial activity of silver nanoparticles is their shape. There are quasispheres, nanotubes, rods, disks, cubes, prisms, octahedra, and triangular nanoplates [34][35][36][37] (Figure 2). Pal et al. investigated the antibacterial activity of spherical, rod-shaped, and triangular silver nanoparticles against E. coli. Truncated triangular nanoplates demonstrated the highest bactericidal activity compared to nanospheres and nanorods. The result could be explained by the presence of special active faces that determine their high reactivity. This triangular structure of nanoplates promotes more effective interaction with the bacterial cell, leading to its lysis [38]. On the contrary, in one of the latest studies, triangular nanoplates showed a smaller antibacterial effect compared to nanospheres against E. coli, S. aureus and P. aeruginosa. This can be put down to the fact that the surface area of the nanospheres (1.307±5 cm²) exceeded the surface area of the triangular nanoplates (1.028±35 cm²) [39]. In other studies, the greatest bactericidal activity was also demonstrated by nanoparticles with the largest surface area, which caused the accelerated formation of ions on the nanoparticle surface during their dissolution, thereby increasing the antibacterial effect [40,41].
Concentration of nanoparticles
The dissolution of silver nanoparticles depends on their initial concentration. If this concentration is lower than the solubility of AgNPs (which depends on the size of the nanoparticles, the presence of ligands forming a complex with silver (I), and the physicochemical properties of the solution), all nanoparticles will eventually dissolve. Aggregation also explains why the initial concentration of AgNPs affects the release kinetics of silver ions. For example, if other parameters are fixed, a higher initial concentration of nanoparticles results in a slower initial release of silver ions [43]. This is explained by the fact that at a high initial concentration AgNPs tend to aggregate more rapidly, which decreases the soluble AgNPs surface area [44]. An initial AgNPs concentration ranging from 300 to 600 μg/L increases the aggregation rate for all three sizes of AgNPs (20, 40 and 80 nm) [45]. The antibacterial activity of AgNPs is also determined by the concentration of nanoparticles; Panáček et al. demonstrated this concentration dependence [48]. In addition, the minimal concentration of AgNPs required for antibacterial activity is mainly determined by their shape. Thus, truncated triangular silver nanoparticles showed inhibition of bacterial growth at a concentration of 1 μg, while nanospheres and nanorods induced an inhibiting effect at concentrations of 12.5 μg and 50-100 μg, respectively [31,38]. However, the shape cannot be considered as a single factor that affects antibacterial activity, because the particle size varies with the form, which influences the overall rate of particle dissolution [34].
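The observation that a higher starting concentration slows the relative ion release can be rationalized with the classical Smoluchowski model of rapid coagulation, in which the free particle number density decays as dN/dt = -kN². This is a standard colloid-chemistry model, not one fitted in the cited studies, and the rate constant below is purely illustrative.

```python
# Smoluchowski rapid-coagulation model: dN/dt = -k*N^2 integrates to
# N(t) = N0 / (1 + k*N0*t), so the aggregation half-time is 1/(k*N0):
# doubling the initial particle concentration halves the time in which
# half of the free (ion-releasing) particles are consumed by aggregation.

def particles_remaining(n0: float, k: float, t: float) -> float:
    """Free particle density after time t (same units as n0)."""
    return n0 / (1.0 + k * n0 * t)

def half_time(n0: float, k: float) -> float:
    """Time for the free particle density to fall to n0/2."""
    return 1.0 / (k * n0)

K = 1e-18                  # illustrative collision rate constant, m^3/s
for n0 in (1e15, 2e15):    # hypothetical particle densities per m^3
    print(f"N0 = {n0:.0e}: half-time = {half_time(n0, K):.1f} s")
```

Because the aggregated fraction grows faster at higher N0, a larger share of the initial surface area is lost early on, which is consistent with the slower initial Ag+ release reported above.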
Stabilizing agents
As a rule, stabilizing agents are used during AgNPs synthesis to provide electrostatic repulsion between individual particles and to prevent their aggregation [43]. As opposed to other silver forms, nanoparticles exhibit high surface energy values due to their small size and, therefore, they are more likely to clump and form agglomerates. Choosing a suitable stabilizing agent is an important requirement for nanoparticle stabilization, because the coating agent affects the structural properties of the nanoparticle, including its size, shape, surface charge, and interaction with the environment [49]. It is worth noting that coatings probably detach from the surface after interacting with the environment [50]. Among the stabilizing mechanisms of coating agents are electrostatic stabilization, steric stabilization, stabilization by hydration forces, depletion stabilization and stabilization by the Van der Waals forces [51]. Organic coating agents are widely used as stabilizers for silver nanoparticles. In some cases, the stabilizing agent acts simultaneously as a reducing agent of Ag+ ions to Ag0 [52]. Also, organic molecules contribute to the complexation of silver ions, thereby accelerating their dissolution [53]. The most common coating agents for silver nanoparticles are citrate, polyvinyl alcohol (PVA), sodium dodecyl sulfate (SDS), polyvinylpyrrolidone (PVP) and Tween 80 [54][55][56][57][58]. Citrate is one of the most commonly used stabilizers and reducing agents for AgNPs synthesis. Citrate-coated particles are electrostatically stabilized by negatively charged anions. However, as the pH value decreases, the citrate anion protonates, which causes the loss of stabilization [59]. In vivo, AgNPs capped by citrate or PVP demonstrate greater antibacterial activity against Salmonella compared to uncapped AgNPs, which could be explained by minimal interaction with serum proteins.
Uncapped AgNPs, by contrast, lose their antibacterial activity due to interaction with bovine serum albumin (BSA) [58]. Kvítek et al. proved that any of the three mentioned stabilizing agents (Tween 80, sodium dodecyl sulfate (SDS) or polyvinylpyrrolidone (PVP)) increased the antibacterial activity and provided excellent stabilization of the silver nanoparticle dispersion against aggregation. Among all AgNPs ligands, SDS-modified AgNPs proved to be the most stable due to electrostatic repulsion and the steric effect. SDS-modified AgNPs demonstrated the highest antibacterial activity, associated with good silver nanoparticle dispersibility and effective interaction with the cell membrane [60]. Ajitha et al. found that PVA-coated AgNPs had the smallest size and demonstrated high stability and antibacterial activity compared to nanoparticles stabilized by other coatings [49].
Currently, the biosynthesis of metal and metal oxide nanoparticles using biological agents such as bacteria, fungi, yeast, plant and algae extracts has gained popularity in the field of nanotechnology [61]. Thus, AgNPs synthesized by various microorganisms provide high stability due to the fact that microbes produce large amounts of protein [62]. AgNPs synthesized using a cyanobacterial extract of Oscillatoria limnetica have high antibacterial activity against multidrug-resistant bacteria (Escherichia coli and Bacillus cereus) as well as cytotoxicity against breast cancer and colon cancer cells at low concentrations of 6.147 μg/ml and 5.369 μg/ml, respectively [63]. Plants contain carbohydrates, fats, proteins, nucleic acids, pigments and several types of secondary metabolites that act as stabilizers and reducing agents in the biosynthesis of silver nanoparticles [64]. Caffeine and theophylline are widely used as stabilizing agents and can be found in water-alcohol extracts of Coffea arabica and Camellia sinensis as well as in extracts of black tea [65,66]. Utilization of fungi as reducing and stabilizing agents in the biogenic synthesis of silver nanoparticles is also attractive because they produce large amounts of protein. During biological synthesis, nanoparticles are coated with biomolecules derived from the fungus, which leads to improved stability and increased biological activity [17,67]. Konappa et al. capped silver nanoparticles with secondary metabolites secreted by the T. harzianum fungus. The obtained AgNPs had high stability as well as a wide range of antibacterial activity against two gram-positive bacteria (S. aureus and B. subtilis) and two gram-negative bacteria (E. coli and R. solanacearum) [68].
Silver nanoparticles can also be stabilized with polymeric carbohydrates such as starch, sodium alginate and chitosan [69][70][71]. Muhammad et al. used sericin (a protein that is a component of silk) as a stabilizing agent. Sericin-coated nanoparticles proved to be highly effective against bacteria and maintained stability over a wide range of temperatures and pH values. The authors suggested wide use of sericin in the future because of its low cost and high stability [72]. Hydroxyl groups of sericin form complexes with silver ions, thus preventing their aggregation and deposition [71]. Azócar et al. used diclofenac (d) and ketorolac (k), which are widely used as anti-inflammatory drugs in medicine, as stabilizing agents. The results demonstrated that AgNPs-k were more stable than the uncoated nanoparticles. Under the influence of UV light (wavelength of 365 nm), capped nanoparticles generated anion radicals. This effect is probably associated with the capping agents, because bare nanoparticles do not promote the formation of superoxide anion [73].
Surface charge of the nanoparticles
Gao et al. found that the dispersion and stability of AgNPs are related to their surface charge. A negative surface charge contributes to the electrostatic stabilization of the nanoparticles against aggregation. However, the charge of dispersed particles can change depending on the pH value. The surface charge of AgNPs becomes more negative at higher pH, as confirmed by the high zeta potential value of -32.5 mV, which promotes the stability of the suspension. Conversely, lower pH values of 5 and 3 are characterized by a decrease in zeta potential to -22.5 and -18.2 mV, which, therefore, reduces the repulsive forces and stability [59]. Moreover, small AgNPs have lower zeta potentials than large AgNPs; thus, small particles have less electrostatic repulsion and more rapid aggregation. Surface charge is one of the most important factors of AgNPs toxicity. Recently, the antibacterial activity of positively and negatively charged AgNPs has been studied. Some studies demonstrated that positively charged AgNPs have higher bactericidal activity against all microorganisms compared with negatively or neutrally charged AgNPs [74,75]. El Badawy et al. found that positively charged BPEI-AgNPs were more toxic against bacteria compared with negatively charged citrate-AgNPs. This could be explained by the negative membrane charge of both gram-positive and gram-negative bacteria. Consequently, there is an electrostatic barrier between negatively charged citrate-AgNPs and bacterial membranes that limits interaction between the cell and nanoparticles, thus reducing toxicity [76,77]. Qiao et al. synthesized zwitterion-modified AgNPs, which could change their charge depending on the environmental pH. These AgNPs demonstrated a pH-dependent transformation of negative charge into positive charge. Therefore, the AgNPs were harmless to healthy tissue cells (pH = 7.4), while interacting effectively with negatively charged bacterial surfaces in foci of infection (pH = 5.5) [78].
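The zeta potentials quoted above are typically derived from electrophoretic mobility measurements via the Smoluchowski approximation, ζ = ημ/(ε0εr). The sketch below assumes water at 25 °C; the mobility value is an illustrative assumption, chosen only to show that a plausible mobility reproduces a potential close to the -32.5 mV reported above.

```python
# Smoluchowski approximation: zeta = eta * mu / (eps0 * eps_r),
# valid when the particle radius is much larger than the Debye length.

EPS0 = 8.854e-12       # vacuum permittivity, F/m
EPS_R_WATER = 78.5     # relative permittivity of water at 25 C
ETA_WATER = 0.89e-3    # dynamic viscosity of water at 25 C, Pa*s

def zeta_mV(mobility_m2_per_Vs: float) -> float:
    """Convert electrophoretic mobility to zeta potential in millivolts."""
    zeta_V = ETA_WATER * mobility_m2_per_Vs / (EPS0 * EPS_R_WATER)
    return zeta_V * 1e3

# An assumed mobility of about -2.5e-8 m^2/(V*s) corresponds to roughly -32 mV.
print(zeta_mV(-2.5e-8))
```

As a common rule of thumb, |ζ| above roughly 30 mV indicates good electrostatic colloidal stability, consistent with the high-pH case discussed above.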
Influence of biological conditions on the nanoparticles properties
Such factors as pH, presence of dissolved oxygen, electrolytes and organic substances, in particular, proteins affect significantly the physicochemical properties and antibacterial activity of silver nanoparticles [79,80].
Oxygen availability
The dissolution of the nanoparticles occurs in the presence of dissolved oxygen. The surface of AgNPs is easily oxidized by O2 and other molecules in ecological and biological systems, resulting in the release of Ag + and defining their toxicity [81]. However, silver nanoparticles do not dissolve completely in the presence of molecular oxygen. A complete dissolution requires a stronger oxidizer such as H2O2. The AgNPs aggregation in oxygenated water is 3-8 times faster than under anaerobic conditions, indicating that dissolved molecular oxygen can also significantly affect this process [45].
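The oxygen- and proton-dependent dissolution described above is commonly summarized by the following overall stoichiometry (a widely cited form; the source text does not write the equation explicitly):

```latex
2\,\mathrm{Ag_{(s)}} + \tfrac{1}{2}\,\mathrm{O_{2(aq)}} + 2\,\mathrm{H^{+}_{(aq)}}
\longrightarrow 2\,\mathrm{Ag^{+}_{(aq)}} + \mathrm{H_{2}O_{(l)}}
```

This makes explicit why dissolved O2 is required for Ag+ release, why a stronger oxidizer such as H2O2 accelerates complete dissolution, and why lower pH (more available H+) promotes dissolution.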
Interaction with proteins
It is well known that nanomaterials can interact with various biomolecules of living organisms, primarily with proteins, which are able to adsorb on the nanoparticle surface, forming the biomolecular corona [82]. About 300-500 human plasma proteins can bind to different nanoparticles. Nanomaterials are rapidly coated by proteins in physiological fluids. The formation of the protein corona leads to changes in the physicochemical properties of nanoparticles, including hydrodynamic size, surface charge, and aggregation [84]. The structure of the protein corona depends primarily on the nanoparticle material, size and surface properties, as well as on the composition of the protein environment and experimental/physiological conditions [84][85][86]. In addition, the protein corona contributes to the colloidal stability of the particle, preventing aggregation of the nanoparticles and protecting them from aggressive environmental conditions [87,88]. Functionalization of the silver nanoparticle surface significantly affects the formation dynamics of the protein corona. The presence of a protein coating on the surface of the nanoparticles strongly reduces the degree of protein binding. On the contrary, the formation of the corona on uncoated nanoparticles significantly improved their stability in biological environments. In vitro experiments showed that the physiological stability of AgNPs caused by corona formation may be directly associated with their cell binding, uptake and toxicity [89]. The interaction between nanoparticles and different cells is determined by the composition of the protein corona [90].
pH
The solution pH significantly affects the surface charge and oxidative dissolution of AgNPs. The behavior of nanoparticles differs under acidic and alkaline conditions. AgNPs are found to destabilize at acidic and neutral pH, which results in a higher aggregation rate. Under alkaline conditions, negatively charged hydroxyl ions promote the stabilization of nanoparticles [91]. Bélteky et al. found that AgNPs are more stable at alkaline and neutral pH values than under acidic conditions. An increase in the pH level leads to a higher degree of deprotonation of free organic functional groups and an increasing negative charge on the nanoparticle surface. This facilitates increasing electrostatic repulsion between particles and reduces the degree of aggregation [82]. Sivera et al. showed that gelatin-stabilized AgNPs have robust resistance to aggregation over a wide pH range (2 to 13). Moreover, these AgNPs demonstrated long-term stability against aggregation and maintained high antibacterial activity under environmental conditions for several months [92].
Electrolyte concentration
An increase in the electrolyte concentration leads to an increase in the AgNPs aggregation rate [43]. Stebounova et al. concluded that AgNPs aggregate in solutions with high ionic strength regardless of stabilization [79]. Bélteky et al. found that in solutions containing 50 mM NaCl, aggregation is slow and the size of AgNPs agglomerates does not change significantly. However, the addition of 150 mM NaCl induces rapid aggregation of AgNPs up to the micrometer size range. Sudden changes in aggregation occur due to an increase in the concentration of Na+, because in large quantities these ions can shield negatively charged surface groups, which provide electrostatic stabilization. With reduced repulsion, the particles form larger aggregates during collisions [82]. The presence of chloride ions in solution causes silver chloride deposition [53]. However, sulfide ligands significantly reduce the toxicity of AgNPs due to the generation of insoluble silver compounds [93]. Nanoparticle aggregation is very prominent in solutions with a high concentration of divalent cations, such as Mg2+ and Ca2+, due to the stronger neutralization of the surface charge. On the other hand, monovalent cations (K+ and Na+) can also enhance the aggregation of AgNPs, but they are much less effective in shielding the surface charge than divalent ones [43]. Finally, the effect of ionic strength on AgNPs aggregation is more significant for smaller particles [93].
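The charge-screening argument above can be quantified through the Debye length, the characteristic thickness of the ionic cloud around a particle. The sketch below uses standard electrolyte theory with textbook constants for water at 25 °C (an illustration, not a calculation from the cited papers) to show how raising NaCl from 50 mM to 150 mM compresses the double layer:

```python
import math

# Debye screening length: kappa^-1 = sqrt(eps0*eps_r*kB*T / (2*NA*e^2*I)),
# where I is the ionic strength in mol/m^3. A shorter Debye length means
# the particle's surface charge is screened over a thinner shell,
# weakening electrostatic repulsion and favoring aggregation.

EPS0, EPS_R = 8.854e-12, 78.5    # F/m; relative permittivity of water
KB, T = 1.381e-23, 298.0         # J/K; K
NA, E = 6.022e23, 1.602e-19      # 1/mol; elementary charge, C

def debye_length_nm(ionic_strength_mol_per_L: float) -> float:
    I = ionic_strength_mol_per_L * 1e3  # convert mol/L -> mol/m^3
    k_inv = math.sqrt(EPS0 * EPS_R * KB * T / (2 * NA * E**2 * I))
    return k_inv * 1e9                  # m -> nm

for c in (0.050, 0.150):
    print(f"{c*1e3:.0f} mM NaCl -> Debye length {debye_length_nm(c):.2f} nm")
```

Because ionic strength is I = ½Σcᵢzᵢ², a divalent salt such as CaCl2 contributes three times the ionic strength of NaCl at the same molarity, which is one way to read the stronger aggregating effect of Mg2+ and Ca2+ noted above.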
The main mechanisms of antibacterial activity
The main antibacterial properties of silver nanoparticles are provided directly by silver ions; in other words, the nanoparticles are a kind of transporter of the main active substance [97]. Moreover, AgNPs obtained by "green" synthesis are more toxic than AgNPs obtained by non-biological approaches [64]. AgNPs release silver ions that interact with the thiol groups of membrane proteins, which disturbs membrane integrity and thereby increases its permeability. AgNPs interact with the cytoskeletal protein MreB, which plays an important role in the survival and formation of the bacterial cell [98]. Active adhesion of silver ions to the cell membrane leads to a change in its charge (depolarization and desensitization), lysis of cellular components and rupture of organelles. Moreover, silver ions participate in transduction processes by dephosphorylation of tyrosine residues, initiating the launch of the bacterial cell apoptosis program [13,99].
Once in the cell, free silver ions interact with respiratory chain enzymes (dehydrogenases) and bind to functional electron donor groups (thiols, phosphates, imidazole, indoles, and hydroxyl groups), disrupting the ATP synthesis and K + transport [14,98,100].
In addition, the pathological effect of silver nanoparticles is induced by the formation of reactive oxygen species (ROS): the superoxide anion radical (O2•−), peroxide (O22−), the hydroxyl radical (•OH), hydrogen peroxide (H2O2) and hydroxyl ions (OH−). Different ROS have various pathological effects; for instance, OH−, H2O2 and O2•− have the greatest antibacterial activity. OH− interacts with positively charged cell membranes, while H2O2 has the greatest penetrating effect. A high concentration of ROS leads to a decrease in the concentration of glutathione, an increase in lactate dehydrogenase, and dysregulation of calcium channels, matrix metalloproteases and intracellular redox homeostasis [101]. In fact, ROS interact with the thioredoxin system of S. aureus, which is one of the most important disulfide reductase systems counteracting the processes of oxidative stress. Oligomerization and dysregulation of thiol-redox homeostasis due to the depletion of intracellular thiol lead to disruption of the protective components and to activation of oxidative stress [102].
Moreover, ROS interact with DNA, thereby triggering the processes of its modification. In addition to the indirect effect through oxidative stress, silver ions interact with sulfur and phosphorus groups of DNA, which leads to the rupture of hydrogen bonds between the chains; the processes of replication and reproduction are disrupted (i.e., genome splitting is observed) [103]. The unipolar charge of DNA and AgNPs leads to additional destabilization of the chains. Damage to DNA molecules occurs due to oxidation and alkylation of its bases, which leads to the formation of various compounds: 8-oxoguanine, 7,8-dihydro-8-oxoguanine, 8-oxoadenine, and purines unsubstituted or substituted with an imidazole ring. These compounds are integrated into DNA under the influence of hydroxyl radicals [103,104].
In addition to DNA, silver ions also interact with RNA and block the subunits of ribosomes (30S), which are necessary for binding tRNA [105,106]. Once in the cytoplasm, AgNPs induce ribosome denaturation and inhibition of translation, protein synthesis and carbohydrate metabolism [102,107].
The disturbance of the signal transduction processes is equally important because suppressing the phosphorylation of tyrosine residues results in blocking the cell cycle, the synthesis of exopolysaccharides and capsular polysaccharides, which ultimately leads to the interruption of the bacterial cell division [103].
Thus, the antibacterial activity of silver has a complex effect on the microorganism, disrupting various aspects of its vital activity.
Gram-negative bacteria are more sensitive to the effects of silver ions than Gram-positive bacteria, which is due to a thinner layer of peptidoglycans in the cell wall [108]. For example, S. aureus, a Gram-positive coccus with a cell wall width of 30 nm, can effectively prevent the inward penetration of nanoparticles due to the high affinity of the peptidoglycan layer [109,110].
The antibacterial effect may depend not only on the size of the nanoparticles (nanoparticles smaller than 10 nm have high penetrating power), but also on their physicochemical properties, in particular, surface characteristics. Thus, positively charged nanoparticles have a greater ability to bind to the negatively charged cell membrane due to electrostatic interaction, in contrast to negatively charged AgNPs. The higher the positive charge of the particles, the lower the electrostatic barrier [97,108,109,111].
Despite the fact that silver ions play a crucial role in disrupting the basic processes of microorganisms' activity, nanoparticles also have an antibacterial effect due to the mechanism of the "contact destruction".
The antibacterial properties of silver nanoparticles also depend on their shape. For example, spherical and triangular AgNPs have greater activity against E. coli and S. aureus than AgNPs of irregular shape, which may be due to a larger surface and increased release of silver ions [5,13]. However, Pal et al. elucidated that triangular AgNPs have a greater antibacterial effectiveness against E. coli compared to spherical and rod-shaped particles [38]. Moreover, E. coli is more sensitive to AgNP exposure than S. aureus regardless of AgNP size and surface properties [112].
There are published data indicating the ability of silver nanoparticles to suppress the formation of bacterial films on medical instruments without significant accumulation of silver ions in surrounding organs and tissues. The most promising direction is the approbation of this coating for catheters, drains and medical masks [5].
Moreover, numerous studies pinpoint the anti-inflammatory properties of AgNPs, which are associated with a decrease in pro-inflammatory cytokine synthesis (TNF, IL-12, IL-1, NF-κB) and the induction of apoptosis. In addition, the modulating role of silver in the process of wound and periodontal healing has been discovered [104].
Thus, the main antibacterial mechanisms of AgNPs (Figure 3) are as follows:
1. Interaction with the membrane, impairment of its permeability and change in its charge.
2. Disturbance of intracellular processes and organelles dysfunction.
3. Inhibition of mitochondrial processes (respiratory chain malfunction), activation of oxidative stress, synthesis of reactive oxygen species and lipid peroxidation.
4. Interaction with DNA and RNA, blocking replication, transcription, and translation.
5. Inhibition of signal transduction due to dephosphorylation of tyrosine residues.
The main mechanisms of antiviral activity
In vitro, AgNPs could adhere to the virus surface, bind the viral ligand and therefore prevent its spreading in cell cultures (Figure 4a, 4b). Lv et al. indicate the ability of silver nanoparticles to inhibit TGEV (transmissible gastroenteritis virus in pigs) induced apoptosis by binding to the S glycoprotein. The suppression of the proapoptotic pathway is one of the possible mechanisms because TGEV causes an increase in the concentration of the protein BAX [114]. Based on the published data, there is an assumption that the nanoparticles can also be effective against the novel coronavirus infection (COVID-19) because they have the possibility to interact with the spike glycoprotein and to decrease the pH of the respiratory epithelium, which would effectively prevent intracellular invasion of the virus (Figure 4c) [98,115]. There is a description of the ability of AgNPs to bind the G-protein of the respiratory syncytial virus in the HEp-2 cell culture. AgNPs coated with polyvinylpyrrolidone inhibited the reproduction of the respiratory syncytial virus by 44%, while silver nanoparticles coated with biomolecules and RF-112 did not have a visible effect on the pathogen's life cycle [104,116].
In addition to the binding to surface proteins, AgNPs interact with the viruses' nucleic acids (for instance, of the hepatitis B virus) and disrupt their replication in the host cells [103]. The inhibition of HBV RNA/DNA synthesis and creation of extracellular virions are observed in the human hepatoma HepAD38 cell line [117].
Biologically synthesized AgNPs of small size (<20 nm) have greater effectiveness against herpes simplex virus type 1/2 and human parainfluenza virus type 3 due to a more pronounced blocking effect on the interaction between the virus and the cells of the Vero cell line. Nanoparticles are able to interact non-covalently with the thymidine kinase ligand, thereby suppressing the activity of herpesviruses. In addition to inhibition of HSV type 1 and 2, there is an inhibition of oncogenic herpesviruses, for example, the Epstein-Barr virus [13,120]. Moreover, AgNPs block the interaction of herpesviruses with heparan sulfate proteoglycans of cell membranes, preventing invasion processes. These properties could be enhanced by combining AgNPs with tannic acid [121,122].
Particles smaller than 10 nm can prevent the spreading of the influenza virus in the MDCK cell culture [123]. In 2017, Lin et al. published data about the activity of combination treatment with zanamivir and AgNPs against the H1N1 influenza virus in the MDCK cell line. This combination not only demonstrated high thermodynamic and kinetic stability, but also effectively suppressed the replication of the influenza virus by regulating neuraminidase activity [124].
There are published data about the ability of silver nanoparticles to interact with HIV and significantly inhibit its reproduction by binding to the disulfide bond regions (sulfur-containing residues of the CD4-binding domain) of gp120 [125,126]. Moreover, silver nanoparticles could prevent infection of cervical tissue with HIV type 1 without a cytotoxic effect in vitro [127].
The main mechanisms of antifungal activity
The mechanisms of AgNPs antifungal activity are not fully understood. There is a suggestion that the interaction of nanoparticles with the membrane leads to disruption of its function (the flow of transmembrane ions, including protons) and of division processes (especially in yeast). Also, AgNPs induce an inhibition of germ tube formation, biofilm growth and secretion of hydrolytic enzymes [130,131]. Beyond the membranotoxic effect, silver nanoparticles could initiate a cascade of intracellular pathological processes leading to the death of the fungus: oxidative stress, interaction with thiol groups and phosphorus-containing molecules, blocking of protein synthesis. Thus, the fundamental antifungal mechanisms are similar to the antibacterial ones (Figure 5a) [132]. The majority of the published data was acquired by studying the activity of AgNPs against plant pathogens and mold fungi [133]. However, AgNPs have antifungal properties against pathogenic fungi that cause dermatophytosis: Candida sp. (including Candida albicans) and Trichophyton mentagrophytes [134,135]. Even amphotericin-B-resistant strains of Candida glabrata were found to be sensitive to nanoparticles [136]. Silver nanoparticles also have a biocidal effect on biofilms formed by Candida spp. [137].
There are data on the inhibitory effect of particles on fungal keratitis pathogens (Fusarium spp., Aspergillus spp., Alternaria alternata) in vitro. At the same time, AgNPs had a potentially greater antifungal effect than the antimycotic drug natamycin [140].
AgNPs can enhance the effect of antifungal drugs. There are data about the synergy of silver nanoparticles and ketoconazole against the main cause of seborrheic dermatitis, Malassezia furfur. Combination therapy leads to a decrease in the frequency of drug use and the frequency of relapses not only of seborrheic dermatitis, but also of other malasseziosis [141]. Also, AgNPs suppress the growth of different pathogenic Aspergillus species (Aspergillus niger, Aspergillus flavus, Aspergillus fumigatus), which can cause a wide range of diseases. Due to this effect, silver nanoparticles can play an important role in the treatment of aspergillosis, especially in patients with drug-resistant strains [142]. It is worth mentioning that Aspergillus niger can be used to synthesize AgNPs (one of the directions of biological synthesis). The synthesized nanoparticles have the ability to inhibit the growth and development of Allovahlkampfia spelaea, which causes resistant keratitis [143].
Thus, silver nanoparticles can play an important role in the treatment of fungal infections, especially given the scarcity of antifungal drugs and the increasing number of drug-resistant species (Figure 5b). The combination of nanoparticles and drugs can boost the effect of the latter and have an independent fungicidal action in the case of multiple resistance of pathogenic fungi. AgNPs can also be used to prevent the spreading of mold on different surfaces [133]. However, the inhibition of fungal growth by silver nanoparticles is less prominent than that of bacteria, which may be due to the presence of chitin, rather than peptidoglycans, in the fungal wall [142].
Mechanisms of antiparasitic activity
There are reliable data confirming the antiparasitic properties of silver nanoparticles, in particular, against the causative agent of cutaneous leishmaniasis (Leishmania tropica) (Figure 6a). AgNPs bind to the sulfo- and phosphorus-containing membrane and DNA proteins, blocking DNA synthesis and activating oxidative stress. AgNPs have an anti-promastigote effect because of blocking the proliferation of promastigotes. In addition, AgNPs suppress the vital activity of amastigotes and reduce their survival in infected host cells. Anti-amastigote properties are boosted by additional exposure to ultraviolet light. Probably, when exposed to ultraviolet light, there is an increase in the concentration of monosulfide radicals, which are formed from complexes. These complexes are formed by the interaction of silver ions and cysteine groups of parasitic proteins [144,145]. Apart from Leishmania, the antiparasitic activity of silver nanoparticles was observed against oocysts of Entamoeba histolytica, Cryptosporidium parvum and several other protozoa [131,149]. For example, in the study of Costa et al., biogenic AgNPs inhibited the replication of Toxoplasma gondii (which causes toxoplasmosis) in cell cultures, such as BeWo, HTR-8/SVneo, HeLa and in villous explants. Moreover, the nanoparticles induced the secretion of inflammatory cytokines in cells, for example, in the BeWo line: IL-4 and IL-10; in the HTR-8/SVneo line: IL-4 and the macrophage migration inhibitory factor (MIF). Toxoplasma gondii increased the MIF concentration in the BeWo cell culture and IL-6 in the HTR-8/SVneo line. In villous explants the synthesis of IL-4, IL-6 and IL-8 decreased after infection. In the HeLa cell line an increase in the NO concentration and oxidative stress and a reduction of pro-inflammatory cytokines, in particular IL-8, were observed. Thus, silver nanoparticles can significantly inhibit the spread of T. gondii without inducing dysfunction of the host cells (Figure 6b) [150,151].
In addition to the cell cultures and chorionic villi, AgNPs suppress toxoplasma replication in liver and spleen tissues [152]. One of the possible antiparasitic mechanisms includes the suppression of mitochondrial function, disturbing the mitochondrial membrane potential and redox signaling, and the destruction of leucine aminopeptidase (LAP) [153].
The current data play an important role in the development of alternative approaches to the treatment of toxoplasmosis, particularly in pregnancy, because standard drugs used for the treatment of this pathology have teratogenic and myelosuppressive properties. Moreover, toxoplasmosis is part of the TORCH complex, which includes a group of intrauterine infections that lead to impaired fetal development and even death.
The results of Younis et al. demonstrated the effectiveness of AgNPs against Blastocystis hominis, the causative agent of blastocystosis. The most prominent antiparasitic effect in vitro was observed with a combination of particles and metronidazole. The concentration of B. hominis decreased by 71.69% in the metronidazole group, by 79.67% in the AgNPs group and by 62.65% in the combination therapy group (AgNPs + metronidazole) after 3 hours (p<0.05). The nanoparticles are likely to interact with and modify glycoprotein and lipophosphoglycan molecules on the parasite surface; they may induce oxidative stress and ROS synthesis and inhibit DNA replication [154].
There are data on the possible scolicidal action of nanoparticles against Echinococcus (Figure 6c). Moreover, AgNPs have a synergistic effect with albendazole and are able to prevent the development of adverse reactions in the liver associated with this drug. For example, they decrease the severity of necrosis and steatosis, and reduce the levels of transaminases and IFN-γ. Combination therapy is associated with a greater degree of structural changes in echinococcal cysts (reduction of cyst size and cyst mass) [155,156].
Additionally, silver nanoparticles are used for the development of new directions in the treatment of giardiasis (Giardia lamblia). The combination with chitosan and curcumin leads to the complete eradication of giardia in the intestine and feces of rodents without the development of adverse reactions [157]. Studies on the use of AgNPs for the treatment of tropical malaria (Plasmodium falciparum) are ongoing [158]. Moreover, silver nanoparticles could be used in ophthalmology for the prevention of Acanthamoeba adhesion on contact lenses (amoebic keratitis prophylaxis) [159].
Main mechanisms of anticancer activity
Silver nanoparticles are promising for developing and modifying approaches to antitumor therapy. AgNPs can have a cytotoxic effect on tumor cells with subsequent suppression of the pathological process. The decreased lymphatic outflow in malignant tumors allows nanoparticles to accumulate and act longer [160]. Moreover, tumor cells absorb AgNPs (by endocytosis) to a greater extent than normal cells [2].
The main mechanisms of anticancer activity are the induction of oxidative stress, changes in cellular morphology and activation of pro-apoptotic processes (caspases 3 and 9, regulation of p53, p38 MAPK, HIF-1α, an increase in BAX concentration and a decrease in Bcl-2 concentration) [161]. Apart from apoptosis, necrosis and autophagy are also activated in cancer cells through the stimulation of autophagosome formation via the PtdIns3K signaling pathway. Besides the direct pro-apoptotic effect, there is also an indirect activation of apoptosis through oxidative stress and the synthesis of pro-inflammatory cytokines (IL-6). An increase in the concentration of TNF-alpha and the nuclear factor NF-κB contributes to the activation of pro-inflammatory processes in the tumor cell [162]. The increased concentration of oxygen radicals and significant depletion of glutathione lead to the dysfunction of mitochondria and the NADP/NAD system, impaired permeability of the outer mitochondrial membrane, destruction of the respiratory chain, blocking of ATP synthesis and release of cytochrome C into the cytosol, which are important activating factors of caspase 3 (through Apaf-1) and caspase 9 (Figure 7) [3,101,[163][164][165].
Besides inhibiting mitochondrial activity, AgNPs affect the structural and functional characteristics of DNA. AgNPs provoke DNA methylation, increasing the number of chromosomal aberrations and malfunction of the repair system. For example, they cause the downregulation of the proliferating cell nuclear antigen, i.e. the clamp of DNA polymerase, which plays an important role in the synthesis and reparation of DNA. As in bacterial cells, the released silver ions are able to disrupt the hydrogen bonds between the DNA bases, which leads to disorganization [3]. The activation of c-Jun NH2-terminal kinase (JNK) is an additional factor in DNA fragmentation and the formation of apoptotic bodies [162].
Moreover, the action of AgNPs is characterized by the disturbance of metabolic processes and sensitization of the tumor cell, increasing its sensitivity to antitumor drugs, in particular to 5-fluorouracil, due to a modulating effect on the expression of uracil phosphoribosyltransferase, which ultimately leads to active induction of apoptosis [166]. In addition, there are data on the pharmacological synergism of AgNPs and doxorubicin [167].
In lung fibroblasts and glioblastoma cells AgNPs induce different processes, such as metallothionein upregulation, downregulation of the actin-binding protein filamin, and cell cycle arrest in the G2/M phase [2,168]. Biosynthesized AgNPs can block the cell cycle in the G1 phase. One possible mechanism of the cell cycle arrest is the downregulation of cyclin B and cyclin E, whose normal functioning is of paramount importance for division processes [162].
AgNPs have antiangiogenic properties. In particular, AgNPs inhibit the growth of blood vessels in the tumor, limiting its progression. Probably, this effect is associated with the blocking of VEGF (vascular endothelial growth factor) and angiogenic FGF-2 (fibroblast growth factor 2) synthesis, as well as the inhibition of signal transduction through the phosphorylation of the KDR tyrosine kinase (VEGFR-2) and PI3K/Akt [169][170][171].
Another mechanism of inhibiting cancer cell proliferation, vascular growth, and tumor progression implies a disruption of signaling transduction by suppressing the effects of hypoxia-induced factor-1a (HIF-1α) and matrix metalloproteases. Active growth and progression of the tumor are accompanied by insufficiently active formation of the vascular network, which leads to the cell hypoxia and, consequently, an increase in the concentration of HIF-1α. HIF-1α regulates the expression of genes responsible for cellular activity: division, growth, and angiogenesis [172]. Matrix metalloproteases have similar functions. Resistance to the therapy is often accompanied with high activity of these signaling pathways, so their blocking plays a potentially important role in the modification of contemporary treatment approaches [3].
The antitumor effect of nanoparticles can be enhanced by coating with nanocarriers, for example, chitosan. Chitosan-coated nanoparticles have a greater inhibitory effect and cause apoptosis at a lower concentration than uncoated AgNPs [2,173]. Smaller nanoparticles (10, 20 nm) have the strongest antineoplastic effect compared to large AgNPs (100 nm), which may be due to their greater penetrating capacity. Moreover, charge and electrostatic interaction also affect internalization. For example, positively charged particles penetrate more quickly and have greater cytotoxicity than particles with a neutral or negative charge.
A positive charge allows interaction both with the negatively charged membrane and with albumin, which leads to the formation of a protein shell. The "protein corona" allows AgNPs to enter cells through receptor-mediated endocytosis with further implementation of cytotoxic and genotoxic effects [75,174,175].
There are published data on the antitumor activity of AgNPs against a wide spectrum of oncological diseases in vitro: blood cancer (acute myeloid leukemia), breast cancer, hepatocellular carcinoma, osteosarcoma, lung cancer, melanoma of the skin and mucous membranes, squamous cell carcinoma of the skin, colon cancer, cervical cancer, prostate cancer, adenocarcinoma of the stomach, bladder cancer and pancreatic cancer [1,3,13,[176][177][178][179].
Potential toxicity of silver nanoparticles
Despite the data indicating the broad therapeutic potential of AgNPs, it is important to assess their toxicity, since the main biochemical effects of nanoparticles have no biological selectivity, and nanoparticles can interact with the cells of the macroorganism. Studying toxic effects allows us to determine the therapeutic properties of drugs containing AgNPs and to minimize adverse side effects. The toxic properties of AgNPs depend on their size, shape, surface features (negatively charged particles are less toxic), stabilizing agent and coating. Moreover, local environmental factors play an equally important role in toxicity: the strength of the ionic interaction, the presence of ligands, macromolecules and bivalent cations, as well as pH parameters [74,182]. Toxic effects of nanoparticles have been studied either in vitro on cell cultures or in vivo on rodents. Different methodologies and approaches make it difficult to determine common parameters and characteristics of AgNPs toxicity. Pharmacokinetic studies show that AgNPs have a wide distribution in organs and tissues (lungs, CNS, kidneys, heart, liver, spleen, etc.), independent of the route of administration. The clearance of nanoparticles can vary from 17 days to 4 months. In tissues protected by natural physiological barriers, for example the brain, excretion of silver proceeds more slowly (up to 260 days), which creates additional conditions for its accumulation [183,184].
Excessive accumulation of AgNPs leads to disruption of cell activity in different organs and systems: skin (argyria, contact dermatitis), respiratory system (bronchitis, alveolitis, fibrosis, provocation of bronchial asthma exacerbations), visual system (conjunctivitis, argyrosis), gastrointestinal tract (hepatobiliary and intestinal dysfunction), immune system (dysregulation of cytokine synthesis and cell function), CNS (cognitive impairment, Alzheimer's disease, epileptic seizures), urinary system (acute tubular necrosis, glomerular dysfunction), cardiovascular system (bradycardia, AV block, ventricular arrhythmias) (Figure 8) [3,161,185]. The possible accumulation of nanoparticles in the reproductive system and damage to the germ cell structures were discovered by studying the pathological effects of AgNPs in mice. Apart from the metabolic disturbances in germ cells and the reduction of female oocyte fertility, dysfunction of Leydig and Sertoli cells in males, which leads to infertility and a decrease in testosterone synthesis, was observed [3]. An embryotoxic effect was detected in mice and zebrafish. This effect depended on the size and coating of the AgNPs. Smaller AgNPs of 20 nm have greater toxicity than large particles of 110 nm, and polyvinylpyrrolidone-coated particles are more toxic than citrate-coated AgNPs [186]. AgNPs also have a genotoxic effect due to chromosome damage, oxidative stress, and interaction with DNA [185,187]. Figure 8. Main spectrum of AgNPs toxicity. AgNPs have a wide spectrum of toxicity, which has been studied in mice. The collected data made it possible to predict the different consequences of nanoparticle accumulation in various organs and systems. Reproduced from ref. [180].
The permeation of silver ions and their interaction with oxygen or sulfur, which induce a pathological biochemical cascade in the cell, are considered the main mechanism of potential AgNPs cytotoxicity. It is believed that the formation of Ag+ plays a key role in the activation of lysosomal acid phosphatases, dysfunction of the actin cytoskeleton, inhibition of Na+/K+-ATPase, stimulation of apoptosis (through the proteins p53, Akt, p38), induction of oxidative stress, depletion of glutathione and damage to cellular components [101,185,188]. However, according to another hypothesis, the pathological effects are caused by mechanisms of metal ion toxicity that can be found for various nanoparticles [189]. Besides the direct entry into the cell through the membrane (diffusion, endocytosis), it is supposed that AgNPs may penetrate through ion channels (the "flip-flop" mechanism) and by the "Trojan horse" approach. The latter consists of phagocytosis and further induction of biochemical changes in active cells by silver ionization in the cytosol [101]. It is worth mentioning that the most prominent morphological changes are observed in the liver, lungs, and kidneys, which may be due to the greatest participation of these organs in the clearance of nanoparticles [190].
AgNPs were found to induce oxidative stress in rat liver cells due to disturbance of metabolism and mitochondrial malfunction. These changes resulted in focal liver necrosis, spleen edema and apoptosis in the thymus cortex [185,191]. Exposure to the most effective small nanoparticles (10 nm or less) caused much more prominent pathological shifts [192]. The excretion of silver from the body is mostly carried out by the hepatobiliary system (more than 50%), which may be associated with more severe hepatotoxicity because of the greatest accumulation in hepatocytes, Kupffer cells and sinusoidal endotheliocytes [193]. The kidneys are the second important excretion system. Silver nanoparticles accumulate in all structural components of the cortex and medulla despite a low urinary excretion fraction (less than 0.01%) [161,193].
Beyond hepato- and nephrotoxicity, silver particles may provoke pathological changes in the intestinal wall despite a rather low level of absorption in the intestine, ranging from 0.12% to 0.88%, which can be caused by the binding of nanoparticles to undigested food [193]. Nevertheless, oral administration of silver nanoparticles in mice causes destruction of epithelial villi and glands. The resulting bowel dysfunction causes weight loss [194].
In addition, nanoparticles have a toxic effect on the auditory system and retina due to the activation of oxidative stress in the mitochondria, which leads to the loss of hearing and vision [3,190].
The accumulation of AgNPs in the central nervous system leads to disorganization of the cytoskeleton, activation of neuroinflammation and an increase in the insoluble beta-amyloid concentration. These processes are important pathophysiological features of Alzheimer's disease. Thus, it is possible to conclude that AgNPs can induce the development of neurodegenerative disorders, particularly due to low clearance and pathological accumulation [195].
Positively charged silver nanoparticles are known to have toxic effects on myocardial INa and IK1 channels, which leads to a significantly increased risk of severe bradycardia [196]. Prolonged inhalation of AgNPs causes a reduction of tidal volume and enhancement of inflammatory processes in the bronchopulmonary system [131,197].
Thus, it is necessary to study and compare different AgNPs in order to determine the possibilities of synthesizing less toxic nanoparticles with the strongest therapeutic effect. The study of the pharmacodynamic and pharmacokinetic properties of AgNPs to prevent pathological accumulation is equally important. Moreover, it is necessary to conduct further research devoted to the direct comparison of coating agents and to selecting the optimal coating approach, because coating and stabilization have been proved to have a huge modifying effect.
Conclusion
Given the wide range of biological, physical, and chemical properties of silver nanoparticles, a potential role of this compound in clinical medicine may be suggested. Approbation and usage of AgNPs are highly promising, especially in the era of growing antimicrobial resistance. In addition to activity against pathogenic bacteria, viruses, protozoa and fungi, silver nanoparticles are able to inhibit the activity of tumor cells or play the role of drug carriers in the structures of malignant neoplasms in order to increase the effectiveness of chemotherapy [198]. Despite the promising results of the studies, most of them are conducted mainly in cell cultures or mice. Therefore, the pharmacodynamics and pharmacokinetics of silver nanoparticles in humans have not been fully studied, since the reliable data on possible multisystem toxicity could restrict performing these kinds of studies. Moreover, silver nanoparticles are among the most toxic nanocompounds. Therefore, it is of tremendous importance to develop and evaluate potential AgNPs antidotes, such as sulfides [199]. A careful selection of the minimum toxic and at the same time the most effective dose of AgNPs is an important aspect of planning clinical and paraclinical trials. The nanoparticles synthesized by the "green synthesis" methods are likely to have less toxicity and a wider biological spectrum against microorganisms, which makes this approach preferable to chemical or physical synthesis [200]. Further studies of different properties of nanoparticles are needed to determine the optimal concentration, shape, structure and enveloping substance in vivo. This may be achieved by evaluating the dose-response parameters via comparison of the equivalent values with the results from animal studies and the extrapolated potential effects on humans [188].
Thus, it can be concluded that the integration of nanotechnologies, AgNPs in particular, for various medical purposes is highly promising due to the clear biocidal effects. However, the lack of unified methodological approaches and the inconsistencies in the data leave a wide field for further research and development of unified algorithms to prevent biases, trial heterogeneity, inaccurate data processing and compilation.
Limitations
This article is a narrative review, which entails a more relaxed literature search strategy and a subjective selection of sources (in contrast with PRISMA-guided searches performed independently by two or more authors), limiting comprehensive data extraction and strict study selection. The lack of strict prespecified criteria makes heterogeneity among the included studies likely (selection bias), which precludes generalizing the results and performing a statistical summary effect assessment. All studies were carried out in vitro (cell lines) or on mice/rats, which makes it difficult to extrapolate the obtained results to humans and assess a potential minimal clinically important difference. There is a high demand for systematic reviews and meta-analyses following the PRISMA guidelines for every particular topic of the current review, including extensive analysis of the published/unpublished literature in line with evidence-based practice. Such an approach could improve the accuracy and understanding of the research objective for the medical application of silver nanoparticles.
Supplementary materials
No supplementary materials are available.
Funding
This research had no external funding.
“Red Vienna” and the rise of the populist right
While research on the spatial variation in populist right voting focuses on the role of “places left behind”, this paper examines the spatial distribution of populist right voting in one of the fastest growing capital cities of Europe, Vienna. Combining detailed electoral data of the 2017 national elections at the statistical ward level and the location of municipal housing units, the paper examines why the populist right “Austrian Freedom Party” (FPOE) performs better in the former bulwarks of socialism, in the municipal housing areas of “Red Vienna”. The paper links the socio-demographic development of Vienna and its municipal housing policy with election results and explores three possible reasons for elevated FPOE shares in municipal housing areas: rising housing costs pushed an increasing number of socially and economically vulnerable residents into the municipal housing sector and so increased the FPOE voter pool in those areas; European Union accession and changes in regulation allowed foreign citizens to apply for and obtain municipal housing flats, triggering a backlash from Austrian municipal housing residents; and municipal housing is located in disadvantaged neighbourhoods, further enhancing the FPOE voter pool. The paper demonstrates that higher FPOE vote shares in areas with high municipal housing shares are due primarily to higher shares of formally less educated residents and to neighbourhood context, and that they are marginally elevated in those municipal housing areas experiencing a larger influx of foreign residents.
Introduction
While research on populist movements has long been the domain of historians and a selected group of political scientists (Mudde, 2016;Mudde and Kaltwasser, 2013), the recent surge of populist right parties and agendas in European countries injected new vigour into the debate on the drivers of populism (Mair, 2013;Mouffe, 2018;Mudde, 2016;Müller, 2016). The surprising outcome of the Brexit (the withdrawal of the United Kingdom from the European Union (EU) and the European Atomic Energy Community on 31 January 2020) vote, Donald Trump's presidency of the United States (20 January 2017-20 January 2021) and rapidly rising vote shares of populist parties in Europe during recent election cycles have compelled economists and economic geographers to examine the link between changing economic conditions for people and regions and the rise of populism (Autor et al., 2017;Dijkstra et al., 2020;Essletzbichler et al., 2018;Gordon, 2018;Rodriguez-Pose, 2018;Rodrik, 2018). They argue that uneven regional development, the focus on striving core cities and neglect of the "places left behind" (Rodriguez-Pose, 2018) have produced a "geography of discontent" (Dijkstra et al., 2020;McCann, 2018) where people feeling abandoned by ruling elites vote against traditional and for populist parties and candidates. Economic geographers thus complement the work of political scientists and their focus on individual characteristics to account for the impact of regional differences in economic fortunes on rising populist vote shares. This work focuses on between-region differences, differences between rural and urban areas and between structurally declining and expanding areas (Rodriguez-Pose, 2018), but glosses over differences in the vote within regions and cities.
This paper contributes to this literature as follows: the paper examines the variation of the populist radical right vote shares in 1290 statistical wards in the city of Vienna. Focusing on a city "not left behind" illustrates that regional decline is neither a necessary nor a sufficient condition for rising populist vote shares and that urban growth may generate a different set of challenges and resulting "geographies of discontent"; and in addition to the well-known individual and geographical variables influencing voting behaviour, the paper examines whether increasing pressure on welfare services through immigration influences populist radical right voting shares (Cavaille and Ferwerda, 2017;Rodrik, 2018). More specifically, the paper examines if voters perceive increased immigration from non-EU countries as competition for a specific local welfare service, municipal housing, that may prompt them to vote for the populist radical right "Austrian Freedom Party" (FPOE). Vienna's high share of residents in municipal housing units, legal changes in access criteria for municipal housing and the rapid increase in foreign nationals over the last 20 years make Vienna an interesting case study of the role of competition for welfare services to explain within-city variation in the populist vote.
The paper is structured as follows: the second section reviews the literature on the economic and cultural drivers of rising populist vote shares, the geography of discontent as well as recent literature on the role of immigration into social welfare states as possible explanatory variable for the rise of the populist right; the third section will briefly summarize recent demographic, political and legal changes, the impact on the Vienna housing market and its implications for the municipal housing sector. In particular, accession to the EU forced the city of Vienna to open up its municipal housing sector to EU and third country nationals apparently producing conflict between the "native" Austrian residents and "foreign" newcomers; the fourth section discusses the methodology and empirical operationalization of the theoretical drivers of populist right voting; the fifth section provides the empirical results; and the sixth section concludes the paper.
Populism and the geography of discontent
Explanations for the decline in centre party and rising populist party vote shares fall into those emphasizing the demand for populist parties, the supply of populist political programmes, messages and parties and the institutional context in which those parties operate (Mudde, 2007). The empirical focus of the literature is on demand side explanations separating into those emphasizing the role of economic and those emphasizing the role of cultural changes as the main cause for the rising demand of populist agendas (Inglehart and Norris, 2016).
One influential argument is based on the idea that recent economic developments and processes such as skill-biased technological change, globalization, de-industrialization, rising inequality and an increasing share of insecure and precarious work practices generated a new group of Modernisierungsverlierer (losers of modernization). These Modernisierungsverlierer include the low-skilled, poorly educated, blue collar working class whose jobs are under threat from digitalization, outsourcing and competition from cheap immigrant labour (Autor et al., 2017;Betz, 1994;Inglehart and Norris, 2016;Rodrik, 2018). Facing unemployment, the fear of unemployment or deteriorating economic conditions in the form of declining absolute or relative wages, unskilled or low-skilled workers are expected to vote against centre parties and for populist parties (Eribon, 2009;Golder, 2016).
Rather than focusing on the economic impacts of recent changes to explain the rise in the populist vote, a second set of explanations interprets the rise of the populist right vote as a cultural counter-revolution against the rise of progressive values including increased tolerance towards same-sex marriage, lesbian, gay, bisexual, transgender, queer (or sometimes questioning) and others' rights, ethical norms, open-mindedness towards migrants, refugees, foreigners, multicultural life-styles, food and travel, cosmopolitan support for international cooperation, humanitarian assistance, multilateral agencies, environmental concerns, race and gender equality and human rights (Inglehart, 1990, 1997;Inglehart and Norris, 2016;Norris and Inglehart, 2009). Those in favour of progressive social change and humanistic values tend to be economically secure and better educated. Traditional values are supposedly held by the old, men, and those with little formal education, the group that also tends to lose economically. In this view, xenophobia, hostility and intolerance towards migrants, ethnic, religious and racial minorities are only part of a wider cultural backlash against social and cultural change. Empirically "culture" is measured as educational attainment, trust in governance, anti-immigrant sentiments, authoritarian values or right-wing ideology (Inglehart and Norris, 2016).
As economic and cultural changes interact and overlap, it is difficult empirically to reduce explanations of the rise in the populist vote to either economic or cultural explanations. What emerges from the empirical literature is that voters with relatively low levels of formal education (Essletzbichler et al., 2018;Gordon, 2018;Hobolt, 2016;Lee et al., 2018), older workers (Ford and Goodwin, 2017;Goodwin and Heath, 2016;Gordon, 2018;Hobolt, 2016;Rodrik, 2018) and those on lower incomes (Ford and Goodwin, 2017;Goodwin and Heath, 2016;Hobolt, 2016;Rodrik, 2018) and/or unemployed (Los et al., 2017) tend to vote for populist right parties for economic and/or cultural reasons. Furthermore, recent work on populism illustrates that explanations need to be formulated in the institutional contexts in which elections unfold. One of those institutional contexts entering explanations of populist voting is differences in the role and structure of national welfare systems (Cavaille and Ferwerda, 2017;Manow, 2018;Rodrik, 2018). Rodrik (2018) argues that deteriorating economic conditions of the (native) working class coupled with perceived competition from immigrants for increasingly scarce welfare services generate existential fears that can be exploited by populist radical right parties to mobilize against immigrants to increase their vote share. According to Rodrik (2018), national differences in welfare state systems thus explain the rise of right rather than left populism in countries with generous and easily accessible welfare services such as those of Central and Northern European welfare states. Once immigrants are able to enter these countries legally, they gain access to those services. The Austrian populist radical right Freedom Party (FPOE) stokes those fears and has a history of exploiting "welfare chauvinism" as part of its anti-immigration party program (Fallend, 2013;Marquart, 2013;Pelinka et al., 2008).
While political scientists and economists have focused on the temporal changes of populist voting at the individual or nation state scale, the spatial variation in the Brexit vote, the election of Donald Trump as president of the United States in 2016, and persistent regional variation in the populist vote in Italy and France reignited the interest of geographers (Agnew and Shin, 2017, 2020;Dijkstra et al., 2020;Essletzbichler et al., 2018;Gordon, 2018;Lee et al., 2018;Los et al. 2017;McCann, 2018;Rodriguez-Pose, 2018). Political geographers have long argued that voting preferences do not form in a vacuum but are shaped by highly localized social networks (Agnew and Shin, 2020;Zuckerman, 2005) such that local social and cultural practices that have evolved over time exert a locally specific influence on the formation of individual voting preferences. Building on the theoretical work discussed above, economic geographers have complemented the individual level drivers of the populist vote with region-specific characteristics expected to influence the regional vote shares of populist parties and discontent with globalization and/or the EU. McCann (2018) coined the term "geography of discontent" to describe the dissatisfaction experienced by people who live in stagnating or declining regions (usually rural or declining old industrial regions) offering few opportunities and development prospects, which pushes them to vote against the established parties and for populist parties (Rodriguez-Pose, 2018). A number of territorial factors have been identified to account for rising populist vote shares. Immigration has been mobilized by populist right parties to exploit economic and cultural cleavages. On the economic side of the argument, the fear of job loss to immigrant workers or increased competition for increasingly scarce social welfare services has been brought forward (Autor et al., 2017;Dörre, 2016;Manow, 2018;Rodrik, 2018).
On the cultural side of the argument, the increase in the share of Muslim and Roma migrants is associated with a loss or dilution of local and/or national identity and traditions prompting native populations to vote for populists (Ford and Goodwin, 2017;Hobolt, 2016). In addition to immigration, long-term economic, industrial and/or population decline have been identified as important territorially-specific explanatory factors of populist vote shares (Dijkstra et al., 2020;Essletzbichler et al., 2018;Lee et al., 2018;Rodriguez-Pose, 2018;Rodriguez-Pose et al., 2020).
Geographical differences in the populist vote shares are then expected to be the result of the unequal geographical distribution of individuals with different characteristics (compositional effects) and differences in the spatial context in which those individuals reside (contextual effects). First, the educated, young and high-skilled professionals shown to be less likely to vote for populist parties may be overrepresented in cities while the unskilled, uneducated, old and unemployed unable to move to dynamic places are "stuck" in the countryside and old industrial regions. 1 Second, after controlling for those compositional effects, voters in regions characterized by population decline, economic decline or industrial decline have been shown to vote for populist parties and agendas (Dijkstra et al., 2020;Essletzbichler et al., 2018;Rodriguez-Pose, 2018). Geographers focus on regional differences, on urban-rural, North-South comparisons, on "places left behind" and those surging ahead (Agnew and Shin, 2017, 2020;Essletzbichler et al., 2018;Rodriguez-Pose, 2018). The work is important but tends to ignore the drivers and underestimate the extent and spatial variation of the populist radical right vote in large cities "not left behind". This paper attempts to fill this gap and exploits detailed information at the ward level in one of the fastest growing European capital cities, Vienna. The analysis of the populist vote in a fast-growing city such as Vienna focuses our attention on the political-economic challenges of growth rather than decline and enables us to explore the drivers of radical right populism in an urban context. The main challenges of rapid urban growth are rising housing costs and pressure on the adequate supply of local amenities and welfare services (Cavaille and Ferwerda, 2017). Populist radical right parties can take advantage of resulting rising discontent if they are able to link possible negative effects of growth with rising immigration.
"Red Vienna", municipal housing and the rise of the populist right vote
Vienna appears, at first, an unlikely candidate for electoral success of populist right parties feeding on existential insecurity. The city prides itself on topping Mercer's "most livable city" table 2 for years in a row and its long history of municipal socialism endowed it with 220,000 municipal housing flats providing affordable accommodation for more than a quarter of Vienna's population. The city offers an outstanding public transportation system for one Euro a day, good public health care, a relatively equal distribution of income and a diverse economy offering jobs to the skilled and unskilled. Nevertheless, the FPOE vote share in Vienna climbed above 20% in 1994 and, with some exceptions, has remained above 20% since then. Within Vienna there is significant spatial variation between statistical wards ranging from a low of 5.4% to a high of 51.6% in 2017 (see also Figure 1). There is little systematic research on the detailed spatial variation in the FPOE vote in Vienna, but evidence from surveys and qualitative case studies suggests that the FPOE vote is significantly higher in the former bulwarks of socialism, in the municipal housing complexes of "Red Vienna" (Cavaille and Ferwerda, 2017;Reinprecht, 2012, 2018;Rosenberger and Permoser, 2012). This paper aims to fill the research gap and explain the spatial variation of FPOE vote shares in general, and the higher vote shares in wards with high tenant shares in municipal housing, in particular. In order to get a better understanding about why the FPOE vote share may be higher in municipal housing districts it is necessary to offer a brief discussion of the recent socio-demographic, political and regulatory changes altering the tenant composition of and relationship between the municipal and private housing sectors.
The provision of high-quality and affordable housing, physical and social infrastructure (outdoor pools, green spaces, and public transportation) is rooted in the prolonged experiment with municipal socialism -better known as "Red Vienna" (Kadi, 2015;Matznetter, 2002, 2019;Novy et al., 2001). Following on from those policies, approximately 220,000 or 26.3% of all housing units are owned by the municipality of Vienna. 3 Not surprisingly, as the provision of affordable and high quality public housing for the working and aspiring middle classes provided a key cornerstone of Social Democratic policy in Vienna in the post-war period, municipal housing tenants have made up one of the core voter bases of the Social-Democratic Party (Rosenberger and Permoser, 2012). However, this changed in the late 1980s/early 1990s.
Up to the late 1980s, population decline, a high share of poor quality housing stock in the regulated private rental housing sector, 4 expansion of high quality municipal housing stock, and high maximum income thresholds for access to the public housing sector 5 made the publicly rented sector attractive for Austrian citizens while foreign residents were excluded from the public housing sector altogether. These trends resulted in a socially diverse but ethnically homogeneous municipal housing tenant population: the shares of the formally less educated and unemployed were only marginally higher in municipal housing wards, while the legal exclusion of foreign citizens from municipal housing ruled out cultural conflicts with, or perceived economic threats from, foreign residents. But since the early 1990s, social diversity declined while citizenship diversity in the public housing sector increased for the following reasons. First, demand for cheap accommodation was driven up by rapid population growth. Vienna's population increased from 1.53 million in 1982 to 1.90 million in 2019. Over this period the number of Austrian citizens fell slightly while the number of foreign citizens increased 6 from 116,255 or 7.6% of the population in 1982 to 572,834 or 30.1% of the population in 2019 (Statistik Austria, 2021). Second, a series of changes to the strict rental laws in the regulated privately rented housing sector in 1982, 1986 and 1994 opened up a rent gap that made private investment in this segment of the housing market suddenly attractive (Kadi, 2015;Kadi and Verlic, 2019;Matznetter, 2019). Because of those changes, rents in the private housing sector doubled between 1985 and 1993 (Novy et al., 2001: 136). Between 2001 and 2010 sales prices rose by 153% in the private housing sector 7 and rents in the (now less) regulated private sector rose by 67% (in comparison to 37% for all rental types).
Third, the Great Recession deflated housing price bubbles in United States and United Kingdom metropolitan real estate markets but had the opposite effect on Vienna's private housing market where investors continued to perceive rent gaps. Between 2008 and 2017, rents increased by 53.3% in the privately rented housing sector and by 20.5% in the public housing sector. The trends opened up a significant gap between privately and publicly rented accommodation costs. In 2016/2017 rental prices were 8€/square metre for municipal housing flats and 11.40€/square metre for flats in the private sector (Kadi and Verlic, 2019;Reinprecht, 2019;Tockner, 2017). Fourth, accession to the EU meant that the public housing market had to be opened to EU citizens in 1995 and to all foreign citizens with permanent residence cards in 2006.
In combination, those changes meant enhanced attractiveness of the public housing sector in comparison to the privately rented housing sector but also longer waiting lists, longer waiting times 8 and higher eviction rates in the public housing sector. The steep rise in accommodation cost and hence, decreasing accessibility of the privately rented sector for those on lower incomes, required Wiener Wohnen, the organization in charge of distributing available municipal flats and managing the municipal housing stock, to focus more intensely on those more vulnerable, those in insecure and precarious living and working conditions, less educated and poorer segments of society when flats become available. As a result, the relative economic situation and social status of those renting from the municipality relative to private renters and owners in a district worsened considerably. Social diversity in wards with high shares of municipal buildings has declined while citizenship diversity increased between 1991 and 2019 as the share of those without formal education, the unemployed, those threatened by absolute poverty, and third-country 9 foreign residents increased substantially (see Figure 2 below) (Simons and Tielkes, 2020). The rapid rise of foreign citizens in previously ethnically homogeneous social housing complexes allowed the FPOE to generate feelings of insecurity and fear (Marquart, 2013;Wodak and Forchtner, 2014) and exploit conflicts between old and new residents by narrating lines of conflict along, almost exclusively, ethnic lines (Reinprecht, 2012;Rosenberger and Permoser, 2012).
As a result of those changes and in accordance with the literature on the geography of discontent, FPOE vote shares in wards with high shares of municipal housing could be explained as follows: differences in FPOE vote shares could be the result of sorting of individuals with different characteristics into different wards. In this case and following the literature discussed above, those wards with a high share of old, unemployed, unskilled voters would be expected to depict higher FPOE vote shares. Because those groups of voters are now overrepresented in municipal housing units, this may explain entirely higher FPOE vote shares in wards with large shares of municipal housing units. If, in addition to those compositional effects, local contextual effects influence FPOE vote shares, then we would expect those wards with larger increases in foreign population shares as well as those with higher rates of population growth and growth in unemployment (as indicators of the economic deterioration of the neighbourhood) to exhibit higher FPOE vote shares. As argued above, growing cities exhibit different challenges than declining regions and hence, if pressure on local amenities exist, then local population growth, not decline (as argued in studies examining regional differences in populist vote shares) should increase local discontent; and higher FPOE vote shares in wards with high shares of municipal housing tenants coupled with fast increases of foreign residents could be an indication of a distributional conflict between native and foreign residents (Cavaille and Ferwerda, 2017;Rodrik, 2018).
We thus ask the following questions: Which factors identified by the literature on "the geography of discontent" are driving populist right voting in the fast-growing city of Vienna? Is the anecdotal evidence of higher FPOE vote shares in municipal housing units (Permoser and Reinprecht, 2012) substantiated by our research after controlling for compositional and contextual effects? Is the rapid increase in the share of non-EU citizens a driving force of populist right voting in Vienna? Based on the literature on the "competition for welfare state services", is a large increase in the share of non-EU citizens in municipal housing wards responsible for higher populist right vote shares in those wards?
Method and data
In order to evaluate the impact of composition, context and competition for municipal housing, the following cross-sectional model was estimated as given by Equation (1):

FPOE%_i = α + β1·Public%_i + β2·ΔFOR%_i + β3·(Public%_i × ΔFOR%_i) + γ′X_i + δ′ΔX_i + ε_i (1)

where FPOE% denotes the FPOE vote share (in per cent) in the year t = 2017 national elections in statistical ward i, Public% refers to the percentage of municipal housing units, ΔFOR% refers to the percentage change in non-EU foreign residents between 1991 and 2017, Public% × ΔFOR% refers to an interaction term between Public% and ΔFOR%, X is a vector of other variables that have been identified in the literature to increase the populist vote and, following the discussion above, include age, education, unemployment, the share of foreign citizens, ΔX is a vector capturing neighbourhood population and employment change, α is a constant term and ε is the error term with the usual assumed properties. We estimate different combinations of this model to measure the relative impact of those variables on the FPOE vote shares. We compare the parameter estimates of our contextual and compositional variables to those found in the literature discussed above to evaluate their relevance in an intra-urban, growing city context. Our main focus is on the variables Public% and ΔFOR%. We want to establish whether FPOE vote shares are significantly higher in wards with a large share of municipal housing residents, if rising shares of foreign citizens influence the populist right vote (independent of social/private tenant composition) and if there is an additional effect of rising shares of foreign citizens in municipal housing wards. We then examine if the effects persist after controlling for compositional and contextual effects. If Public% remains positive this would confirm qualitative case studies on the higher prevalence of populist right voting in municipal housing complexes.
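A model of this form can be estimated by ordinary least squares. The sketch below is illustrative only: it uses synthetic ward-level data with invented effect sizes (not the paper's actual data or estimates) to show how the Public% × ΔFOR% interaction term enters the design matrix and how its coefficient would be tested:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1290  # number of statistical wards in Vienna, as in the paper

# Synthetic covariates (values invented for illustration)
public = rng.uniform(0, 80, n)   # Public%: share of municipal housing residents
dfor = rng.uniform(0, 20, n)     # dFOR%: change in non-EU foreign citizen share
educ = rng.uniform(5, 40, n)     # Educ_AUT%: compulsory-education-only share

# Simulate FPOE vote shares with a small positive interaction effect (0.01)
fpoe = (10.0 + 0.05 * public + 0.3 * dfor
        + 0.01 * public * dfor + 0.4 * educ
        + rng.normal(0.0, 2.0, n))

# Design matrix: constant, main effects, Public% x dFOR% interaction, control
X = np.column_stack([np.ones(n), public, dfor, public * dfor, educ])

# OLS estimates and conventional standard errors
beta, *_ = np.linalg.lstsq(X, fpoe, rcond=None)
resid = fpoe - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]

# A significantly positive interaction coefficient would be read as consistent
# with the welfare-service-competition thesis discussed in the text
print(f"interaction estimate: {beta[3]:.4f}, t = {t_interaction:.1f}")
```

With real data, the same design matrix would simply be built from the ward-level variables defined in the data section, and different combinations of regressors compared as described above; in practice a dedicated econometrics package would also report the full diagnostics.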
If ΔFOR% remains significant (but Public% does not remain significantly positive), we would attribute this result to cultural and economic fears of Austrian voters in general, rather than municipal housing residents in particular. If the interaction effect is statistically significant and positive, we interpret the result as consistent with the "welfare state service competition thesis" discussed above (Cavaille and Ferwerda, 2017). The data for the empirical analysis come from various sources. Information on the detailed electoral outcomes of the 2017 national elections at the electoral ward level, our dependent variable, is available from the website of the City of Vienna. 10 Data for national, local, presidential and European elections are available from 1995 to 2019. Unfortunately, electoral ward boundaries change frequently and the implementation of those changes is unsystematic as the availability of voting locations or the number of eligible voters will change. Electoral ward boundary maps are only available from the local elections of 2005. Therefore, elections prior to 2005 cannot be matched to statistical wards. Because of those data limitations this paper focuses on the 2017 national elections when the populist right FPOE party won 21.35% of the vote. After 1999, this constitutes the highest vote share obtained by the FPOE in Vienna. Ideally, we would have liked to compare the results for the 2017 election with elections in the early 1990s in order to get a better understanding of the impact of the described socio-demographic and economic changes on electoral outcomes. Unfortunately, without historical boundary data for electoral wards we cannot examine long term differences in the spatial pattern of the FPOE vote shares. National elections are chosen to avoid idiosyncratic local electoral issues such as cycle path construction, new developments, park space reductions or tree planting that are often important issues for local elections.
Although information at the block group level would have been preferable, statistical wards are the smallest geographical units for which socio-economic information is available. Moreover, as socio-economic and socio-demographic information was provided by the Division of Economy, Work and Statistics (MA23) of the Vienna City Government, ward data were only available for wards within the administrative boundaries of the City of Vienna. We define all individual variables in turn: Public%: The percentage of residents in municipal housing flats in 2011, the last census year for which this variable is available. We would expect this variable to be positively related to the FPOE vote share.
ΔFOR%:
The percentage point change of the share of non-EU foreign citizens between 1991 and 2017. We chose non-EU foreign citizens, as this is the group that is targeted by the FPOE to stir anti-immigrant sentiments (Cavaille and Ferwerda, 2017;Marquart, 2013) and expect this variable to be positively related to the FPOE vote share.
FOR%:
The percentage of non-EU foreign citizens in 2017. We include the levels of immigration because two hypotheses on their influence on populist right voting exist. The share of non-EU foreign residents may lead to an increased probability of conflict between Austrian and foreign citizens such that a higher share of foreign citizens should result in higher FPOE vote shares. However, the contact hypothesis (Allport, 1954) states that increased contact with foreign residents reduces prejudice and increases mutual understanding and respect which should translate into lower FPOE shares. It is thus useful to distinguish between the share of foreigners and the increase in foreigner shares in an area. In line with Essletzbichler et al. (2018) we would expect this variable to be negatively related to the FPOE vote share.
Unemp_AUT%: The unemployment rate of Austrian citizens in 2017 in per cent. The variable captures economic insecurity of eligible voters.
Unemp%: The ward level unemployment rate (for all residents in working age) in 2017 in per cent. Contrary to Unemp_AUT%, Unemp% is interpreted as a contextual variable, a proxy for the economic conditions of a ward. We would expect both of the variables to be related positively to the share of the FPOE vote.
Educ_AUT%: The percentage of Austrian citizens with compulsory education only. We expect this variable to be positively correlated with the FPOE vote share.
Educ%:
The percentage of all residents with compulsory education only. We consider this variable as proxy of the social class/milieu of a ward.
Age%: The percentage of the population aged ⩾60 years. Unfortunately, we do not have this variable separated for Austrian and non-EU citizens and hence can only use the share of the older population for the ward population as a whole. According to the cultural cleavage theory, we would expect this variable to be related positively to the radical right vote share, but according to the election survey after the 2017 national elections, the share of the FPOE vote was higher among those who were 16-29 years old (30%) than among those who were ⩾60 years old (19%) (SORA/ISA, 2017). We are thus uncertain about the sign of the parameter estimate.
ΔUnemp_ij%: Spatially weighted average rate of change in unemployment between 1991 and 2017 in per cent. We consider the variable as a proxy for deteriorating economic conditions at the neighbourhood level. Extending the formal definition of the neighbourhood beyond individual wards assumes that voters are reacting also to economic conditions beyond their immediate wards of residence and that they are more likely to respond to conditions in close geographical proximity rather than to average conditions at the level of districts or the city as a whole. Using spatially weighted data (rather than individual ward information) also has the advantage that the variable does not correlate excessively with Unemp_AUT%. We expect this variable to be positively related to FPOE vote shares. Neighbours are defined through queen contiguity (q), ten nearest neighbours (k10) and twenty nearest neighbours (k20) methods. The resulting spatial weights matrices are row standardized and include the main diagonal.
Unemp_ij%: Spatially weighted unemployment rate. Spatial weights are calculated as above. We expect this variable to be associated positively with the FPOE vote share.
ΔPop%: Spatially weighted rate of population changes in per cent, 1991-2017. We expect this variable to be related positively with FPOE vote shares as, everything else equal, population growth exerts pressure on local amenities and services (parks, doctors, schools, public transport, and housing) that could lead to frustration and hence, a higher propensity to vote for the FPOE. It is a measure of rising demand for local services that may or may not be met by an increase in supply. Unfortunately, we do not have information on levels and changes in the supply of local amenities. Spatial weights are calculated as above.
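The spatially weighted (neighbourhood) variables above can be illustrated with a minimal sketch. This is not the authors' actual tooling; it assumes ward centroid coordinates are available, builds a row-standardized k-nearest-neighbour weights matrix that, as described in the text, includes the main diagonal, and applies it to a ward-level variable:

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardized k-nearest-neighbour spatial weights matrix.

    Each ward is linked to its k nearest wards by centroid distance;
    the main diagonal (the ward itself) is included, so every row
    averages over k + 1 wards.
    """
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    # pairwise Euclidean distances between ward centroids
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        # self has distance 0, so it is always among the k + 1 nearest
        nearest = np.argsort(d[i])[: k + 1]
        W[i, nearest] = 1.0
    return W / W.sum(axis=1, keepdims=True)   # row-standardize

def spatial_lag(W, x):
    """Spatially weighted average of variable x (e.g. Unemp_ij%)."""
    return W @ np.asarray(x, dtype=float)
```

With queen contiguity, the neighbour sets would instead come from shared polygon boundaries; the row-standardization and lag steps are identical.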
Because data on elections (our dependent variable) are available at the electoral ward level and socio-economic information (our independent variables) is available at the statistical ward level, we had to merge the two data sources geographically by allocating vote shares of electoral wards to statistical wards based on the share of an electoral ward's area located in a statistical ward. The allocation of voting data to statistical wards was carried out with ArcGIS. The result is a consistent dataset for 1290 statistical wards. Figure 1 illustrates substantial spatial variation in the FPOE vote as well as the distribution of municipal housing across the city.
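The areal apportionment just described can be sketched in a few lines. The function and dictionary names are illustrative; in practice the intersection areas come from the GIS overlay (here, ArcGIS):

```python
def allocate_votes(electoral_votes, overlap_area):
    """Split each electoral ward's votes across statistical wards in
    proportion to the share of the electoral ward's area that falls
    inside each statistical ward (simple areal weighting).

    electoral_votes: {electoral_ward: vote_count}
    overlap_area:    {(electoral_ward, statistical_ward): shared_area}
    """
    # total overlap area of each electoral ward across all statistical wards
    total = {}
    for (e_ward, _), area in overlap_area.items():
        total[e_ward] = total.get(e_ward, 0.0) + area
    allocated = {}
    for (e_ward, s_ward), area in overlap_area.items():
        share = area / total[e_ward]          # area share of this overlap
        allocated[s_ward] = allocated.get(s_ward, 0.0) + electoral_votes[e_ward] * share
    return allocated
```

Areal weighting implicitly assumes votes are spread evenly over an electoral ward's area, which is the standard caveat of this kind of merge.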
Empirical analysis
The FPOE vote ranges from 5.4% to 51.6%, while the share of municipal housing residents ranges from 0% to 99.8% (see also Table 2 in the Appendix). A first glance at the map suggests a relationship between the share of public housing and the FPOE vote share. The unstandardized parameter estimates are reported in Table 1, which confirms the positive relationship between the share of municipal housing residents and FPOE vote shares (Model (1)). Next, we explore whether this positive relationship persists after controlling for rising shares of non-EU citizens and for compositional and contextual effects. Given the demographic trends, the rise in housing costs and the regulatory changes described above, we would expect a faster increase of non-EU citizen shares in wards with large municipal housing shares. Also, as a result of rising housing costs, we would now expect a higher share of the economically vulnerable, the unemployed and those with little formal education, to live in wards with a large municipal housing stock compared with earlier years. Figure 2 shows that to be the case. Figure 2(a) depicts a general rise of non-EU citizens across the whole city of Vienna, but this increase is larger in wards with higher shares of municipal housing residents. Figure 2(b) and (c) depict the trends for unemployment rates and compulsory education shares of Austrians. While unemployment rates and compulsory education shares were already higher in municipal housing wards in 1991, unemployment rates among Austrian citizens subsequently increased substantially in those wards, while the general decline in the share of those with compulsory education only was substantially smaller in wards with high municipal housing shares. These trends are consistent with arguments about the socio-economic residualization of the public housing segment and with survey-based information on changes in the social and ethnic composition of the municipal housing sector (Simons and Tielkes, 2020).
They point to the need to control for compositional effects in order to establish whether higher FPOE shares in public housing wards are simply due to the fact that they house a larger share of economically vulnerable and culturally left behind segments of the population.
We augment Model (1) with the percentage of non-EU citizens (FOR%) and the percentage change of non-EU citizens (ΔFOR%) in Model (2) and include an interaction term between the percentage of municipal housing residents and the change in non-EU citizens (Public% x ΔFOR%) in Model (3). Models (1) to (3) appear to be consistent with our expectations that FPOE vote shares are higher in wards characterized by higher shares of public housing residents and a faster increase in non-EU citizens, as well as with the "welfare sector service competition" hypothesis, which predicts higher FPOE vote shares in wards with more municipal housing residents coupled with faster increases in non-EU citizens. However, as these models are likely to suffer from omitted variable bias, we control for the social and economic composition of voters (Model (4)), a ward's socio-economic context (Model (5)) and voter composition and context together (Models (6) and (7)). Including unemployment, education and age in Model (4) illustrates that higher FPOE vote shares in wards with high percentages of municipal housing residents reflect the over-representation of unemployed and formally lower educated voters in these wards. Including these variables raises the adjusted R-square from 0.124 to 0.681.
Replacing the unemployment rates and education levels of Austrian voters with those for the population as a whole produces comparable results, suggesting that those variables are highly correlated (r Unemp, Unemp_AUT = 0.94 and r Educ, Educ_AUT = 0.98). In order to enter voter-specific and neighbourhood effects together, the spatially weighted values of unemployment and population change are added in Model (6) and the rates of change in neighbourhood unemployment and population are added in Model (7). Adding these variables improves the model fit, and we thus focus our discussion on the parameter estimates of Model (7) and compare them with those of Models (4) to (6).
In those model versions, the percentage of municipal housing residents exhibits a negative and statistically significant parameter estimate. Everything else equal and in the case of zero change in non-EU residents, a shift from 1.5% (quartile 1) to 27% (quartile 3) of residents in municipal housing wards would result in a 0.82 percentage point decline in the expected FPOE vote share. Shifting from a ward with 0% to a ward with 100% of municipal housing residents would reduce the expected FPOE vote share by 3.2 percentage points. Shifting from wards with 1.5% municipal housing residents and a 0.5 percentage point increase of non-EU citizens (quartile 1) to wards with 27% municipal housing residents and a 10 percentage point increase of non-EU citizens (quartile 3) is associated with a 0.65 percentage point decline in the expected vote share, that is, the expected decline of the FPOE vote share in wards with high shares of municipal housing residents is dampened by higher non-EU citizen share increases. However, as the interaction effect is significant only at the 0.1 level, the impact on mediating the relationship between the share of municipal housing residents and radical right voting is modest at best. The positive interaction effect also tends to disappear in models where we bin Public% and/or ΔFOR% (see Tables OA2 and OA3 for these robustness checks in the Online Appendix). 11 With few exceptions, the parameter estimates for the share of non-EU citizens, the share of the older population and the rise in non-EU citizens are insignificant. Somewhat surprisingly, the rate of unemployment among Austrian citizens is negatively related to the FPOE vote share, but this is likely the result of the high correlation with education (r = 0.64). 12 The most important variable linked to high FPOE voter shares is the low formal education of voters.
Everything else equal, a shift from wards with 12.8% to 29.7% of voters with compulsory education only (from 25th to 75th percentile) is associated with an 8.1 percentage point increase in the expected FPOE vote share. Confirming other studies, education is the most important explanatory factor to explain geographical differences in radical right vote shares (Ford and Goodwin, 2017;Gordon, 2018;Hobolt, 2016;Lee et al., 2018).
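The quartile-shift arithmetic in the preceding paragraphs follows directly from the linear specification with an interaction term. The sketch below uses illustrative coefficients chosen only to reproduce the magnitudes quoted in the text (-3.2, -0.82 and -0.65 percentage points); they are not the estimates reported in Table 1:

```python
def predicted_change(b_public, b_dfor, b_inter,
                     public0, dfor0, public1, dfor1):
    """Expected change in the FPOE vote share (percentage points) when
    moving between two (Public%, dFOR%) profiles in a linear model
    y = ... + b_public*P + b_dfor*F + b_inter*P*F (all else equal)."""
    y0 = b_public * public0 + b_dfor * dfor0 + b_inter * public0 * dfor0
    y1 = b_public * public1 + b_dfor * dfor1 + b_inter * public1 * dfor1
    return y1 - y0

# Illustrative coefficients (NOT the paper's Table 1 estimates):
b_public, b_dfor, b_inter = -0.032, 0.0, 0.0006

full_shift = predicted_change(b_public, b_dfor, b_inter, 0, 0, 100, 0)      # about -3.2
quartile = predicted_change(b_public, b_dfor, b_inter, 1.5, 0, 27, 0)       # about -0.82
with_dfor = predicted_change(b_public, b_dfor, b_inter, 1.5, 0.5, 27, 10)   # about -0.65
```

The positive interaction coefficient is what dampens the negative Public% effect when the non-EU citizen share also rises, as described in the text.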
After education, neighbourhood context exerts the strongest impact on the FPOE vote share. Both the rate of unemployment in the neighbourhood (Model (6)) and the rate of change in unemployment (Model (7)) exhibit a positive and significant impact on FPOE vote shares. A neighbourhood with a one percentage point higher unemployment rate is expected to be associated with a 0.86 percentage point higher FPOE vote share, while a neighbourhood with a one percentage point higher rate of unemployment change is associated with an expected 1.32 percentage point higher FPOE vote share. 13 This may indicate that voters react more strongly to the economic situation in their neighbourhood than to their personal economic circumstances (cf. Manow, 2018). The positive effect of population growth on FPOE vote shares suggests that pressure on local services through local population growth (higher house prices, crowded parks and public transportation, fewer school places, etc.) may result in voter dissatisfaction that the FPOE can capitalize on. Population growth in the neighbourhood in general, rather than the growth in non-EU citizens, appears to drive the FPOE vote.
Conclusions
While recent work in economic geography has focused on high populist vote shares in "places left behind" (Rodriguez-Pose, 2018), this paper examined the spatial variation in populist right vote shares in Vienna, a fast-growing city in one of the most prosperous EU countries. Since the national elections of 1995, the vote share of the populist right party, FPOE, exceeded 20% in Vienna and 50% in individual statistical wards. The arguments linking globalization with regional decline and rising populist vote shares (Rodrik, 2018) appear, at first sight, inappropriate to explain intra-urban variation in populist right voting in a dynamic and rapidly growing city.
However, explanations linking economic and cultural changes, emerging cleavages and rising levels of insecurity among those with little formal education and the unemployed to an increased demand for populist messages, parties and leaders should hold across all geographical contexts, including growing cities and regions, even if the specific mechanisms generating grievances vary. Dynamic cities, such as Vienna, are characterized by rapid population growth exerting pressure on housing markets and social services. In the case of Vienna, the population increase was due entirely to an increase in the number of foreign citizens, a fact easily exploited by the FPOE (Marquart, 2013). The link between immigration and (perceived) resulting pressure on welfare services has been identified as a distinctive feature of populist right voting (Manow, 2018; Rodrik, 2018). We thus examined the relative importance of those factors for explaining the spatial variation in the FPOE vote.
Our analysis suggests that first, the share of Austrian citizens with compulsory education or no formal education is the most important factor to explain the spatial variation in the FPOE vote share. This suggests that intra-urban variation in populist voting is associated with sorting processes where those with relatively low education and consequently, low income and higher probability of unemployment, are forced to concentrate in neighbourhoods with lower housing costs including wards with large shares of municipal housing units. Second, neighbourhood economic conditions have a significant impact on the populist right vote share and appear more important than the economic conditions of individual voters (Manow, 2018). Neighbourhoods with relatively high levels and/or increases in unemployment are associated with higher populist right vote shares. Third, discontent in a rapidly growing city appears to originate from neighbourhood population growth, not decline. This suggests that pressure on neighbourhood amenities triggers discontent that can be exploited by radical right parties. Fourth, contrary to expectations (Cavaille and Ferwerda, 2017;Rodrik, 2018), we cannot find robust support for the argument that competition for municipal housing generates support for the FPOE. Once we control for the economic and social characteristics of residents as well as the economic conditions and population growth in the neighbourhood, the share of municipal housing residents is negatively associated with FPOE vote shares. The positive relationship between municipal housing and populist right voting emerges because of higher shares of economically and culturally vulnerable populations in those wards, in the case of Vienna, a result of rapid socio-economic and demographic change.
Given the exploratory nature of our analysis, these conclusions are tentative. In order to disentangle the impact of the changing socio-demographic and socio-economic composition of municipal housing wards from the impact of increasing ethnic competition for municipal housing units, we would require longer panels going back to the 1990s, a time when municipal housing was closed to foreign citizens, or access to block level or individual level data. Unfortunately, this information is not available.

Notes
… criteria with income being only one of several (see: https://wohnberatung-wien.at/wiener-wohn-ticket/allgemeines (accessed 11 July 2020)).
6. The driving forces of this growth were the War in Former Yugoslavia, the Fall of the Berlin Wall, the accession of Austria to the European Union (EU) in 1995, the EU accession of Eastern European countries in 2004 and 2007, and the influx of displaced persons from the Syrian War in 2015.
7. The price increases were particularly severe in 3-4 story buildings constructed prior to 1914 with a large share of substandard, previously rent regulated apartments that investors improve and are then able to rent out or sell at a higher price (Kadi, 2015; Kadi and Verlic, 2019).
8. There are no exact numbers on waiting times and waiting lists. Per year between 8000 and 10,000 municipal flats are newly rented out. In the period 2009-2011 applicants had to wait between 1½ and 2 years for a municipal housing flat, and in 2018, 25,000 people applied to rent from the municipality (Simons and Tielkes, 2020).
9. Third-country foreign residents refer to non-European Union residents.
10. See: https://www.wien.gv.at/wahlergebnis/de/NR191/index.html (accessed 7 July 2020).
11. Hainmueller et al. (2019) discuss the value of binning interactive models as the assumption of linearity is often violated in empirical studies employing interaction terms.
We provide a number of robustness tests with different bins for ΔFOR% and/or Public% and find no significant interaction terms for most of the specifications.
12. Removing education from the set of independent variables in Model (4) yields a positive and significant parameter estimate for unemployment, but the model fit declines substantially. The adjusted R-square declines from 0.68 to 0.18.
13. These results are robust to changes in the spatial weights matrix (see Online Appendix Table OA1).
"year": 2021,
"sha1": "0b08c227453b0c553b9ad738f8ebca2930cd8fdd",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/09697764211031622",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "560fc509ea4530f4829b371f77bc04449f01bae3",
"s2fieldsofstudy": [
"Economics",
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
A snapshot of European neurosurgery December 2019 vs. March 2020: just before and during the Covid-19 pandemic
Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2 or Covid-19), which began as an epidemic in China and spread globally as a pandemic, has necessitated resource management to meet emergency needs of Covid-19 patients and other emergent cases. We have conducted a survey to analyze caseload and measures to adapt indications for a perception of crisis.
Methods We constructed a questionnaire to survey a snapshot of neurosurgical activity, resources, and indications during 1 week with usual activity in December 2019 and 1 week during the SARS-CoV-2 pandemic in March 2020. The questionnaire was sent to 34 neurosurgical departments in Europe; 25 departments returned responses within 5 days.
Results We found unexpectedly large differences in resources and indications already before the pandemic. Differences were also large in how much practice and resources changed during the pandemic. Neurosurgical beds and neuro-intensive care beds were significantly decreased from December 2019 to March 2020. The utilization of resources decreased via less demand for care of brain injuries and subarachnoid hemorrhage, postponing surgery and changed surgical indications as a method of rationing resources. Twenty departments (80%) reduced activity extensively, and the same proportion stated that they were no longer able to provide care according to legitimate medical needs.
Conclusion Neurosurgical centers responded swiftly and effectively to a sudden decrease of neurosurgical capacity due to relocation of resources to pandemic care. The pandemic led to rationing of neurosurgical care in 80% of responding centers. We saw a relation between resources before the pandemic and ability to uphold neurosurgical services.
The observation of extensive differences of available beds provided an opportunity to show how resources that had been restricted already under normal conditions translated to rationing of care that may not be acceptable to the public of seemingly affluent European countries. Electronic supplementary material The online version of this article (10.1007/s00701-020-04482-8) contains supplementary material, which is available to authorized users.
Introduction
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2 or Covid-19) pandemic has forcibly affected healthcare in subspecialties other than those primarily involved: intensive care, infectious diseases, and general practice. Neurosurgery is influenced by the redistribution of medical resources to those acutely needing Covid-19 care and by the need to handle or prevent Covid-19 among neurosurgical patients and staff [1-5].
This article is part of the Topical Collection on Neurosurgery general.

Several editorials, letters, and articles have given accounts of neurosurgery during the Covid-19 pandemic [1, 4-9]. The immediate responses include diversion of ventilators and intensive care resources to prepare management of Covid-19 patients, with a subsequent decrease in neurosurgical resources and extensive postponing of elective patients. Patients are triaged and prioritized to manage all patients' medical needs. During the pandemic, regular neurosurgical emergencies still occur [4], and the needs of these patients must be coordinated with the extraordinary demands of healthcare for Covid-19 patients. With terminology such as "triage" and "prioritization," the public and professionals communicate that all medical needs can be met, although extreme adjustments and measures are necessary (Mathiesen T., submitted). Still, rationing can also become necessary. Practically, rationing can be initiated horizontally by limiting resources for urgent neurosurgery and vertically by changing indications for surgery or intensive care.
We have undertaken a questionnaire survey of 25 neurosurgical departments in Europe to identify differences and similarities of resources, caseload, and indications during 1 week of presumed regular practice in December 2019 compared with a week in March 2020, when practice was expected to be heavily influenced by the pandemic. The aim was to survey differences and similarities in how neurosurgical care was affected by the Covid-19 pandemic.
Methods
A brief questionnaire (Appendix 1) was constructed to survey catchment areas, neurosurgical bed availability, caseload, need of rationing/prioritization, and indications for treatment during 1 week in December 2019 (Monday, December 9, 2019, to Sunday, December 15, 2019) compared with 1 week in March 2020 (Monday, March 23, 2020, to Sunday, March 29, 2020). The questionnaire was sent between March 31 and April 3, 2020, to one local investigator per center, identified either as a board member of the European Association of Neurosurgical Societies (EANS) or a member of the EANS Ethico-legal committee, or chosen from personal networks to represent countries not represented in the two previous bodies. The intention was to cover different European regions via member countries of EANS, with one department in every country, and to obtain better coverage of Italy and Spain, the two countries that were initially most severely struck by Covid-19. Each local investigator was asked to select five qualified neurosurgeons, including themselves, to respond to the questionnaires. The local investigators were asked either to return all five forms or to synthesize the center's five responses and submit the aggregated result before April 7, 2020. The forms were collected, and results were compiled centrally by the first author (TIM). Unclear data were clarified via telephone contact with the local investigators. Numeric data for beds and treated patients were normalized to a catchment area population of 1,000,000 for comparability. Data on Covid-19 caseload in different countries were assessed in May 2020 from reported cumulative deaths and diagnosed cases of Covid-19 per million population via the Worldometers website [10]. Statistical analyses comprised the Mann-Whitney U test, sign test, Fisher exact test, and t test, as specified in Results.
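The normalization and week-to-week comparison described above amount to simple rescaling; a minimal sketch, with function names of our choosing:

```python
def per_million(count, catchment_population):
    """Normalize a count (beds, operations, patients) to a rate per
    1,000,000 catchment area population, as in the survey tables."""
    return count * 1_000_000 / catchment_population

def pct_change(december_value, march_value):
    """Percentage change from the December 2019 week to the March 2020 week."""
    return 100.0 * (march_value - december_value) / december_value
```

For example, 54 neurosurgical beds serving a catchment of 1.8 million correspond to 30 beds per million, and a drop in mean elective craniotomies from 5.4 to 2.3 per million is a decrease of about 57%.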
Results
Requests to fill out questionnaires were sent to 34 centers in 26 countries, and responses were obtained from 25 centers in 18 countries (Table 1). The results comprise mean survey responses from five responders at each site (125 responders in total). Quantitative responses varied by less than 10% between individual responders in each center. Responses on the management of different hypothetical patients showed complete intracenter agreement in 20 centers (80%) and differed for one assessment in 1/5 responders in the remaining four centers. Nineteen centers responded with separate responses from the five surgeons, while six centers replied with one unified response collated by the local investigator. Subjective assessment of whether medical needs were met showed intracenter agreement in 20 centers (80%), while 1 or 2 of the five responders differed from the majority in the remaining five centers (20%).
Catchment areas and subjective evaluation of therapeutic challenges
The responding centers were regional or national tertiary university referral centers (n = 23, 92%) or regional neurosurgical hospitals (n = 2, 8%). The catchment areas varied from 450,000 to 5,000,000 persons. Four departments (16%) had catchment areas smaller than 800,000 (in Estonia, Spain, Turkey, and Belgium), eleven departments (44%) had catchment areas between 800,000 and 1,200,000, and ten departments (40%) had catchment areas larger than 1,500,000 (two in Sweden, England, Scotland, Finland, Denmark, two in Germany, Netherlands, and Ireland).
Twenty-four of the 25 centers (96%) graded their situation at the end of March as either "stable, but with concerns" (n = 14), "difficult, with extreme measures" (n = 8), or "desperate" (n = 2). One center did not respond to this question.
Neurosurgical beds
The number of neurosurgical beds (regular + intermediate care) varied from 3 to 84/1,000,000 inhabitants and Neuro-ICU beds from 2 to 42/1,000,000 inhabitants in December 2019. Eight departments had fewer than 25 beds per 1,000,000, and 6 departments had 5 or fewer neuro-ICU beds per 1,000,000 inhabitants (England, Netherlands, two in Sweden, Ireland, Germany).
Most responding centers reported fewer elective craniotomies for brain tumors during the surveyed week in March 2020 than December 2019 (z = 3.3, p < 0.001, sign test). Five centers (20%) reported an equal or increased number of elective craniotomies, while 20 (80%) reported decreased activity. Six of the centers with decreased activity did not perform any elective craniotomies during the surveyed week of March. The mean number of elective craniotomies was 5.4 (range 1.5-10) vs. 2.3 (range 0-10)/1,000,000 catchment population in December and March, respectively.
Attitudes toward medical need and available resources
Patients with legitimate medical needs
Eighteen centers (72%) reported that all patients with legitimate needs were cared for in December, but not in March (mean 39.4 neurosurgical + 12.5 neuro-ICU beds/million catchment area in December, 21.3 + 4.4 in March). Five centers (in Finland, Israel, Spain, Germany, and Switzerland) reported that all patients with legitimate needs received care at both time points (40.2 neurosurgical + 9.5 neuro-ICU beds/million catchment area in December, 27.2 + 5.8 in March), while two centers (8 + 3.5 beds in December, 6.0 + 3.5 in March) stated that some patients were left without legitimate care at both time points.
Demand for healthcare
Responders from twelve centers (48%) reported a consensus that demand for medical services will always be higher than the supply at either time point. Nine centers (36%) reported that the demand was higher than the supply in March, but not in December. Only four centers (16%) reported that the demand was not higher than supply at either time point (in Finland, Switzerland, one in Germany, Israel).
Prioritization
Prioritization was an issue that was discussed already in December 2019 in 15 centers. Twelve centers (48%) also reported to have a system for prioritization at that time, while seven (28%) reported to have initiated discussions and nine (36%) implemented a system in March 2020. Prioritization was neither discussed nor systematized in two centers (8%) (in Israel and Spain).
Indications and waiting list for seven hypothetical patients (Figs. 1 and 2)
Previously healthy 75-year-old patient with mild symptoms, surgically accessible glioblastoma (GBM; Figs. 1 and 2a)
All centers would operate on the GBM patient in December 2019. The waitlist for the GBM patient was reported as 7 days or less in 15 centers (60%) with a mean of 35.7 neurosurgical intermediate and general care beds/1,000,000 catchment area and 10-18 days in ten centers (40%) with a mean of 34.4 beds/1,000,000 catchment area.
Nineteen of 25 centers (76%) would also operate on the patient in March 2020, four (16%) with a doubled time to surgery (one center centralized elective neurosurgery to one regional hospital), and one with more rapid access. Six centers (24%) would not operate on this patient (Turkey, Ireland, Scotland, England, Greece, Sweden). The mean number of beds was higher in the former 19 centers (55/million in December 2019 and 16 in March 2020) than in the latter six (29 and 14, respectively); the difference in March 2020 was, however, not statistically significant (p = 0.46, Mann-Whitney U test). Ten of eighteen centers in the former group decreased their number of beds by less than 30% vs. one of five in the latter (p = 0.08, Fisher exact test).
Previously healthy 75-year-old patient with mild symptoms, surgically accessible convexity meningioma (Figs. 1 and 2b)
All centers would operate on the meningioma patient in December 2019, ten (40%) within 14 days, eight (32%) between 2 and 6 weeks, and five (20%) after 10 weeks (several months). Two centers (8%) gave no estimate of the waitlist. The waitlist for the meningioma patient was reported as 14 days or less in eight centers with a mean of 44.3 neurosurgical intermediate care and general care beds/1,000,000 catchment area, 3-6 weeks in eight centers with a mean of 36.9 beds/1,000,000 catchment area, and 8-20 weeks in five centers with a mean of 19.6 beds/1,000,000 catchment area.
Ten centers (40%) would operate the patient in March 2020 within 4-42 days; the waiting list would be increased by 1-4 weeks in four centers and unchanged in six. Four centers (16%) would not offer surgery to this patient, which may reflect "not during the pandemic" (with previous waiting list of 2, 4, and > 8 weeks). Eleven centers (44%) would postpone surgery until "after corona," which was projected as 10-24 weeks.
Previously healthy 60-year-old patient with cervical spinal stenosis and moderately progressive mild myelopathy (Figs. 1 and 2c)
All 23 applicable centers (two centers, in Israel and Denmark, are subspecialized and do not perform spinal surgery) would operate on the patient in December 2019, four within 14 days, fourteen between 2 and 6 weeks, and three after 12 weeks. Two centers gave no estimate of the waitlist. The waitlist for the spinal stenosis patient was reported as 4 weeks or less in 15 centers with a mean of 46.7 neurosurgical intermediate care and general care beds/1,000,000 catchment area and more than 6 weeks in seven centers with a mean of 25.7 beds/1,000,000 catchment area.
Seven centers would operate on the patient in March 2020 with an unchanged waitlist of 7-90 days; two would increase the waitlist from 1 and 2 weeks to 2 and 3 weeks, respectively; nine would postpone surgery until "after corona," which was projected as 10-24 weeks; five stated they would not operate on the patient, which may reflect "not during the pandemic" (Scotland, England, Ireland, Turkey, Greece). Of the latter seven centers, six had access to 6.0 or fewer neuro-ICU beds/1,000,000 catchment area, while twelve of the remaining 18 centers had more than 6.0 neuro-ICU beds/1,000,000 (p = 0.02; Fisher exact test).
The centers that would operate in December 2019 but not in March 2020 initially had 7.0 neuro-ICU beds/1,000,000 catchment area, which decreased by 87% to 0.9 per 1,000,000, while the eleven centers that would operate at both times had access to 13.5 neuro-ICU beds/1,000,000, which decreased by 50% to 6.7 per 1,000,000. The numbers of ICU beds were significantly lower in the centers that would not treat than in those that would treat in March 2020 (p = 0.013; Mann-Whitney U test), but not in December (p = 0.46; Mann-Whitney U test).
Previously healthy 65-year-old patient with hemiparesis, GCS 11, 60 cc, surgically accessible lobar ICH (Fig. 1)

Eighteen centers (72%) would operate the lobar ICH in a 65-year-old patient in both December 2019 and March 2020; four centers (16%) would offer surgery in December but not in March (one in Italy, Greece, one in Sweden, Ireland); three centers (12%) would not operate in either week.

Previously healthy 50-year-old patient with hemiparesis, GCS 11, 60 cc, surgically accessible lobar ICH (Fig. 1)

Twenty-two (88%) centers would operate the lobar ICH in a 50-year-old patient in both December 2019 and March 2020; three centers (12%) would neither operate in December 2019 nor March 2020 (Germany, Scotland, England). These three centers had six or fewer neuro-ICU beds/1,000,000, which was significantly less than available in the centers that would operate the patient (p = 0.04; Fisher exact test).
Previously healthy 50-year-old patient with subarachnoid hemorrhage (SAH), Hunt and Hess grade 4 (Fig. 1)

Twenty-one centers (84%) would admit the SAH patient to the ICU and place an external ventricular drain (EVD) in December and March; four centers (16%) would admit in December 2019 but not in March 2020 (one in Spain, Scotland, England, Ireland). These centers had 0.95 (range 0.2-2) neuro-ICU beds/1,000,000, while the centers that would admit the patient had 5.7 (range 0-21). The difference was not statistically significant (p = 0.09; t test).
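Several of the comparisons above rely on Fisher's exact test on 2x2 tables (e.g. few vs. many ICU beds crossed with refusing vs. offering treatment). As an illustrative, stdlib-only sketch — the counts below are hypothetical, not the survey's actual tables — the two-sided p-value sums the hypergeometric probabilities of every table with the observed margins that is no more likely than the observed one:

```python
from math import comb

def hypergeom_p(a, b, c, d):
    # probability of one specific 2x2 table [[a, b], [c, d]] under fixed margins
    return comb(a + b, a) * comb(c + d, c) / comb(a + b + c + d, a + c)

def fisher_exact_two_sided(a, b, c, d):
    # two-sided p: sum the probabilities of all tables sharing the observed
    # margins whose probability does not exceed that of the observed table
    row1, row2, col1 = a + b, c + d, a + c
    p_obs = hypergeom_p(a, b, c, d)
    total = 0.0
    for x in range(max(0, col1 - row2), min(row1, col1) + 1):
        p = hypergeom_p(x, row1 - x, col1 - x, row2 - (col1 - x))
        if p <= p_obs * (1 + 1e-9):  # tolerance for float ties
            total += p
    return min(total, 1.0)

# hypothetical table: (few beds / many beds) x (would not treat / would treat)
print(fisher_exact_two_sided(8, 2, 1, 5))  # ≈ 0.035
```

In practice one would use a standard implementation such as `scipy.stats.fisher_exact`; the sketch only makes the mechanics of the repeatedly cited test explicit.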
Changes in resources and activity between December 2019 and March 2020
Thirteen centers (52%) treated an equal number of SAH patients during the weeks in December 2019 and March 2020. Their neuro-ICU capacity had decreased by a mean of 50%, compared with eleven centers (44%) that treated fewer SAH patients, whose capacity had decreased by 80%. The proportional decrease was significantly higher in the latter group than in the former (p < 0.05, Mann-Whitney U test).
Eight centers (32%) treated an equal number of TBI patients during the weeks in December 2019 and March 2020. Their neuro-ICU capacity had decreased by a mean of 20%, compared with seventeen centers (68%) that treated fewer TBI patients, whose capacity had decreased by 84%. The proportional decrease was significantly higher in the latter group than in the former (p < 0.05, Mann-Whitney U test).
Seven centers (28%) performed an equal number of elective craniotomies during the weeks in December 2019 and March 2020. Their neurosurgical bed capacity had decreased by a mean of 20%, compared with seventeen centers (68%) that performed fewer craniotomies, whose capacity had decreased by 84%. The proportional decrease was significantly higher in the latter group than in the former (p < 0.05, Mann-Whitney U test).
Discussion
We obtained a snapshot of neurosurgical caseloads and indications during December 2019 and March 2020 and found major differences. The resources for neurosurgical patients decreased dramatically, with fewer intermediate and regular neurosurgical beds in March 2020 than in December 2019 in 19 of 25 responding centers; 16 centers reported fewer ICU beds. Correspondingly, admission and surgery of emergency cases as well as elective craniotomies decreased in most centers, while a minority of centers appeared to continue neurosurgical care almost unchanged. Nineteen centers reported that not all patients with legitimate medical needs could expect to have those needs met during the Covid-19 pandemic in the March 2020 week, and several centers gave examples of patients with neurosurgical emergencies that would not be treated.
Survey of practice during pandemic as reflected by changes
Preexisting differences in neurosurgical capacity and practice would fail to adequately reflect effects of the pandemic if only practices during the pandemic would be surveyed. Hence, we constructed a questionnaire that recorded practice and attitudes before and during the pandemic. The time points in December and March were chosen to reflect "regular practice" during a regular working week and practice affected by the Covid-19 pandemic in the vicinity of its European peak, respectively. We postulated that the supply of neurosurgical care would be in balance with demand and expectations in December 2019, but that supply would decrease and affect neurosurgical care during the pandemic. Several guidelines and recommendations have been recently published on how to manage the pandemic and possible shortage of resources [4,11,15,22]. The guidance comprises prioritization by postponing non-urgent cases, triaging of cases to use resources effectively and optimally meet individual medical needs, and, finally, rationing of cases: the process of selection of who will not be treated to optimally meet a medical need. We observed prioritization and rationing but with differences between centers. Optimization of care by reorganization was employed in some Italian and Spanish centers, where the responsibility of neurosurgical emergencies was concentrated to some centers while others took responsibility for regional elective care and still others shifted tasks to manage Covid-19 patients. Such task shifting [23] and concentration by sub-specialization are possible in healthcare systems that have several smaller centers, but not in countries like UK, Ireland, Denmark, Sweden, and Finland that already have fewer large regional centers with catchment areas often over 2,000,000 inhabitants. 
Moreover, the number of neurosurgical beds/1,000,000 was already low in the latter groups of countries, providing for only a small number of potential beds; these beds may have been used as efficiently as already under normal conditions. The mechanisms of demand and supply suggest that available resources affect how many and which patients are treated via local indications for admittance and surgery. Accordingly, regional centers in Europe with lowest number of beds responded that not all patients received therapy according to medical needs even before the pandemic in December 2019.
Adjustment of services to decreased supply of resources via prioritization and rationing
First, the demand for neurosurgery appears to have decreased during the pandemic. The number of SAH and TBI patients treated during the pandemic was significantly lower in March 2020 than in December 2019, although indications for emergency treatment of neurosurgical patients have remained similar and guidelines specifically state that neurosurgical emergencies need to be handled according to already accepted knowledge and experience [5,11,22,24]. Only three centers stated that they would not admit one hypothetical typical neurosurgical SAH patient in need of intensive care. Several authors have described a decrease in trauma, probably secondary to lockdown and decreased travel [12], which agrees with fewer TBI patients in intensive care in March 2020. A similar finding was evident for SAH patients, but the difference was smaller. It has been suggested that the incidence of SAH has also decreased, although SAH incidence may be prone to chance or seasonal flux. However, it is also possible that patients avoid medical consultations during the pandemic or get misdiagnosed because of confirmation-biased healthcare workers suspecting Covid-19 in patients suffering from headaches, as suggested recently [25].
The obvious default response to decreased resources was to prioritize patients who would risk death or permanent loss of function unless treated and postpone care that could wait. Accordingly, most centers reported fewer elective craniotomies and increased time on the waitlist before surgery. Interestingly, a hypothetical patient with glioblastoma had an unchanged or shortened expected time to surgery in 15 centers and a doubled time in four. Hypothetical patients with meningiomas or cervical spinal stenosis were treated with a similar timeframe in six and seven centers, respectively, while waitlist would be increased with a few weeks in three and four and postponed until "after corona" in eleven and ten. Postponing care implies that the patients will still be treated and would not risk permanent deficits from the extended wait. One might even speculate that patients scheduled for meningioma or cervical stenosis surgery might experience decrease or stabilization of symptoms that may change the original surgical indication and reverse the decision to operate. Conservative management has a place for slowly growing meningiomas and cervical pathology [18,26], and even surgeons probably operate a higher number of patients than theoretically necessary to avoid harm from the progressive disease. It is not possible to make an exact prognosis of which patients benefit from any treatment, and healthcare statistics typically deals with "number needed to treat" (NNT) as a measurement of how many patients must be treated for one to have the intended benefit. In pharmacological management, NNTs may be higher than 100 [27], while surgical therapies require values much closer to one; yet, the ideal of NNT = 1 is probably impossible to reach. It is probable that longer waitlist may force NNT closer to 1, if patients are followed closely and operated rapidly if their conditions deteriorate.
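The NNT reasoning above can be made concrete with a two-line calculation (the event rates below are invented for illustration): NNT is the reciprocal of the absolute risk reduction, so an intervention lowering the rate of deterioration from 40% to 30% must be given to ten patients for one to benefit.

```python
def number_needed_to_treat(control_event_rate, treated_event_rate):
    # NNT = 1 / ARR, where ARR is the absolute risk reduction
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT is undefined")
    return 1.0 / arr

# hypothetical rates: 40% of untreated vs 30% of operated patients deteriorate
print(round(number_needed_to_treat(0.40, 0.30)))  # → 10
```

The same arithmetic shows why pharmacological NNTs above 100 correspond to absolute risk reductions under one percentage point, while surgical indications aim at reductions large enough to push NNT toward one.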
Under conditions like a pandemic, resource constraints may ensue and health services must be provided with attention to cost efficiency and inevitable priority settings. All prioritization decisions bring controversy [21,28]. "Prioritization" and "triaging" suggest that patients will receive adequate treatment and do not necessarily risk harm from waiting or triaging to a certain treatment (Mathiesen T., submitted). In practice, prioritization assesses the severity of untoward consequences and the urgency of treatment needed to avoid harm [4]. Notably, the algorithm entails the possibility that postponed care can lead to irreversible loss of function or even death. "Prioritization" under such conditions is no longer compatible with everybody getting access to legitimate care. It is more correct to use the term "rationing," which explicitly clarifies that treatment is not offered to everyone with a legitimate need. Extensively postponed treatment until "post-corona" may prove to be a form of disguised rationing. Rationing was more transparent when centers reported changed indications for surgery and intensive care between the December and March weeks. Elective craniotomies would not be offered to our hypothetical patients with glioblastoma or meningioma in six centers, and five would not treat the elective patient with spinal stenosis. It is likely that the inability to offer surgery to the hypothetical GBM patient in our questionnaire would be an example of rationing that actually shortened the expected survival of the patient, since surgery postponed for several months would no longer be a relevant treatment option. The centers that modified surgical indications for elective surgery also reported restricted admission for the hypothetical emergency patients.
One might argue that surgery of a meningioma or spinal stenosis would be undertaken after corona restrictions were reversed, but not offering surgery to a GBM patient or patients with the urgent conditions ICH and SAH clearly constitutes rationing of care. In summary, some form of rationing via changed surgical indications was evident in ten of the 25 responding centers (40%). Rationing was clearly implemented to adjust to Covid-19. The ten centers that reported changed indications for surgery also stated that not all patients could get treatment according to legitimate needs.
Immediate changes of neurosurgical services in response to the pandemic

It appears that the immediate response in all centers was to postpone non-urgent care and to reconsider indications for patients in higher age groups or with severe conditions where benefit from neurosurgical treatment was uncertain, such as elderly patients with intracerebral hematoma or comatose patients with SAH. These measures were also typically recommended in guidelines and other articles, although publications became available when measures already were taken [5,6,11]. European neurosurgeons largely lack specific training, knowledge, or experience of practice during a pandemic, but acted similarly and, it appears, effectively, since only marginal changes appear to have affected the long-term prospects of most neurosurgical patients. This observation illustrates a human faculty to practically use available knowledge and solve a new unexpected problem. Moreover, guidelines gave only general recommendations and left interpretation and application to local medical experts. The post hoc publication of guidelines may seem ironic or useless, but we argue that the publications filled a need; they reflected extensive consensus and might serve as retrospective confirmation that the challenge was met appropriately and in agreement with peers. Still, our survey indicated that application varied extensively. It may well be affected by differences in severity of the pandemic, but we also found major differences in neurosurgical capacity per catchment population, catchment area of individual centers, and neurosurgical culture.
Collateral damage
The term "collateral damage" describes untoward effects on health that were indirectly caused by SARS-CoV-2 [14], which may have taken several forms related to resource shortage, misdiagnoses, and reluctance to fill available hospital beds.
It is obvious that diversion of healthcare resources to a new disease entity deprives existing therapies, at least during an adjustment phase. It is clear from our questionnaire that neurosurgical care has been rationed and patients have been deprived of neurosurgical care because of lacking resources and redistribution of resources. Perhaps the most obvious shift was that ventilators and anesthesiology staff were moved from neurosurgical care to lifesaving Covid-19 therapy.
Many countries in Europe became prepared in advance because they had realized what had happened in unexpectedly hit, unprepared Italy. Many centers in countries that were not severely hit by Covid-19 decreased their elective activity and restricted indications for surgery very rapidly, while their capacity remained comparable with centers with extensive Covid-19 loads. The response reflected a principle of precaution, since lack of preparation for the pandemic would have been negligent if the pandemic would have reached the specific region. A draconic shutdown of "non-emergency" treatment also in centers with a small load of Covid-19 was probably rational but might have unnecessarily deprived patients of neurosurgical care. Still, it is probable that the dichotomization between urgent and non-urgent care failed to handle patients with intermediate needs and may even have emphasized "urgency" over "benefit." Our hypothetical 75-year-old patient with ICH may have had a very urgent condition with limited benefit of surgery, while a less debilitating elective condition in our hypothetical elective patients may have had more to lose with extensively postponed surgery. Sufficient empirical data on postponed "non-urgent" surgery is not available, and the sudden need to prioritize has identified an important topic to survey-also to critically analyze practices in centers that already had institutionalized long waitlists prior to the pandemic.
The coronavirus can induce neurological disorders such as polyneuropathy, encephalopathy, and demyelinating lesions [29]. Headache, disturbance of consciousness, olfactory nerve dysfunction, and seizures have been reported among the symptoms of the disease. Subsequently, there is evidence of misdiagnoses when non-Covid-19-related symptoms of neurosurgical disease were mistakenly evaluated as Covid-19 symptoms, leading to patient's and doctor's delay [15,25]. Patients and physicians may have been reluctant to occupy hospital beds for fear of getting infected by SARS-CoV-2 [17]. Likewise, it was important to ensure that neurosurgical wards and the operating room were maintained free from Covid-19. This requires continuous active surveillance and testing [30].
Moreover, we have pushed a dual burden ahead. Many patients had their treatment postponed for months: it will become necessary to treat these patients while new patients get diagnosed and the total number of patients needing care accumulates. The other burden is the healthcare economical debt: financial resources were stretched by collapsing income and a failing economy, while healthcare spending to save lives increased.
Differences between centers and neurosurgical culture
The burden of Covid-19 differed between European countries. The fact that pandemics' burden of death varies from country to country is well known [20]. Five participating centers considered that all patients with legitimate medical needs received care as needed in December 2019 and March 2020, while fifteen centers could no longer meet all medical needs in March 2020, and five centers reported that some patients were left with unmet legitimate needs already in December 2019. Still, four centers apparently continued business as usual although some had considerable numbers of Covid-19 patients in the country.
As expected, our figures indicate that centers were worst affected if their regions were severely hit and if their pre hoc resources were comparatively low. In fact, some centers had fewer beds and longer waitlists even before the pandemic than other centers had during Covid-19 measures. Thus, small margins forced changes in indications and services to a high extent. It was also evident that the centers with comparatively low resources/million inhabitants stated doubts whether legitimate needs were met already before this pandemic.
Another issue is cultural difference of indications. Some centers did not consider any of the three hypothetical patients with intracerebral hematomas as surgical candidates. The observation may reflect a quest for evidence of benefit from prospective randomized trials, while others may have used evidence based on other literature and experiences. One issue to explore is whether the limitations in indications reflect adaptation to limited resources or whether resources have been limited secondary to decreased demand. The ethics of extrapolating results from the negative prospective randomized trials [19] with questionable external validity [13] merit separate studies. It is important to survey whether a relative lack of resources might have influenced the readiness to accept trial outcomes as evidence for non-neurosurgical management of intracerebral hemorrhage.
It appears that most centers and countries, even among those hit worst, have been able to deliver extensive neurosurgical services so that emergency surgery has only been limited in few centers, while most centers have postponed surgery of non-urgent character. Thus, a backlog has been created, and the impact of this backlog may also differ depending on available resources [2,16]. It is probable that large numbers of available beds will create less morbidity from extended waitlists than already limited resources. Taken together, our analyses indicate that resources in terms of available neurosurgical beds vary extensively in Europe. Centers with large margins have responded very effectively, and centers with small margins appeared to have provided cost-effective services before the pandemic but have upheld services only if their regions were hit mildly by Covid-19. In this context, the lack of consensus on a "constant need to prioritize healthcare" needs to be studied further. Several of the seemingly most affluent countries had few neurosurgical beds, reported an ongoing discourse on prioritization, and restricted services and indications to a higher extent than centers that did not report a discourse of prioritization. It is possible that the discourse of prioritization rapidly had focused on how to prioritize and decrease healthcare spending rather than to discuss which rationing is necessary and acceptable. The idea that prioritization of healthcare in the sense of "rationing" is maybe not necessary might need further exploration in the future.
Taken together, there was no unifying feature for the centers that maintained services. The majority was situated in areas with limited Covid-19 burden and none had very few beds per population. A family likeness might be a combination of sufficient numbers of beds, which would be higher in the regions struck hardest, and work ethics where all patients were considered to be entitled to having legitimate medical needs met.
Limitations
Our survey is entirely dependent on reports from the responding neurosurgeons and selected centers; we had a short timeframe to receive responses and may have failed to get relevant input. The responses from each center are representative for the centers but may not necessarily reflect larger regions, countries, or Europe, although they provide a snapshot from different areas with different healthcare systems and SARS-CoV-2 exposures. Even during the course of the pandemic in a particular department, patient management policy could have changed more or less. This is why we use the term "snapshot" when we are examining a rapidly changing situation. Quantitative data on sizes of catchment areas and available beds agreed well internally within reports from each center, but such data may be differently defined in different healthcare systems. Strictly regionalized large centers in Nordic countries have very strictly defined populations to serve, while smaller neurosurgical centers in densely populated areas may have an overlap of catchment areas with other hospitals. Moreover, the normalization of data from 1 week in departments of different sizes is prone to flux and possible disproportionate influence from chance fluctuations in centers with small catchment areas, since their reported figures were multiplied while figures from the largest centers were divided.
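The normalization concern can be illustrated numerically (the catchment sizes and counts below are hypothetical, not taken from the survey): scaling weekly counts to cases per 1,000,000 multiplies figures from small centers, so a single chance admission moves their normalized value far more than it would in a large regional center.

```python
def per_million(count, catchment_population):
    # normalize a weekly count to cases per 1,000,000 catchment inhabitants
    return count * 1_000_000 / catchment_population

# one extra chance admission swings the normalized figure five times more
# in a 500,000-catchment center than in a 2,500,000-catchment center
small_swing = per_million(3, 500_000) - per_million(2, 500_000)
large_swing = per_million(13, 2_500_000) - per_million(12, 2_500_000)
print(small_swing, round(large_swing, 2))  # → 2.0 0.4
```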
Moreover, the assessment of medical needs is subjective and value laden. Assessment of surgical indications may vary between individual surgeons. We attempted to maximize reproducibility and representativity by inviting five senior surgeons from each center to fill out the questionnaires, compensating for individual idiosyncrasy; we therefore estimate that the responses represent local consensus.
We compared 1 week in December with 1 week in March. One can question whether a week during March in a different year without a pandemic would have been a better control. The choice of December was made to allow for the detection of the probable sudden sharp change in the care of neurosurgical patients because of the pandemic, which we considered would be better reflected by comparison with a week close in time. Moreover, one can question which measure of Covid-19 burden would best reflect an impact on practice. Practice is affected by a combination of expectations and the need to react to a real situation, while corona statistics reflect disease spread and need for intensive care with a delay. The number of Covid-19-related deaths can reflect several weeks of intensive care utilization, and SARS-CoV-2 can be transmitted several weeks before infection is diagnosed. Moreover, there is a delay before cases appear in statistics, and country statistics fail to reflect regional differences. For these reasons, any surrogate parameter of Covid-19 burden is diffuse. We chose population-adjusted national values of diagnosed cases and Covid-19-related deaths as a cumulative approximation of the perceived and actual pandemic challenges during the surveyed pandemic week.
Conclusions
We have conducted a rapid survey of changes in neurosurgical care during the Covid-19 pandemic. Rationing of neurosurgical care was common, and neurosurgical activity was decreased in 80% of responding centers. Yet, differences were unexpectedly large in available resources and adaptation to the pandemic. We need to further survey how attitudes to neurosurgical care in different settings affect our populations and which margins of resources might be needed to provide neurosurgical care according to professional ethics.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval All procedures performed were in accordance with the ethical standards of the institutional and national research committee (Swedish Ethical Review Authority) and with the 1964 Helsinki declaration and its later amendments. For this type of study, a questionnaire survey, formal consent is not required. Yet, informed consent was obtained from all individual responders to the questionnaire.
We study the field dependence of the entanglement of formation in anisotropic S=1/2 antiferromagnetic chains displaying a T=0 field-driven quantum phase transition. The analysis is carried out via Quantum Monte Carlo simulations. At zero temperature the entanglement estimators show abrupt changes at and around criticality, vanishing below the critical field, in correspondence with an exactly factorized state, and then immediately recovering a finite value upon passing through the quantum phase transition. At the quantum critical point, a deep minimum in the pairwise-to-global entanglement ratio shows that multi-spin entanglement is strongly enhanced; moreover this signature represents a novel way of detecting the quantum phase transition of the system, relying entirely on entanglement estimators.
Collective behavior in many-body quantum systems is associated with the development of classical correlations, as well as of correlations which cannot be accounted for in terms of classical physics, namely entanglement. Entanglement represents in essence the impossibility of giving a local description of a many-body quantum state. In particular, entanglement is expected to play an essential role at quantum phase transitions, where quantum fluctuations manifest themselves at all length scales. The behavior of entanglement at quantum phase transitions is a very recent topic, so far investigated in a few exactly solvable cases [1,2,3,4]. Moreover, entanglement overwhelmingly comes into play in quantum computation and communication theory, being the main physical resource needed for their specific tasks [5]. In this respect, the perspective of manipulating entanglement by tunable quantum many-body effects appears very intriguing.
In this letter we show that entanglement estimators give important insight into the physics of spin systems. In particular, we focus on two striking features of anisotropic spin chains in an external field: the occurrence of a factorized ground state at a field h_f and of a quantum phase transition at h_c. We propose a novel estimator to understand the role of quantum fluctuations in the quantum critical region.
We focus our attention on the 1D XYZ model in a field:

Ĥ = J Σ_i ( Ŝ^x_i Ŝ^x_{i+1} + Δ_y Ŝ^y_i Ŝ^y_{i+1} + Δ_z Ŝ^z_i Ŝ^z_{i+1} − h Ŝ^z_i ),    (1)

where J > 0 is the exchange coupling, i runs over the sites of the chain, and h ≡ gμ_B H/J is the reduced magnetic field. In Eq. (1) we have implicitly performed the canonical transformation Ŝ^{x,y}_i → (−1)^i Ŝ^{x,y}_i with respect to the more standard antiferromagnetic hamiltonian. The parameters Δ_y, Δ_z ≥ 0 control the anisotropy of the system. In particular, for Δ_z = 0 Eq. (1) reduces to the exactly solvable XY model in a transverse field [6]. For Δ_z ≠ 0 the model does not admit an exact solution [7], and it has been recently studied within approximate analytical and numerical approaches [8,9,10]. Interestingly, the general model with finite Δ_z is experimentally realized by the S = 1/2 quantum spin chain Cs_2CoCl_4 [11], with strong planar XZ anisotropy, Δ_y ≈ 0.25, Δ_z ≈ 1, and J ≈ 0.23 meV.
In our study, we concentrate on the case 0 ≤ ∆ y ≤ 1, ∆ z = 1, defining the XYX model in a field [12]. The more general case of the XYZ model in a magnetic field with ∆ z < 1 should exhibit the same qualitative behavior, as it shares the same symmetries with the XYX model in a field, and therefore it belongs to the same universality class. The analysis is carried out via Stochastic Series Expansion (SSE) Quantum Monte Carlo (QMC) simulations, based on a modified directed-loop algorithm [13,14], for chains of various length, from L = 40 to L = 120. Ground state properties have been determined by considering inverse temperatures β = 2L, in order to capture the T = 0 behavior.
The ground-state phase diagram of the XYX model in the Δ_y − h plane is shown in Fig. 1. The model displays a field-driven quantum phase transition at a critical field h_c(Δ_y), which separates the Néel-ordered phase (h ≤ h_c) from a disordered phase (h > h_c) with short-range antiferromagnetic correlations [7,8,10]. The transition line h_c(Δ_y) has been determined by a scaling analysis of the correlation length ξ_xx, whose linear scaling ξ_xx ∼ L marks the quantum critical point. Using the critical scaling of the structure factor S^{xx}(q = 0) ∼ L^{γ/ν − z}, we verified that the transition belongs to the 1D transverse-field Ising universality class (γ/ν = 7/4 and z = 1), in agreement with analytical predictions [8].
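The scaling check S^{xx}(q = 0) ∼ L^{γ/ν − z} amounts to extracting a slope on a log-log plot over the simulated chain lengths. A minimal sketch, using synthetic data (not QMC results) that obeys the Ising value γ/ν − z = 7/4 − 1 = 3/4:

```python
from math import log

def fitted_exponent(sizes, values):
    # least-squares slope of log(values) vs log(sizes)
    xs = [log(s) for s in sizes]
    ys = [log(v) for v in values]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

sizes = [40, 60, 80, 100, 120]          # chain lengths quoted in the text
synthetic = [L ** 0.75 for L in sizes]  # fabricated data obeying L^(3/4)
print(round(fitted_exponent(sizes, synthetic), 6))  # → 0.75
```

With real QMC data the fitted slope at h = h_c would be compared against 3/4, and deviations at other fields help bracket the critical point.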
Besides its quantum critical behavior, a striking feature of the model of Eq. (1) is the occurrence of an exactly factorized ground state |Ψ⟩ = ⊗_i |ψ_i⟩ for a field h_f(Δ_y) lower than the critical field h_c, as predicted in Ref. [7]. In the case of the XYX model, this factorizing field is [7] h_f = √(2(1 + Δ_y)), where the single-spin states |ψ_i⟩ are eigenstates of (n_{1(2)} · Ŝ), with n_{1(2)} being the local spin orientation on sublattice 1 (2). Taking n = (cos φ sin θ, sin φ sin θ, cos θ), one obtains [7] φ_1 = 0, φ_2 = π, θ_1 = θ_2 = cos⁻¹ [(1 + Δ_y)/2]^{1/2}. The factorized state of the anisotropic model continuously connects with the fully polarized state of the isotropic model in a field for Δ_y = 1 and h = 2.
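The factorizing field and the canting angle reduce to closed-form one-liners. The sketch below reproduces the values verifiable from the text, h_f(0.25) = 1.5811 and h_f(1) = 2 with θ → 0 at the isotropic point; the precise form of the angle is our reading of Ref. [7] and should be treated as an assumption:

```python
from math import acos, sqrt

def factorizing_field(delta_y):
    # h_f = sqrt(2 (1 + Delta_y)) for the XYX chain (Delta_z = 1)
    return sqrt(2.0 * (1.0 + delta_y))

def canting_angle(delta_y):
    # theta = arccos sqrt((1 + Delta_y)/2) -- assumed form, after Ref. [7];
    # theta -> 0 at the isotropic point Delta_y = 1 (fully polarized state)
    return acos(sqrt((1.0 + delta_y) / 2.0))

print(round(factorizing_field(0.25), 4))  # → 1.5811
print(factorizing_field(1.0))             # → 2.0
```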
Despite its exceptional character, the occurrence of a factorized state is not marked by any particular anomaly in the experimentally measurable thermodynamic quantities shown in the lower panel of Fig. 1. However, we will now see how entanglement estimators are able to pin down a factorized state with high accuracy.
To estimate the entanglement of formation [15] in the quantum spin chain of Eq. (1) we make use of the one-tangle and of the concurrence. The one-tangle [16,17] quantifies the T = 0 entanglement of a single spin with the rest of the system. It is defined as τ_1 = 4 det ρ^(1), where ρ^(1) = 𝟙/2 + Σ_α M^α σ^α is the one-site reduced density matrix, M^α = ⟨Ŝ^α_i⟩ (with M^x estimated as |⟨Ŝ^x_i Ŝ^x_{i+L/2}⟩|^{1/2} in the QMC on a chain of length L), σ^α are the Pauli matrices, and α = x, y, z. In terms of the spin expectation values M^α, τ_1 takes the simple form: τ_1 = 1 − 4[(M^x)² + (M^y)² + (M^z)²]. It can be easily shown that the vanishing of τ_1 implies a factorized ground state, and viceversa. The concurrence [18] quantifies instead the pairwise entanglement between two spins at sites i, j, both at zero and finite temperature. For the model of interest, in absence of spontaneous symmetry breaking (M^x = 0) the concurrence takes the form [17] C_ij = 2 max{0, C′_ij, C″_ij}, where C′_ij = g^{xx}_ij + g^{yy}_ij − √((1/4 + g^{zz}_ij)² − (M^z)²) and C″_ij = |g^{xx}_ij − g^{yy}_ij| + g^{zz}_ij − 1/4, with g^{αα}_ij = ⟨Ŝ^α_i Ŝ^α_j⟩. The T = 0 QMC results for the model Eq. (1) with Δ_y = 0.25 are shown in Fig. 2, where we plot τ_1, the sum of squared concurrences, and the field and space dependence of the concurrence.
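Given QMC estimates of the magnetizations and correlators, both estimators are simple arithmetic. A sketch with sanity checks on two limiting states; the expressions follow Refs. [16-18] as reconstructed here and should be treated as an assumption, with correlators written in the frame of Eq. (1):

```python
from math import sqrt

def one_tangle(mx, my, mz):
    # tau_1 = 1 - 4 (Mx^2 + My^2 + Mz^2)
    return 1.0 - 4.0 * (mx ** 2 + my ** 2 + mz ** 2)

def concurrence(gxx, gyy, gzz, mz):
    # C_ij = 2 max{0, C', C''} from the two-site correlators g^aa_ij
    c1 = gxx + gyy - sqrt((0.25 + gzz) ** 2 - mz ** 2)
    c2 = abs(gxx - gyy) + gzz - 0.25
    return 2.0 * max(0.0, c1, c2)

# sanity checks: a fully polarized product state carries no entanglement,
# while a singlet pair (correlators written after the canonical
# transformation of Eq. (1)) is maximally entangled
print(one_tangle(0.0, 0.0, 0.5))            # → 0.0
print(concurrence(0.25, 0.25, -0.25, 0.0))  # → 1.0
```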
The following discussion, although directly referred to the results for ∆ y = 0.25, is actually quite general and applies to all the other studied values of ∆ y . Unlike the standard magnetic observables plotted in Fig. 1, the entanglement estimators display a marked anomaly at the factorizing field, where they clearly vanish as expected for a factorized state. When the field is increased above h f , the ground-state entanglement has a steep recovery, accompanied by the quantum phase transition at h c > h f . For ∆ y = 0.25, e.g., h c = 1.605(3) and h f = 1.5811. The system realizes therefore an interesting entanglement switch effect controlled by the magnetic field.
As for the concurrence, Fig. 2(b) shows that its range is always finite at and around the critical point, and it never extends farther than the fourth neighbor. Moreover, the factorizing field divides two field regions in which different components, C′_ij or C″_ij, determine the concurrence, whereas C_ij = 0 at h = h_f. In the presence of spontaneous symmetry breaking, occurring for h < h_c, the expression of the concurrence is generally expected to change with respect to Eqs. (4),(5), as extensively discussed in Ref. [19]. For the model under investigation, this is the case when a certain condition on the correlators g^αα_ij is satisfied, i.e. for h > h_f. This means that our estimated concurrence is accurate even in the ordered phase above the factorizing field; in the region 0 < h < h_f it represents instead a lower bound to the actual T = 0 concurrence. Alternatively, it can be regarded as the concurrence for infinitesimally small but finite temperature.
In Fig. 2 the sum of squared concurrences τ 2 is always smaller than or equal to the one-tangle τ 1 , in agreement with the Coffman-Kundu-Wootters conjecture [16]. This shows that entanglement is only partially stored in twospin correlations, and it is present also at the level of three-spin entanglement, four-spin entanglement, etc. In particular, we interpret the entanglement ratio R = τ 2 /τ 1 as a measure of the fraction of the total entanglement stored in pairwise correlations. This ratio is plotted as a function of the field in Fig. 3. As the field increases, we observe the general trend of pairwise entanglement saturating the whole entanglement content of the system. But a striking anomaly occurs at the quantum critical field h c , where R displays a very narrow dip. According to our interpretation, this result shows that the weight of pairwise entanglement decreases dramatically at the quantum critical point in favour of multi-spin entanglement. Unlike classical correlations, entanglement shows the special property of monogamy [16], namely full entanglement between two partners implies the absence of entanglement with the rest of the system. Therefore multispin entanglement appears as the only possible quantum counterpart to long-range spin-spin correlations occurring at a quantum phase transition. This also explains the somewhat puzzling result that the concurrence remains short-ranged at a quantum phase transition ( Fig. 2(b)) while the spin-spin correlators become long-ranged, and it evidences the serious limitations of concurrence as an estimate of entanglement at a quantum critical point. Strong indications on the relevance of multi-spin entanglement in quantum-critical spin chains come also from the study of the entanglement between a block of L contiguous spins and the rest of the chain [3]. 
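The ratio R = τ_2/τ_1 described above is straightforward to compute once τ_1 and the concurrences C_{i,i+r} are known; the CKW monogamy bound guarantees R ≤ 1. A hypothetical helper (the function name and interface are ours, not the paper's):

```python
def entanglement_ratio(tau1, concurrences, tol=1e-12):
    # tau_2 = sum over neighbors r of C_{i,i+r}^2; the CKW conjecture
    # states tau_2 <= tau_1, so R = tau_2 / tau_1 lies in [0, 1].
    tau2 = sum(c * c for c in concurrences)
    if tau2 > tau1 + tol:
        raise ValueError("monogamy bound violated: tau_2 > tau_1")
    return tau2 / tau1
```

A dip in R at h_c then signals that pairwise entanglement accounts for a smaller fraction of τ_1, the remainder residing in multi-spin entanglement.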
Finally, multi-spin entanglement involves a macroscopically coherent superposition of quantum states, and this result is consistent with the picture of macroscopic (i.e. longwavelength) quantum fluctuations occurring at a quantum phase transition.
In turn, we propose the minimum of the entanglement ratio R as a novel estimator of the quantum critical point, fully based on entanglement quantifiers. This result appears general for the whole class of models described by the hamiltonian of Eq. (1). Inset (b) of Fig. 3 shows in fact that an analogous dip in the entanglement ratio signals the quantum phase transition in the Ising model in a transverse field (∆ y = ∆ z = 0), occurring at the critical field h c = 1/2. Work is in progress to test the universality of such novel signature of quantum critical behavior for completely independent quantum phase transitions.
The use of the QMC method enables us to naturally monitor the fate of entanglement when the temperature is raised above zero. In this regime the concurrence is the only well-defined estimator of entanglement, whereas the one-tangle has not yet received a finite-temperature generalization. Fig. 4(a) shows the nearest-neighbor (n.n.) concurrence as a function of the field for different temperatures in the XYX model with ∆_y = 0. We observe that C_{i,i+n} < C_{i,i+1} for n > 1, and at high enough temperature (T ≳ 0.1J) only the n.n. concurrence survives. The most prominent feature is the persistence of a field value (or an interval of values) at which the concurrence is either zero (for T ≳ 0.05J) or ∼ 10^−3 (for 0 < T ≲ 0.05J). In particular, the field values for which the concurrence vanishes are temperature-dependent, so that two-spin entanglement can be switched on and off by tuning both the field and the temperature. Fig. 4(b) shows a highly non-trivial temperature dependence of the two-spin entanglement at h = h_f = √2. Although vanishing at T = 0, the n.n. concurrence has a quick thermal activation due to thermal mixing of the factorized ground state with entangled excited states. Although the spectrum over the ground state displays a gap of order 0.1J [8], in one dimension strong fluctuations induce thermal entanglement [20] already at temperatures an order of magnitude lower. The appearance of thermal entanglement is directly related to the increasing behavior of the correlators g^yy = g^yy_{i,i+1} and g^zz = g^zz_{i,i+1} entering the expression of the C^(1) component, Eq. (4) (also shown in Fig. 4(b)). In particular, the appearance of a finite g^yy is a purely quantum effect, since ∆_y = 0. Because of the non-monotonic behavior of the correlators, at higher temperatures thermal entanglement disappears and reappears again, revealing an intermediate temperature region where two-spin entanglement is absent.
In summary, making use of efficient QMC techniques we have provided a comprehensive picture of the entanglement properties in a class of anisotropic spin chains of relevance to experimental compounds. We have shown that the occurrence of a classical factorized state in these systems is remarkably singled out by entanglement estimators, unlike the more conventional magnetic observables. Moreover, we find that entanglement estimators are able to detect the quantum critical point, marked by a narrow dip in the pairwise-to-global entanglement ratio. Therefore, we have shown that entanglement estimators provide precious insight into the ground-state properties of lattice S = 1/2 spin systems. Thanks to the versatility of QMC, the same approach can be used for higher-dimensional systems. In this respect, investigations of the occurrence of factorized states in more than one dimension are currently in progress. Finally, the proximity of a quantum critical point to the factorized state of the system gives rise to an interesting field-driven entanglement-switch effect. This demonstrates that many-body effects driven by a macroscopic field are a powerful tool for the control of microscopic entanglement in a multi-qubit system, and stand as a profitable resource for quantum computing devices. | 2018-04-03T04:44:32.784Z | 2004-04-16T00:00:00.000 | {
"year": 2004,
"sha1": "dae4c1ae91b7527ce00cdd01fd7ae06c793e8cf5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0404403",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7fedd506380a0baab3ae88074f8da384d6d357ae",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
267594868 | pes2o/s2orc | v3-fos-license | Preventive and therapeutic effects of the peel powder of P. granatum in a rat sepsis model
Introduction
Sepsis is a disease with a high incidence and mortality, resulting in death as a result of immune response disorder and circulatory and/or organ dysfunction due to infection. The original source of sepsis is an infection, and the path from infection to sepsis is a complicated pathophysiological process that includes invasion by pathogens, cytokine release, capillary leakage, and microcirculation disorder (KAUKONEN et al., 2014). Sepsis is the primary cause of death from infection, especially if not recognized and treated promptly. Diagnosis is therefore important and urgent. Sepsis is a syndrome that is shaped by pathogen factors and host factors (sex, race and other genetic determinants, age, and environment), and its features develop over time. What distinguishes sepsis from infection is the presence of an abnormal or irregular host response and organ dysfunction. Sepsis-induced organ dysfunction may be occult; therefore, its presence should be considered in every patient presenting with an infection (SINGER et al., 2016). The underlying mechanism of tissue and organ dysfunction in sepsis is decreased oxygen delivery and oxygen use in cells as a result of hypoperfusion. Hypoperfusion occurs due to the cardiovascular dysfunction seen in sepsis. Impairment of the barrier function in the endothelium, vasodilation and increased leukocyte adhesion occur. This causes edema fluid to accumulate in the interstitial spaces, body cavities, and subcutaneous tissue (POELAERT et al., 1997; JONES and PUSKARICH, 2011; VIEILLARD-BARON, 2011). There is disruption of the alveolar-endothelial barrier, with an accumulation of protein-rich fluid in the lungs, interstitial lung spaces and alveoli. In extreme cases, this can cause a ventilation-perfusion mismatch, hypoxia and decreased lung compliance, resulting in acute respiratory distress syndrome (ARDS). Decreased renal perfusion, acute tubular necrosis and varying degrees of acute kidney injury occur in the kidneys. In the liver, the clearance of bilirubin is suppressed, which produces cholestasis. Endothelial changes weaken the blood-brain barrier, allowing the entry of toxins, inflammatory cells, and cytokines. Cerebral edema, neurotransmitter disruption, oxidative stress and subsequent white matter damage lead to a clinical spectrum of septic encephalopathy, ranging from mild confusion to delirium and coma. Sepsis is known to produce a catabolic state. Rapid and significant muscle breakdown occurs to provide amino acids for gluconeogenesis to nourish immune cells. In addition, increasing insulin resistance causes a state of hyperglycemia (SINGER et al., 2016). It is reported that more than 30 million people worldwide are affected by sepsis every year, and 6 million people die. As a result, Chinese emergency medicine specialists introduced the concept of "prevention and prevention" of sepsis, and conducted the "Campaign for Sepsis Prevention in China" (PSCC) throughout China. In addition, they put forward the principles of performing targeted diagnosis, examination and treatment in the "early stage of sepsis" and "peri-sepsis period" in order to realize early prevention, early discovery and early intervention, and reduce morbidity. Research on the prevention of sepsis-related deaths, and thus the diagnosis and treatment of patients with acute severe infection, is important (LEVY et al., 2012).
Recently, natural resources have been investigated for the treatment of many diseases. Punica granatum L., a member of the Punicaceae family, is one of the oldest edible fruits. Punica granatum is known to possess pharmacological properties, such as antioxidant, radical-scavenging and anticancer activities (LERMAN et al., 2005). Previous studies also demonstrated that various parts of P. granatum show anti-oxidant, anti-bacterial, antidiarrheal, anti-viral, anti-diabetic, anthelmintic, hypolipidemic, hepatoprotective and anti-neoplastic activity, as well as protective activity for the vascular and digestive systems (MIGUEL et al., 2010). Although P. granatum has been used against various diseases, there are no studies on the anti-microbial and histopathological effects of the peel powder of P. granatum. Therefore, the aim of this study was to investigate the potential anti-microbial and histopathological effects of P. granatum peel powder on cecal ligation and puncture-induced (CLP) sepsis in rats.
Materials and methods
Preparation of P. granatum peel powder. The extraction procedure was conducted according to our previously published data (STOJANOVIĆ et al., 2017). P. granatum peel powder (100 g) was extracted with 50% ethanol in an ultrasonic bath for 40 min at 60 ºC. The obtained extract was filtered and evaporated to dryness using a rotary evaporator.
Animals. Twenty-four healthy, 10-week-old male Wistar Albino rats were used in this experiment. The rats were housed in polysulfone cages at 21-24 °C and 40-45% humidity, under light-controlled (12 h light/12 h dark) conditions, at the Laboratory Animal Breeding and Experimental Research Center of the Etlik Central Veterinary Control and Research Institute (Ankara, Turkey). The animals were fed a standard pellet diet and water ad libitum throughout the experimental procedure. The rats were maintained in accordance with the Guide for the Care and Use of Laboratory Animals. All experimental protocols were approved by the Experimental Animal Ethics Committee of the Etlik Central Veterinary Control and Research Institute (EDAM/2020-4). After acclimation for one week, all the rats were randomly divided into four groups of six rats each, as follows: Sham-operated (S) Group, Control (C) Group, Treatment-1 (T1) Group, and Treatment-2 (T2) Group.
Induction of the rat sepsis model. Sepsis was induced by cecal ligation and puncture (CLP) as previously described (HU et al., 2019). The rats were anesthetized intraperitoneally with xylazine hydrochloride (10 mg/kg) and ketamine hydrochloride (50 mg/kg). The abdomen was shaved after anesthesia, and the rats were placed in the supine position. After routine disinfection of the abdomen, a 3-cm midline vertical incision was made. The subcutaneous and muscle layers were separated, and the abdominal cavity was opened. The cecum was exposed, and the ileocecal region was ligated with USP 4/0 polyglactin (Lactasorb PGLA, Orhan Boz, Turkey). The cecum was perforated with an 18-gauge needle and gently squeezed to extrude a small amount of feces, then placed back into the abdominal cavity. The abdominal muscle layers and skin were closed with USP 4/0 polyglactin (Lactasorb PGLA, Orhan Boz, Turkey). In the sham-operated group, the same procedures were performed, but cecal ligation and puncture were not applied.
The treatment procedure. The 50% ethanol extract of Punica granatum peel powder (200 mg/kg; FADDLADDEEN and OJAIMI, 2019) prepared in distilled water was administered per os in a 2 mL volume one hour before (group T1) and 10 hours after (group T2) the operation. The sham-operated and control groups were given 2 mL of distilled water by oral gavage.
Termination of the experimental procedure. Seventy-two hours after the treatment procedure, all the rats were sacrificed by taking blood from the heart under general anesthesia (10 mg/kg xylazine hydrochloride and 50 mg/kg ketamine hydrochloride). Blood samples were collected by cardiac puncture for bacterial culture analysis. The liver, lungs, heart, kidneys, spleen, and pancreas were dissected. The blood samples were collected in heparinized tubes for bacterial culture analysis and plated on nutrient agar. In addition, blood agar, MSA agar, and EMB agar were used to identify the isolated bacteria.
Histopathological evaluations. The liver, lungs, heart, kidneys, spleen, and pancreas were sampled from the rats in all groups (n = 6 for each group). These organs were examined according to macroscopic evaluation criteria, and all tissue samples were fixed in 10% buffered formalin for 48 hours. After fixation, the tissues were processed through graded alcohol and xylene series (Leica TP1020, Germany) and embedded in paraffin (Thermo Shandon, Germany). Sections of 5 µm thickness were cut with a rotary microtome (Shandon). Sections from the paraffin blocks were stained according to the hematoxylin-eosin (H&E) staining procedure (LUNA, 1968), evaluated under a digital optical light microscope, and images were taken with a camera attachment (Olympus BX51 digital microscope, DP25, Japan). For scoring histopathological findings, a count was obtained over 10 fields at 400x magnification (10 HPF). The counted fields were calculated as proportions and expressed as percentages (%). According to the density of findings (inflammation, vascular changes, degeneration and necrosis), the mean results per animal in each group were calculated, and the total mean results were then analyzed statistically.
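The scoring procedure described above (counts over 10 high-power fields converted to percentages, then averaged per group) can be sketched as follows; the function names and example values are illustrative, not the study's data:

```python
from statistics import mean

def lesion_percentage(positive_fields, total_fields=10):
    # Fraction of counted high-power fields (HPF) showing a given finding,
    # expressed as a percentage.
    return 100.0 * positive_fields / total_fields

def group_mean_score(positive_fields_per_animal, total_fields=10):
    # Mean lesion percentage over the animals of one experimental group.
    return mean(lesion_percentage(p, total_fields)
                for p in positive_fields_per_animal)
```

For instance, a group of six animals showing a finding in 5, 5, 10, 0, 5 and 5 of their 10 fields would receive a mean score of 50%.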
Statistical analysis. Statistical analyses were performed using GraphPad Prism 8.4.2. The results are expressed as the mean ± standard error of the mean (SEM). The two-way analysis of variance test and post-hoc Bonferroni multiple comparison test were used to determine the significance of differences between groups. Statistical significance was assumed at the level of P < 0.05.
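The descriptive statistic reported (mean ± SEM) and the per-comparison threshold implied by a Bonferroni correction can be sketched with the standard library alone; this is an illustrative version, not the GraphPad Prism procedure itself:

```python
import math
from statistics import mean, stdev

def mean_sem(values):
    # SEM = sample standard deviation / sqrt(n)
    n = len(values)
    return mean(values), stdev(values) / math.sqrt(n)

def bonferroni_alpha(alpha=0.05, n_comparisons=1):
    # Per-comparison significance threshold after Bonferroni correction.
    return alpha / n_comparisons
```

With six pairwise group comparisons, for example, each comparison would be tested against alpha/6 rather than 0.05.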
Results
Survival rate. The survival rate was 83.33% (5/6 rats) in group T2, whereas the survival rate was 100% for the other groups. There was no statistically significant difference in survival between the P. granatum-treated CLP groups and the distilled water-treated CLP group.
Blood bacterial culture. After cecal ligation and puncture were performed, sepsis occurred due to fecal spillage in this model. The blood culture results are given in Table 1. At the end of the experimental procedure, a bacterium (E. coli) was isolated in only one case from group S. Bacterial colonies were detected in all cases in group C: E. coli and Staphylococcus aureus were identified in four cases, and E. coli alone in two cases. Bacteria (S. aureus and E. coli + S. aureus) were determined in fewer cases (n = 2) in group T1 when compared with groups C and T2. In group T2, S. aureus and E. coli + S. aureus were isolated in two cases each, but there were no bacteria in two cases. Histopathological findings. Inflammation, degeneration, necrosis, and vascular changes were the main lesions in the organs mentioned. The inflammatory cells were mainly composed of neutrophils and macrophages. In degenerative changes, cells lost their nuclei and their cytoplasm shrank, staining a dark pink color. Cells that underwent degenerative changes had vacuoles with clearly pronounced edges. In some areas, degeneration of parenchymal cells increased, and necrotic areas were observed. In the necrotic areas, cellular borders were lost. Vascular changes were also prominent among the histopathological findings. Some vessels, including veins and arterioles, were enlarged with many erythrocytes. In some areas, hemorrhage also occurred. Some areas, especially in the lungs and liver, showed microhemorrhages and blood extravasation. Edema accompanied these dense vascular disturbances and was seen densely in the alveolar lumina of the lungs, the sinusoids and portal region of the liver, and the interstitium of the kidney.
In group C, only vasculature changes, consisting predominantly of mild hyperemia, were observed in each case. Acute cell swelling, vacuolar degeneration and necrosis in the hepatocytes of the liver, the alveolar epithelium of the lungs, and the cortical tubule epithelium of the kidneys were encountered in many areas in all cases. Degeneration and necrosis were found less often in the islet cells of the pancreatic glands. Parenchymal degeneration was seen densely in myocardiocytes. In the spleen, hyperplastic lymph follicles were present, some of which included free erythrocytes along with hyperemia. There were no inflammatory cells in any of the organs mentioned in this group. In group T1, the findings were localized in the same organs in many cases. Lesion distribution, in terms of degeneration and necrosis, was less than in the control group in these cases. Lymphoid follicles were hyperplastic in some areas. Islet cells were not affected by degeneration in any case in this group. Lesions were generally restricted to certain areas and were not widespread in every field. The histopathological findings of group T2 showed similarities with those of group T1. Likewise, the density of inflammatory cells differed little between groups T1 and T2. In group T2, these findings were observed in the same organs, while the number of affected cases was low. In the liver, less degeneration and fewer vasculature changes were found in four cases. In the kidney and spleen, milder lesions of the same appearance were found. In the lungs, mild vasculature changes, including hyperemia and edema, were found in two cases. In the heart, there were moderate changes in three cases. There was no inflammatory cell infiltration in this group, as in the previous groups. In group S, the findings were localized in the liver, lungs, heart, spleen, and kidneys in general. There was no inflammatory infiltration. Vascular changes and cellular alterations, including degeneration and necrosis, were common in every field of the organs in almost all cases. No lesions were found in the pancreas in group S, even though there were severe degenerative and necrotic changes in group C, and milder or less degeneration in groups T1 and T2 (Fig. 1). The histopathological scores of the liver, lungs, heart, kidneys, spleen, and pancreas are given in Tables 2, 3 and 4 and Fig. 2.
Discussion
In the management of sepsis, providing tissue oxygenation and perfusion and applying appropriate antimicrobial therapy against the causative organism are among the therapeutic goals. For this purpose, appropriate antibiotic use at the right time, fluid therapy, vasopressors and inotropes, airway support and oxygen, and cortisone are used in sepsis treatment (KEELEY et al., 2017). Although there are many treatment options used in sepsis cases, specific therapy targeting the sepsis mediators has not yet been proven effective (EVANS, 2018). Therefore, new drug candidate molecules, including promising natural products, have been investigated for the treatment of sepsis. The effectiveness of medicinal plants in the treatment of many diseases has been investigated for many years. The use of polyphenols in the treatment of inflammatory diseases has become increasingly important due to their anti-inflammatory effects. Phenolic compounds are generally found in the fruit, leaves, seeds, bark, and roots of plants (COLOMBO et al., 2013; MANSOURI et al., 2016). According to previous studies, many plants, such as Ferulago pauciradiata (KUTLU et al., 2020), Andrographis paniculata, Zingiber officinale, Curcuma longa, Piper nigrum, Syzygium aromaticum, Momordica charantia, and Centella asiatica (LIEW et al., 2020), are used for the treatment of sepsis due to their anti-inflammatory and antioxidant properties. The antibacterial, antioxidant, and anti-inflammatory effects of P. granatum have been revealed in previous studies (AVIRAM et al., 2004; LANSKY and NEWMAN, 2007; DE NIGRIS et al., 2007). Therefore, in the present study, we investigated the preventive and therapeutic effects of the peel powder of P. granatum in a rat sepsis model. The results showed that the peel powder of Punica granatum used in group T2 displayed beneficial effects in the rat sepsis model, considering the histopathological changes, when compared to the control and T1 groups.
The endotoxic model induced by lipopolysaccharide (LPS) mimics poisoning rather than infection. Cytokines peak early in the LPS model, whereas in the CLP model the pro-inflammatory response is delayed and continues over time. In the LPS model, mortality is thought to occur early, most likely due to the effects of the intense inflammatory response on the cardiovascular system (RUIZ et al., 2016). In the CLP model, mortality is delayed by multi-organ failure complicating the induced peritonitis. The widely used CLP model is currently considered the gold standard in experimental sepsis research because it mimics the nature and evolution of severe sepsis in humans (RUIZ et al., 2016). The timing of antibiotic administration is directly related to overall survival. When antibiotics are administered 12 hours after CLP, mice with IL-6 concentrations higher than 14,000 pg/mL have 0% survival, whereas administration of antibiotics at 6 hours to mice with similar IL-6 concentrations is reported to increase overall survival to 25% (LEWIS et al., 2016).
When choosing an experimental animal species for experimental sepsis models, it is important for the purposes of the study that the species is easily accessible and cost-effective. For these reasons, small experimental animals, such as mice, rats, and guinea pigs, are frequently used. These species are also used in survival studies and histopathological examinations (İSKİT, 2005). In the present study, the survival rate of the rats in group T1 was higher than that of the rats in group T2. It was considered that the administration of P. granatum L. peel powder before CLP may increase the survival rate in the treatment of sepsis.
A literature search showed that antimicrobial therapy is an essential factor in sepsis management. Therefore, bacterial identification should be performed during the treatment process (MONTRAVERS et al., 2009; VAITTINADA et al., 2020). In a study of 16 rats, blood cultures after CLP found E. coli in 88% (14/16), Enterococcus faecalis in 81% (13/16), and Enterobacter cloacae in 75% (12/16) (VAITTINADA et al., 2020). In the present CLP-induced study, the blood cultures (E. coli and S. aureus) were compatible with the common polymicrobial infections seen in humans with stercoral peritonitis. Bacteria were isolated in fewer animals in group T1 (n = 2) than in groups C (n = 6) and T2 (n = 4). It was thought that the administration of P. granatum L. peel powder before CLP may have a beneficial effect on blood bacterial elimination in sepsis.
In conclusion, the present study using a rat sepsis model induced by cecal ligation and puncture showed that inflammation, vascular changes, degeneration, and necrosis of visceral organs, especially the lungs, were caused by S. aureus and/or E. coli. It was also indicated that blood cultures could be used as a diagnostic marker in the pathogenesis of sepsis. In addition, pomegranate peel powder administration was determined to be effective in the treatment of sepsis through its antimicrobial and anti-inflammatory functions. Thus, ellagic acid may be an alternative therapeutic agent against sepsis.
Table 1 .
Blood culture results according to groups
Table 2 .
Histopathological score of organs in experimental groups
Table 3 .
Statistical correlation between groups by Two-way ANOVA
Table 4 .
Comparative group results according to histopathological findings in organs by post-hoc Bonferroni's multiple comparison test | 2024-02-11T16:02:12.539Z | 2023-12-30T00:00:00.000 | {
"year": 2023,
"sha1": "9944e4766f1311442cfd878b086fe2765ce377c1",
"oa_license": null,
"oa_url": "https://doi.org/10.24099/vet.arhiv.1973",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "33b1d24867cae7e9ddf3b7032a829dd5418da6c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
233947930 | pes2o/s2orc | v3-fos-license | Investigating Pre-Service Biology Teachers’ Diagnostic Competences: Relationships between Professional Knowledge, Diagnostic Activities, and Diagnostic Accuracy
Teachers’ diagnostic competences are essential with respect to student achievement, classroom assessment, and instructional quality. Important components of diagnostic competences are teachers’ professional knowledge including content knowledge (CK), pedagogical knowledge (PK), and pedagogical content knowledge (PCK), their diagnostic activities as a specification of situation-specific skills, and diagnostic accuracy. Accuracy is determined by comparing a teacher’s observation of classroom incidents with subject-specific challenges to be identified from scripted instructional situations. To approximate diagnostic situations close to real-life, the assessment of science teachers’ diagnostic competences requires a situated context that was provided through videotaped classroom situations in this study. We investigated the relationship between professional knowledge (PCK, CK, PK) of 186 pre-service biology teachers, their diagnostic activities, and diagnostic accuracy measured with the video-based assessment tool DiKoBi Assess. Results of path analyses utilizing Rasch measures showed that both PCK and PK were statistically significantly related to pre-service teachers’ diagnostic activities. Additionally, biology teachers’ PCK was positively related to diagnostic accuracy. Considering higher effect sizes of PCK compared to PK, the findings support previous findings indicating the importance of PCK, thus demonstrating its importance in the context of subject-specific diagnosis as well.
Introduction
In order to make efficient decisions during classroom instruction, teachers' abilities to identify and interpret relevant situations and events that influence student learning have been described as an important part of teachers' professional competence (e.g., [1,2]). Similar to the processes that a medical person performs (identifying and interpreting symptoms of a patient to decide on how to treat the patient best), the ability to assess classroom situations and events (e.g., how to implement an experiment in science instruction best, or how to deal with student misconceptions) in order to adapt instruction can be considered in the context of diagnostic competences [3,4]. Therefore, the consideration of diagnostic competences is a critical element in teacher education, which is significantly important with respect to student achievement, classroom assessment, and instructional quality [5][6][7]. Its importance is also stated in German standards of teacher education [8]. At this point, it is important to remark that in German research, diagnostic competences are used largely synonymous with assessment competences [4,9]. For reasons of consistency, we primarily use the term diagnostic competences instead of assessment competences throughout this article. The plural is used to indicate that there is no global construct of diagnostic competence but that its conceptualization depends on the study's specific focus [10,11].
Nonetheless, courses and practicums that promote diagnostic competences vary between universities [12,13]. Likewise, approaches to support diagnostic competences vary based on different diagnostic contexts (e.g., assessing learning outcomes, diagnosis of instructional tasks, monitoring the teaching and learning process) [14]. Additionally, results on the efficiency of particular programs are mostly scarce [12]. In order to effectively foster diagnostic competences and adapt university programs, we need to understand the components that constitute diagnostic competences and how these components are interrelated.
Diagnostic Competences as a Specification of Professional Competence
Until a few years ago, diagnostic competences were often studied in terms of the accuracy of teachers' judgment, with diagnostic accuracy referring to the difference between a teacher's judgment or a teachers' critical observation and more objective assessments of performance [15,16]. However, supporters of a broader understanding of competence criticized that diagnostic competences cannot be limited solely to measures of accuracy (e.g., [15,17]). According to Schrader [14], a broader understanding of diagnostic competence covers the entire process of diagnosing, including the ability to use appropriate strategies and methods of data collection and processing, as well as to interpret the obtained data properly. This understanding of the diagnostic process is also necessary to effectively help prospective teachers develop diagnostic competence [15]. Therefore, judgment accuracy can only be regarded as one component of diagnostic competences within the broader understanding of competence that takes situated contexts into account.
The refinement of additional components of diagnostic competences can be guided by research on the expert paradigm [18]. The shift within teacher professionalism research from a merely cognitive perspective to a situated perspective on professional competence is considered decisive in order to examine teacher competences as closely as possible to the demands of the real world [9,15,[19][20][21]. In an attempt to integrate both perspectives, Blömeke et al. [1] modeled professional competence on a continuum including personal dispositions such as cognition or affect-motivation, that underlie situation-specific skills, which again mediate between teachers' dispositions and their performance. A particular strength is the consideration of those situation-specific skills that teachers require to succeed in specific situations such as diagnosing. This approach offers the advantage of considering diagnostic competences more broadly, rather than limiting investigations to diagnostic abilities [5] or operationalizing diagnostic competences as accuracy of teachers' judgments only [15,16]. Therefore, in accordance with the competence as a continuum model, diagnostic competences can thus be understood as a latent trait including "those dispositions, situation-specific skills and performance that teachers need for diagnosis in the context of teaching and learning" [6] (p. 43).
Following this understanding, several components must be considered when examining diagnostic competences bringing together components studied in research on teachers' professional competence [1] and teachers' judgment accuracy [15,16]. First approaches to define diagnostic competences with regard to the competence as a continuum model mainly referred to teachers' professional knowledge and diagnostic skills such as diagnostic activities [4,6]. In that vein, Heitzmann et al. [4] defined diagnostic competences as "individual dispositions enabling people to apply their knowledge in diagnostic activities according to professional standards to collect and interpret data in order to make decisions of high quality." Therefore, three components of diagnostic competences are highlighted: (1) teachers' cognitive disposition and, in particular, their professional knowledge, (2) the application of diagnostic activities as a specification of situation-specific skills in the context of diagnosing, and (3) the need for a measure of diagnostic accuracy to check the agreement with professional standards (see Figure 1).
In the following sections, we describe the three components of diagnostic competences in more detail before referring to empirical findings concerning the relationship between these components.
Professional Knowledge
According to Heitzmann et al. [4], knowledge counts as a crucial prerequisite that enables teachers to execute diagnostic activities effectively. Teachers need to apply their knowledge in different diagnostic situations and recall what they know about effective teaching, diagnosing students' (mis)conceptions, and how to support students' learning progress [5,22]. Different facets describe teachers' professional knowledge based on Shulman's division into PK, CK, and PCK [23,24]. The three facets cover knowledge that teachers need for effective teaching. This comprises general pedagogical-psychological knowledge (PK), which is knowledge about classroom management and generic strategies and methods of teaching, learning, and assessment [24][25][26]; content knowledge (CK), which is knowledge about subject-specific facts, concepts, and methods [27]; and pedagogical content knowledge (PCK), which is knowledge about how to make a particular content accessible for a particular group of students, taking into account content-dependent (mis)conceptions of students and instructional strategies of the subject [20,23,28,29]. PK is assumed to be content-independent [30], and therefore, PK seems more relevant for diagnosing general characteristics such as classroom management (cf. [6]). Both CK and PCK are mainly considered subject- and content-specific, and thus, they are applied in subject-specific instructional situations [31,32]. As specific instructional quality features characterize effective science teaching, science teachers' subject-specific PCK counts as the knowledge facet with a high influence on instructional quality and student achievement [33][34][35]. PCK can therefore be considered a pivotal knowledge facet of teachers' diagnostic competences when the diagnostic focus is on subject-specific instructional aspects.
However, regarding the relationship between the three knowledge facets, researchers assume CK to be necessary but not sufficient for the development of PCK, while PK counts as an important precondition to applying CK and PCK in subject-specific instruction [30,31,36]. In order to measure teachers' knowledge facets in a standardized way, different methods such as paper-pencil assessments with multiple-choice or open-ended items, semi-structured interviews, or concept mapping have been used [29,[37][38][39].
Diagnostic Activities
When teachers assess specific instructional situations, they engage in situation-specific diagnostic processes. These processes require the execution of situation-specific skills that have been termed assessment skills [40], professional vision [41,42], or noticing [43] in the context of classroom assessments, and diagnostic skills [5,6] or diagnostic activities [4,17,44] considering the specific context of diagnosis. Diagnostic activities can be described as those activities teachers execute to evaluate data on, for example, learning conditions and prerequisites of learners in order to optimize the overall instructional process [4,44]. With regard to a specific diagnostic situation, different diagnostic activities may be relevant. Diagnostic activities may also vary regarding the weight attributed to each activity and the way these activities are performed [4,45]. This variability of possible activities makes adaptation to specific diagnostic contexts possible. Overall, eight diagnostic activities have been differentiated following scientific reasoning processes: problem identification, questioning, generating hypothesis, constructing artefacts, generating evidence, evaluating evidence, drawing conclusions, and communicating the process/results [4,45]. Descriptions of the eight diagnostic activities can be found in Table 1.

Table 1. Taxonomy of the diagnostic activities according to Heitzmann et al. [4] and Fischer et al. [45]. Note that not every diagnostic activity is appropriate for a given situation, and thus, the number and type of the executed diagnostic activities may vary.
Identifying problems: A noteworthy event that may influence student learning is noticed by the teacher.
Questioning: The teacher asks questions to find out more about the identified problematic incident or its cause.
Generating hypothesis: The teacher generates a hypothesis about possible sources of the identified problem.
Construct or redesign artefacts: The teacher creates content-specific tasks suitable for identifying underlying instructional problems or detecting students' misconceptions.
Generating evidence: Evidence is generated either by the use of a constructed test or a created task or through systematic observation and description of the problematic incident.
Evaluating evidence: The teacher assesses the generated evidence regarding its support for a claim or theory. He/she interprets the data, thus making sense of the generated evidence with regard to his/her beliefs, knowledge, and expertise (cf. [46]).
Drawing conclusions: As a result of evaluating evidence, the teacher predicts consequences regarding student learning or makes suggestions for alternative instructional strategies.
Communicating the process/results: The teacher communicates diagnostic results to colleagues, students, or parents.
A more analytical operationalization of situation-specific skills for assessing classroom situations is described in the concepts of "professional vision" and "teacher noticing" (e.g., [41,43]) (see also Section 3.2.2). Overall, situated approaches for the measurement of situation-specific skills such as diagnostic activities that science teachers apply in practices close to the assessment of classroom instruction include, for example, classroom observations, reflection on lesson plans, responding to students' ideas, or video-based analyses [47][48][49]. All these approaches are based on evidence collected in the specific context in which the skills are applied.
Diagnostic Accuracy
Diagnostic competences are also reflected by the quality of the diagnosis, which can be operationalized by means of accuracy measures [4,17]. Accuracy has often been investigated in terms of judgment accuracy, which has been assessed at different levels, for example, at the student-related level by focusing on teachers' judgments about student achievement [15], or at the classroom level by assessing instructional features such as task demands [5]. Considering the student-level approach, researchers investigated correlations between teachers' judgments of student characteristics and students' outcomes in a standardized test, or the accuracy of the rank order of student performance according to competence levels as measures [17,40,50]. At the classroom level, judgments can be compared to specific standards of the domain (e.g., features of instructional quality) in order to consider the quality of the information basis and to obtain a measure of accuracy (cf. [15,51]). However, other measures of accuracy referred to the ability to apply situation-specific skills accurately. For example, researchers focused on perception accuracy, which describes "the precise observation of a professional situation" [2] (p. 373). Carter et al. [52] investigated perception accuracy in terms of immediate perception of science classroom environments of rapidly presented visual classroom stimuli on presentation slides. By limiting the time available for observing the situations, differences in perception accuracy between experts and novices, and thus in the participants' perceptual skill, were revealed. Therefore, accuracy measures should be considered as one component of diagnostic competences as well.
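To make the rank-order operationalization concrete: student-level judgment accuracy is often quantified as the rank correlation between a teacher's judged achievement and students' measured test performance. The following minimal Python sketch (with hypothetical scores, not data from any of the studies cited here) computes Spearman's rho for the no-ties case:

```python
def ranks(values):
    """Assign 1-based ranks (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(judged, measured):
    """Spearman rank correlation: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(judged)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(judged), ranks(measured)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical example: teacher judgments vs. standardized test scores
teacher_judgment = [3, 1, 4, 2, 5]       # judged achievement per student
test_score = [55, 40, 70, 48, 81]        # measured achievement
print(spearman_rho(teacher_judgment, test_score))  # 1.0: identical rank order
```

A rho near 1 indicates that the teacher ordered the students as the test did; values near 0 indicate no correspondence between judged and measured rank order.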
Empirical Evidence on the Relationships between the Components of Diagnostic Competences
Depending on the diagnostic focus (rather generic: e.g., classroom disruption [53]; or rather subject-specific: e.g., diagnosing biology instruction [54]), either PK or subject-specific facets such as CK or PCK may be relevant to the application of diagnostic activities and diagnostic accuracy (cf. [55]). Tolsdorf and Markic [12] studied different knowledge types (conditional, technological, knowledge of change, competence knowledge) relevant in the context of diagnosing in chemistry. However, they only studied to what extent pre-service teachers who were at different stages of their education differed in, for example, beliefs, attitudes, and knowledge about the importance of diagnosis in science, or knowledge about how to change learning material concerning the needs of the learners. Among other findings, the most positive attitudes toward diagnosis and the clearest ideas for changing learning materials were found among more experienced pre-service teachers, indicating the role of university teacher education. However, they did not explicitly investigate other components of diagnostic competence such as diagnostic activities. Furthermore, Tolsdorf and Markic [12] assumed the practical application of what the students had previously studied, together with practical experience, to be crucial for the change of diagnostic competence (defined here rather cognitively) in the course of pre-service teachers' studies. This points to the need to investigate diagnostic competences in situated approaches [21].
Situation-specificity of diagnostic competences was emphasized by Hoth et al. [6]. To study teachers' situation-based diagnostic competences, they investigated mathematics teachers' situation-specific skills in order to identify different perspectives of situation-based diagnoses. Furthermore, they examined interrelations between these diagnostic perspectives and teachers' knowledge. Results showed that a content-related perspective in the given classroom situation was related to high average mathematical CK. In addition, teachers using both a didactical and a content-related mathematical perspective had the highest average mathematics PCK scores. Furthermore, a higher general PK score was associated with a more pedagogical focus on classroom situations, with teachers attending to classroom management, organizational aspects, and other pedagogical aspects. Overall, their qualitative analyses suggested that "teachers with greater knowledge not only interpret classroom events more adequately but also knowingly focus their attention to the relevant aspects" [6] (p. 52). Similar results have been found by König et al. [25], who showed that interpreting general pedagogical classroom situations correlated with general pedagogical knowledge. Furthermore, higher values of mathematics teachers' PCK were positively connected with noticing relevant teaching and learning incidents, with "relevant" referring to the accurate diagnosing of aspects that matter in terms of students' learning in math [21]. Moreover, teachers with below-average PCK focused on rather superficial characteristics that were irrelevant to the diagnostic problem and thus not accurate. In addition, Blömeke et al. [56] emphasized that pre-service teachers who were prepared to teach both lower- and upper-secondary school (mathematics) had stronger prerequisites and more mathematics-related learning opportunities that resulted in a stronger cognitive base.
Furthermore, their cognitive base (including CK and PCK) was better connected to situation-specific skills. Comparable results on the relationship between knowledge and skills can also be found in research on professional vision (e.g., [38,57]).
Within research focusing on skills relevant during the diagnostic process, Wildgans-Lang et al. [17] showed that the quality of executed diagnostic activities was more important for diagnostic accuracy, and thus, for the quality of the overall diagnosis than the frequency of the diagnostic activities. Furthermore, they differentiated between diagnosing competence levels and diagnosing students' misconceptions and assessed the diagnostic accuracy with regard to both aspects. Results indicated the accurate diagnosis of misconceptions to be more challenging than the accurate diagnosis of competence levels. However, they did not analyze or link knowledge facets in their study but exposed the relation between diagnostic activities and professional knowledge as an important issue for further research.
In educational research, the correlates of diagnostic accuracy such as personal cognitive traits or professional knowledge facets are still not well enough studied [14,58,59]. Accuracy has mostly been considered in terms of comparisons of pre-service teachers' answers/observations with expert ratings/answers that served as a measure of correctness (e.g., [40,55,58]) but without explicitly investigating relationships to other components. In clinical research, content-specific knowledge counts as the basis for diagnosing clinical cases accurately, and accuracy is assumed to depend on skills relevant for correct interpretations [60,61]. Transferred to teacher education, this may imply that subject-specific knowledge facets (CK, PCK) are more relevant for diagnosing subject-specific cases accurately. Accuracy may rely on skills such as the elaborate execution of diagnostic activities that are assumed to be quite poor in pre-service teachers [52,62]. However, so far, only a weak correlation could be found for the relationship between the accurate rank order of tasks and student performance, and teachers' PCK (on text-image integration in biology, geography, and German) [50]. Studies focusing on teachers' adaptive teaching skills in terms of adequate planning and carrying out instruction found moderate correlations with teachers' accuracy of the rank order of student performance [63,64]. While some studies thus focused on student-related accuracy, there is, to our knowledge, a lack of studies on accuracy measures regarding the diagnosis of instructional features in science classrooms that improve the quality of science instruction, and thus, student achievement. Karst et al. [10] traced this back to the greater effort when measuring features of instructional quality that require, for example, the use of videography.
Overall, depending on the diagnostic focus, a correlation between teachers' situationspecific skills, their diagnostic accuracy, and corresponding knowledge facets can be assumed but has not been systematically investigated yet. The use of videotaped classroom situations may be one approach to examine the different components collectively and in a situated way.
Video-Based Assessment of Diagnostic Competences
For pre-service science teachers (PST), it might be challenging to succeed in diagnostic situations within the complex environment of a science classroom, as PST are less experienced and less skilled with regard to their situation-specific skills [41,62]. Video-based programs and instruments have been developed, in which videos were used in different ways: to reflect on teachers' own or other teachers' practice; to show best-practice training, in which teaching strategies can be observed and adapted for one's own teaching; or to promote situation-specific skills for the interpretation of important features of classroom interactions [42,65,66]. Researchers assume that, supported by the situated context, teachers' professional knowledge can be activated and necessary situation-specific skills can be applied [2,6,57]. The advantage of videos is that they approximate practice, reduce the complexity of the diagnostic situation, and thus can promote PSTs' competence development [67]. However, video-based instruments are also considered promising for assessing teachers' diagnostic competences within a situated context, and thus, for measuring their diagnostic activities and diagnostic accuracy [2,6]. Investigation and training of diagnostic competences within learning environments have often been student- or interaction-based [68]. Diagnostic tasks within video-based instruments referred mostly to the diagnosis of teacher-student interaction on mathematical content (e.g., [6,40,57]) or students' thinking (e.g., [37]), but diagnostic contexts regarding the assessment of the instructional behavior of the teacher with regard to features indicating instructional quality that impact learning have rarely been applied. Researchers, however, emphasized the relation between teacher knowledge and instruction.
Knowing about effective instructional strategies and being able to diagnose instructional situations and offer effective instructional alternatives is considered a crucial skill that was found to be a significant predictor of student learning [57,69]. A first approach can be found in Meschede et al. [38], who assessed pre-service science teachers' situation-specific skill of professional vision with regard to the two instructional aspects of cognitive activation and structuring learning situations. However, the authors claimed that future research needs to conceptualize and investigate assessment skills with respect to other aspects of instructional quality or in other content-specific domains. The study of diagnostic competences and their components with respect to further subject-specific features of instructional quality is still pending.
Summary
Overall, three observations can be made: First, investigating diagnostic competences should go beyond judgment accuracy and should also include components such as professional knowledge and diagnostic activities to fully grasp the construct. Second, previous analyses mostly referred to individual components of diagnostic competences, for example, teachers' situation-specific skills for making situation-based diagnoses [6]. Third, considering all components and their relationships is helpful to plan further studies in the future by building on existing relations. This aspect can be considered particularly worthwhile in relation to teacher education, in which diverse offers and courses for the development of competence are embedded without systematically addressing (aspects of) diagnostic competences [13]. Knowing about components that influence teachers' diagnostic competences and which can be modified in teacher education is therefore of great importance [70].
Aims and Hypotheses
As described in the previous sections, effective diagnoses, and thus, diagnostic competences, are considered to depend on the activation of appropriate knowledge facets that underlie the execution of diagnostic activities (cf. [6,38]). Since concrete empirical evidence regarding diagnostic activities, the inclusion of accuracy measures in situated approaches, as well as interrelations between the professional knowledge base, diagnostic activities, and diagnostic accuracy as components of diagnostic competences is still rare, we want to address these issues within a biological context in higher education, since the development of competence needs to start within university teacher education [12,13]. Thus, the present study makes an effort to measure the three components of pre-service biology teachers' diagnostic competences in order to investigate the relationship between them as a starting point for a programmatic approach to the investigation of pre-service biology teachers' diagnostic competences (cf. [71]). Therefore, we addressed the following research question: How do the different knowledge facets PCK, CK, and PK relate to diagnostic activities and diagnostic accuracy?
Considering the demand for practical approaches within the investigation of PST diagnostic competences [6,40], we investigated this question within a situated context using videotaped classroom situations showing whole-class biology instruction. A whole-class diagnostic focus is considered more complicated than diagnosing a particular student or a group of students [12] but reflects teachers' everyday-life practice. Therefore, it is necessary to understand how components of diagnostic competences relate to each other in order to develop or adapt university programs. The following hypotheses were derived from previous research:

Hypothesis 1. With regard to the assessment of science instruction (cf. [38]) and previous findings on the importance of content-related knowledge facets for subject-specific instructional quality [33,34], PCK can be considered as a pivotal knowledge facet of teachers' diagnostic competences when the diagnostic focus is on subject-specific instructional aspects. Therefore, we assume PCK to be strongly related to the application of diagnostic activities and diagnostic accuracy [6,21,35,38,55,57].

Hypothesis 2. CK was found to be a necessary condition for PCK development, but research about its relation to diagnostic activities or accuracy is scarce. Therefore, we assume CK to be correlated with PCK, while a connection between CK and diagnostic activities or diagnostic accuracy is not assumed (cf. [30,33,34,36]).
Hypothesis 3. Finally, a relationship between PK, diagnostic activities, and diagnostic accuracy should not exist, since the focus of the diagnosis lies on subject-specific instructional quality, and thus, not on general aspects of teaching and learning (cf. [6,41]).
Design and Sample
Data collection was embedded in a mandatory seminar attended by pre-service biology teachers at the beginning of their teacher education. Using the video-based assessment tool DiKoBi Assess (German acronym for diagnostic competences of biology teachers in biology classrooms) was compulsory for all PST. Still, participation in the study, and thus, releasing their data for analysis was voluntary. All participants signed informed consent documents stating an anonymous and voluntary participation.
The present study had a cross-sectional design with two points of data collection. According to Spector [71], cross-sectional designs are most useful to provide initial evidence of the extent to which variables "are related without introducing the complexities of temporal flows that might distort relationships" (p. 130). Since we were interested in understanding the relationships between the three components of diagnostic competences which we defined, the cross-sectional design provided a useful starting point within our research that will become more complex in subsequent studies.
First, we asked PST of the subject biology to complete three professional knowledge tests to measure PCK, CK, and PK. Second, we used the video-based assessment tool DiKoBi Assess to measure PSTs' diagnostic activities and diagnostic accuracy. This means the data set allowed the computation of five different PST measures (a PCK measure, a CK measure, a PK measure, a diagnostic activities measure, a diagnostic accuracy measure).
In DiKoBi Assess, PST had to use diagnostic activities to diagnose a biology teacher's subject-specific instruction within a real-life teaching situation in order to capture diagnostic competences in a manner as ecologically valid as possible (cf. [72]). The sample consisted of 186 PST of the subject biology (72.0% female; average study semester: M = 3.3, SD = 1.3; age in years: M = 23.0, SD = 3.8). Of the PST, 36.6% strove for a certification qualifying them for the academic track of German secondary education (German Gymnasium), and 63.4% attended programs for the non-academic track that qualifies students for a vocational career. For an overview of the German school system, see Cortina and Thames [73].
Professional Knowledge Tests
Professional knowledge was assessed through the use of paper-pencil tests that included open-ended items (responses were written in text fields), single best answer (SBA) items (one correct answer must be selected from a set of possible responses consisting of multiple distractors and one correct answer), or multiple true/false items (all of the possible responses must be assessed for their validity) [74].
The PCK-and CK-tests considered the biology topic skin as this was the same topic covered in the video-based assessment tool DiKoBi Assess. The tests were adapted versions of the professional knowledge tests, which have been utilized in the ProwiN project (Professional Knowledge of Teachers in Science) [75,76]. In the tests, aspects of PSTs' declarative and action-related knowledge were assessed. Declarative knowledge (knowing that) includes knowledge about terms, facts, and principles (e.g., listing the advantages and disadvantages of a specific model); action-related knowledge (knowing how, knowing when and why) is needed for successful instruction in different situations [77]. Knowing how is knowledge about an individual science teacher's (instructional) practices and processes (e.g., knowing how to deal with student ideas). Knowing when and why refers to "knowledge about conditions under which decisions and practices are appropriate and knowledge about reasons for performing specific practices" [77] (p. 6).
The PCK-test covered knowledge of instructional strategies and knowledge of student (mis)conceptions. Both issues count as important components of teachers' PCK [22,31]. Utilizing the model of Tepner et al. [78], eight open-ended items and five SBA items concerning three PCK dimensions were included in the test (see Table 2). The CK-test included 13 open-ended items and 15 SBA items. Topics that were covered with the items are shown in Table 3. Criteria for item scoring of both the PCK- and CK-tests were provided in two separate coding manuals. Two independent raters used the coding manuals to code ten percent of both the PCK- and CK-tests. Results of two-way random intra-class correlations (ICC_absolute) showed a high agreement between the two raters (PCK-test: ICC_absolute(310, 310) = 0.84, p < 0.001; CK-test: ICC_absolute(341, 341) = 0.97, p < 0.001) [79]. The knowledge facet PK was assessed by utilizing a paper-pencil test that was adapted from the BilWiss project [80,81]. The adapted version contained one out of six different dimensions originally used in BilWiss. For this study, the short scale of the dimension instruction was used because it contained items about generic features of instructional quality such as classroom management, supportive climate, and cognitive activation, which are referred to as basic dimensions of instructional quality [33,82,83]. The instrument also contained items concerning general pedagogical issues of teaching such as teaching methods (see Table 4). Therefore, the selected dimension of the BilWiss test was the most important one with regard to the differentiation between generic and subject-specific features of instructional quality, which was important for accurate diagnosis of the videotaped classroom situations. The PK-test contained five SBA items and ten multiple true/false items. The item scoring followed the instructions from BilWiss [80].
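For readers unfamiliar with the agreement statistic used here, the two-way random, absolute-agreement, single-measures intra-class correlation (ICC(2,1)) can be sketched from its two-way ANOVA decomposition. The following is a minimal Python illustration, not the authors' analysis script; the example data are the classic Shrout-Fleiss ratings (6 targets, 4 judges), not data from this study:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    data: list of rows, one row of k ratings per subject."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((v - grand) ** 2 for row in data for v in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Shrout & Fleiss (1979) example data: 6 targets rated by 4 judges
ratings = [[9, 2, 5, 8],
           [6, 1, 3, 2],
           [8, 4, 6, 8],
           [7, 1, 2, 6],
           [10, 5, 6, 9],
           [6, 2, 4, 7]]
print(round(icc2_1(ratings), 2))  # 0.29
```

Because absolute agreement penalizes systematic rater offsets (via the rater mean square), this ICC is lower than a consistency-type ICC would be for the same data.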
Data sets from the BilWiss project, in which the PK test from this study was developed and used, are publicly available on the IQB website [81]. The three tests were evaluated utilizing Rasch theory and Rasch analysis techniques [84,85]. The Rasch Partial Credit Model (PCM) was used utilizing the Winsteps program [86]. The use of Rasch allowed "person measures" to be computed for each respondent for each instrument. Therefore, the data collected from the PCK-, CK-, and PK-tests allowed PCK, CK, and PK Rasch measures to be computed for each respondent. Reasons for utilizing Rasch are that raw scores (be it from a multiple-choice test, a partial credit test, or a rating scale) are non-linear. Rasch allows one to take that non-linear data and compute Rasch person measures which are expressed on a linear scale [85,87]. It is those linear measures that are needed for parametric statistics. Additional reasons why Rasch analysis techniques should be used when test and survey data is evaluated include that Rasch methods (1) express items and respondents on the same measurement scale, (2) provide wide-ranging Rasch indices to evaluate the functioning of items, (3) allow the computation of the measurement error of each item and each respondent, (4) allow respondent measures to be computed even if data is missing, (5) correct for the non-linearity of raw test scores, and (6) enable alternate forms of an instrument to be developed (through linking items), such alternate forms enable respondent performance to be expressed on the same scale regardless of test form completed [85,87].
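The non-linearity argument above can be made concrete for the simplest (dichotomous) Rasch case: given known item difficulties, the maximum-likelihood person measure solves "expected score = raw score" on the logit scale, and equal raw-score steps map to unequal logit steps near the extremes. The sketch below (hypothetical difficulties; a simplified model, not the Partial Credit Model or the Winsteps implementation used in the study) illustrates this:

```python
import math

def estimate_theta(raw_score, difficulties, tol=1e-8):
    """Maximum-likelihood person measure (in logits) for a dichotomous Rasch
    model with known item difficulties. Solves sum_i p_i(theta) = raw_score
    by Newton-Raphson; defined for 0 < raw_score < number of items."""
    theta = 0.0
    for _ in range(100):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        expected = sum(ps)                      # model-expected raw score
        variance = sum(p * (1 - p) for p in ps) # derivative of expected score
        step = (raw_score - expected) / variance
        theta += step
        if abs(step) < tol:
            break
    return theta

# Hypothetical 5-item test with centered item difficulties (logits)
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
for score in range(1, 5):
    print(score, round(estimate_theta(score, difficulties), 2))
```

Printing the measures for raw scores 1 through 4 shows that the logit distance between scores 1 and 2 is larger than between scores 2 and 3: raw scores are not linear, which is why Rasch person measures rather than raw scores are used for the parametric statistics.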
One important component of an analysis utilizing Rasch methods is an assessment of the "fit" of items. To evaluate data fit, item Outfit-MNSQs (mean-squares) were utilized. Additionally, Rasch person reliability and Rasch item reliability were computed and evaluated. It has been argued that for a productive measurement, item Outfit-MNSQ values should not exceed 1.5 [88]. High values of item reliabilities demonstrate that both the range of item difficulty and the sample size are appropriate to measure the variables precisely. Person reliability is impacted by the length of the test and the range of abilities of respondents [86]. Item fit statistics of the knowledge tests showed good fit values in which all items exhibited an Outfit-MNSQ below 1.5 (PCK: 13 item outfit-MNSQ < 1.18; item reliability = 0.96; person reliability = 0.55; CK: 28 item outfit-MNSQ < 1.35; item reliability = 0.97; person reliability = 0.67; PK: 15 item outfit-MNSQ < 1.34; item reliability = 0.98; person reliability = 0.50).
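For dichotomous items, the Outfit mean-square used as the fit criterion above is the average squared standardized residual across respondents. The sketch below is a minimal illustration of that statistic; the response data and model probabilities are invented, not taken from the study.

```python
def outfit_mnsq(responses, probs):
    """Outfit MNSQ for one dichotomous item: mean of squared standardized
    residuals z_i^2 = (x_i - P_i)^2 / (P_i * (1 - P_i)), where x_i is the
    observed 0/1 response and P_i the Rasch-model success probability
    for person i."""
    z_sq = [(x - p) ** 2 / (p * (1 - p)) for x, p in zip(responses, probs)]
    return sum(z_sq) / len(z_sq)

# Invented data: model probabilities and two observed response patterns.
probs = [0.9, 0.7, 0.5, 0.3, 0.1]
well_fitting = [1, 1, 1, 0, 0]   # responses in line with the model
misfitting   = [0, 0, 1, 1, 1]   # many surprising responses

print(outfit_mnsq(well_fitting, probs))  # well below the 1.5 threshold
print(outfit_mnsq(misfitting, probs))    # far above 1.5, flags misfit
```

An Outfit MNSQ near 1.0 means residuals behave as the model predicts; values above 1.5 signal unmodeled noise, which is why the study screened all items against that threshold.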
Video-Based Assessment Tool DiKoBi Assess Measuring Diagnostic Activities
To measure PSTs' diagnostic activities, we used the video-based assessment tool DiKoBi Assess that was embedded in an online-survey platform [89]. DiKoBi was developed to provide diagnostic situations of real-world demands for biology teachers, in which subject-specific knowledge and skills can be applied to assess biology teaching [54]. Six staged videos were embedded in DiKoBi to address six different challenging situations that biology teachers have to confront when teaching biology (e.g., elaborate use of three-dimensional models). All of these six challenging situations addressed the biological topic "skin," each challenge concerning a different subject-specific dimension of instructional quality. The embedded dimensions and the features they include have been found to be important factors that impact student achievement [90]. These dimensions are (1) level of students' cognitive activities and creation of situational interest, (2) dealing with (specific) student ideas and errors, (3) use of technical language, (4) use of experiments, (5) use of models, (6) conceptual instruction.
In DiKoBi, every challenging situation to be diagnosed started with the video of the classroom situation. Afterward, PSTs had to complete three tasks. Each task required the execution of a diagnostic activity. Since not all diagnostic activities were found to be useful for diagnosing in DiKoBi, a selection of three activities that can be considered crucial in the context of video analyses was made, informed by conceptualizations reported in previous research [91]. Besides conceptualizing situation-specific skills as perception, interpretation, and decision-making, originating from the competence as a continuum model [1], other conceptualizations exist. These refer to reflective skills for video viewing or classroom observation and are discussed, for example, under the term professional vision, which includes several critical activities [42,92]. Within a teaching context, professional vision means the ability to notice (that is, paying attention to relevant events in the classroom) and to reason about relevant features of classroom interaction [41]. The reasoning process is knowledge-based and can be differentiated into three further activities: describing the situation without making judgments, explaining the situation by linking the observation to professional terms and concepts, and predicting possible consequences from the observed situation [41,43]. Considering the skills displayed in the competence as a continuum model and the aforementioned activities described within the concept of professional vision, four diagnostic activities (DA) can be considered crucial within diagnostic processes. First, science teachers have to identify noteworthy events in the science classroom (DA = problem identification), and systematically observe and describe the noteworthy events (DA = evidence generation).
Second, the teachers explain the situation by actively drawing on their declarative and action-related knowledge (e.g., by using professional terms and theories of teaching and learning that support the relevance of their observation) (DA = evidence evaluation). Third, they make decisions on how to continue with instruction or respond to students' activities, or they even have to propose alternative teaching strategies (DA = drawing conclusions) (cf. [2,93]). Since problem identification occurs rather invisibly in the participant's mind, we assumed problem identification to be indirectly included in the events described without explicitly measuring it. This assumption is based on the fact that in order to describe an incident, a teacher must show awareness of exactly this incident or problem that occurred [92].
Three tasks prompted the diagnostic process in DiKoBi. Each task focused on a specific diagnostic activity. For Task Describe (DA = evidence generation), the PST had to identify and describe challenging aspects of each classroom situation; for Task Explain (DA = evidence evaluation), the PST had to reason about their described challenges by linking their description to scientific theories and concepts; and for Task Alternative Strategy (DA = drawing conclusions), the PST had to propose an alternative teaching strategy and give reasons why this would improve instruction. For a more detailed description of the development and the design of DiKoBi, see Kramer et al. [54].
A coding manual was utilized to analyze the PSTs' answers to each task of the six challenging situations. The coding manual is based on subject-specific instructional quality features and corresponding indicators described in the science literature (e.g., using challenging tasks to foster conceptual understanding [83]). In empirical studies, a positive impact of these features on student learning has been found. Examples from the manual and corresponding references can be found in Kramer et al. [54]. For each task, the answers were coded according to the content-related knowledge facets (PCK, CK, PK), and the quality of the appropriate diagnostic activity was assessed. Zero (0) points were used to indicate very low (not accurate) answers. For correct answers of improved quality, 1 or 2 or 3 points could be utilized for a rating. Appendix A provides an overview of the procedure utilized for coding and the codes of the quality levels that have been used for coding the PSTs' answers.
Three independent raters used the coding manual to code statements of Task Describe, Task Explain, and Task Alternative Strategy to ensure objective coding of the answers. A total of 337 statements from 10 PSTs were coded by all three raters. Results of a two-way random intra-class correlation (ICC_absolute) analysis of these ratings suggested a high agreement between the three raters (ICC_absolute = 0.90, F(1520, 3040) = 10.26, p < 0.001, N = 1521) [79]. The small number of discrepancies in coding that were observed were discussed by all three raters prior to the rating of the remaining data by a single coder. Complex cases continued to be discussed together during the ongoing coding process.
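The two-way random absolute-agreement ICC reported here can be computed from the mean squares of a two-way ANOVA. The sketch below implements the single-rater form ICC(2,1) with numpy; the ratings matrix is invented, and the formula follows the standard Shrout-and-Fleiss parameterization rather than the exact software used in the study.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects x k_raters) array of scores."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Two-way ANOVA mean squares.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = (np.sum((x - grand) ** 2)
              - k * np.sum((row_means - grand) ** 2)
              - n * np.sum((col_means - grand) ** 2))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Invented ratings: 5 statements scored by 2 raters on a 0-3 scale.
ratings = [[0, 0], [1, 1], [2, 1], [3, 3], [2, 2]]
print(round(icc_2_1(ratings), 2))  # 0.92 for this invented example
```

Perfectly identical rater columns yield an ICC of 1.0; disagreement inflates the rater and error mean squares and pulls the coefficient down, which is what the absolute-agreement form is designed to penalize.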
After coding the different tasks and situations, the research team calculated Rasch person ability measures for each respondent, using Rasch techniques similar to those described previously for the PCK-, CK-, and PK-tests. The Rasch person measure that we computed expressed the level of each PST's ability to execute diagnostic activities accurately. Thus, the Rasch person measure provides an assessment of the PSTs' diagnostic level utilizing the data collected for evidence generation, evidence evaluation, and drawing conclusions for each of the six classroom situations. Fit statistics of the Rasch model showed productive measures (diagnostic activities: 29 item outfit-MNSQ < 1.43; item reliability = 0.95; person reliability = 0.76).
Calculating Diagnostic Accuracy
Accuracy was already taken into account in the scoring of the diagnostic activities: it was included as a quality criterion, since zero points were assigned if a preservice teacher's description referred to a not-accurate observation. However, for investigating the relationship between components of diagnostic competences, we established another, more explicit measure of accuracy, operationalized as perception accuracy, referring to the precise perception and description of biology instruction (cf. [52,94]). Diagnostic accuracy was therefore assessed by comparing a teacher's individual observations to the scripted challenges embedded in the videos. These scripted challenges and corresponding indicators of instructional quality features served as an objective criterion for comparison since they were derived from empirically proven features of effective teaching. In other words, we examined whether PSTs' descriptions of the challenging situations referred to the PCK-challenges that addressed missing features of subject-specific instructional quality. Descriptions that referred to superficial or general pedagogical observations not relevant for teaching and learning in the specific situation were not counted. To better understand the accuracy calculation, we will illustrate the procedure with an example: Given the classroom situation use of models, the two challenging instructional aspects elaborate model use and critical reflection were included as subject-specific features of instructional quality. Indicators addressing the lack of an elaborated model use were statements such as "The model is used for illustrative purposes only" or "The model is described incompletely." Indicators addressing the lack of critical reflection were statements such as "Teacher does not initiate critical reflection of the model" or "Teacher does not discuss the model with students."
PSTs could describe up to ten observations per classroom situation. Subsequently, each described observation was compared with the indicators of subject-specific instructional quality listed in the manual. If an observation matched an indicator, it was scored with one point; if not, zero points were given. Subsequently, all points (i.e., correct observations) for a challenging classroom situation such as use of models were added up and divided by the overall number of observations made in this classroom situation. For example, for a participant who described four observations in the classroom situation use of models, of which only two were indicators of biology-specific instructional quality and thus considered accurate, the calculated diagnostic accuracy of this classroom situation was 0.5. In the end, we calculated the average of the accuracy measures of the six classroom situations for the final measure of diagnostic accuracy.
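The accuracy score described above is a simple proportion per situation, averaged over the six situations. A minimal Python sketch of that two-step calculation (the observation data are invented; only the 0.5 example comes from the text):

```python
def situation_accuracy(observations):
    """Share of a PST's observations in one classroom situation that
    matched an indicator of subject-specific instructional quality.
    `observations` is a list of booleans (True = accurate)."""
    return sum(observations) / len(observations)

def diagnostic_accuracy(situations):
    """Final measure: average of the per-situation accuracy scores."""
    return sum(situation_accuracy(obs) for obs in situations) / len(situations)

# Worked example from the text: four observations in "use of models",
# only two of them accurate -> accuracy 0.5 for that situation.
use_of_models = [True, True, False, False]
print(situation_accuracy(use_of_models))  # 0.5
```

Note that the final measure weights each situation equally, regardless of how many observations a participant made in it.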
Data Analysis
For the knowledge variables PCK, CK, and PK, as well as for the variable diagnostic activities, we conducted Rasch analyses [84] using the software Winsteps 3.81 [86]. As mentioned, we computed Rasch person measures, which we utilized for our subsequent statistical analysis. To test our hypotheses, we used path analyses in AMOS 26 [95] with the equal-interval person abilities resulting from Rasch analysis of PCK, CK, and PK as predictor variables and the person abilities of diagnostic activities as well as the calculated diagnostic accuracy as outcome variables. The model was estimated with maximum likelihood. For model fit, we used the comparative fit index (CFI), the root-mean-square error of approximation (RMSEA), and the standardized root-mean-square residual (SRMR). Model fit was estimated by the guidelines of Hu and Bentler [96]: CFI > 0.90, RMSEA < 0.05, SRMR < 0.08. Results of the path analyses are shown as standardized values.
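Because the path model here is recursive and contains only observed variables, its standardized path coefficients coincide with standardized OLS regression weights for each outcome. The numpy sketch below illustrates that logic with invented data; it is not a substitute for AMOS's maximum-likelihood estimation or its fit indices.

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized regression weights and R^2 of y on the columns of X."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score predictors
    yz = (y - y.mean()) / y.std()              # z-score outcome
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2 = 1 - np.sum((yz - Xz @ beta) ** 2) / np.sum(yz ** 2)
    return beta, r2

# Invented person measures for three predictors (say PCK, CK, PK)
# and an outcome that is an exact linear combination of two of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
outcome = 2.0 * X[:, 0] - 1.0 * X[:, 2]

beta, r2 = standardized_betas(X, outcome)
print(beta.round(2), round(r2, 2))  # R^2 is 1.0 for an exact relation
```

With real, noisy data the R² drops well below 1 (the study reports R² = 0.17 for diagnostic activities), and dedicated SEM software is needed for standard errors and the CFI/RMSEA/SRMR fit indices.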
Results
An overview of all variables of the path model, including means, standard deviations, and correlations, is shown in Table 5. For the variables PCK, CK, PK, and diagnostic activities, we used person abilities from the PCM that take item difficulties into account [85]. Negative mean values for PCK, CK, and diagnostic activities indicate that the tests used were rather difficult for the PSTs in our sample. There was a moderate correlation between PCK and almost all other variables, indicating the great importance of PCK for diagnostic competences. Additionally, there was a strong correlation between diagnostic activities and diagnostic accuracy. This observation can partly be explained by the fact that accuracy was also considered when scoring the executed diagnostic activities (see Appendix A). For example, if a PST described an incident that has been found as non-problematic according to the coding manual, the PST's description was assessed as "not accurate." Additionally, there were small correlations of CK and PK with diagnostic activities and diagnostic accuracy. To investigate the relationship between the professional knowledge facets, diagnostic activities, and diagnostic accuracy, we calculated a path model. Figure 1 shows the model with standardized parameter estimates and levels of significance. The model demonstrated that PCK (β = 0.29, SE = 0.07, p < 0.001) and PK (β = 0.15, SE = 0.09, p = 0.031) were significantly related to PSTs' diagnostic activities; both PCK and PK together explained 17% of the variance of diagnostic activities (R² = 0.17). Furthermore, 12% of the variance of diagnostic accuracy was attributable to PCK (β = 0.23, SE = 0.02, p = 0.002). The model had no degrees of freedom, and its fit values were CFI = 1.000, RMSEA = 0.315, SRMR = 0.000. The rejection of the model by RMSEA could be due to the sample size being smaller than N = 250 [96].
However, whereas results confirmed hypotheses 1 and 2, hypothesis 3 must partly be rejected since the predictor variable PK was positively related to diagnostic activities as well (see Figure 2).
Discussion
This study aimed to contribute to teacher education by investigating the relationship between three components of diagnostic competences that have been defined as professional knowledge (PCK, CK, PK), diagnostic activities, and diagnostic accuracy [4]. Following the understanding of competence as described in the competence as a continuum model, we assumed professional knowledge as part of teachers' cognitive dispositions related to teachers' application of diagnostic activities and their diagnostic accuracy [1,6]. However, since we chose a whole-class diagnostic focus on biology instruction in our study, not all knowledge facets were assumed to be equally related to diagnostic activities and diagnostic accuracy. Therefore, PCK, CK, and PK of pre-service biology teachers (PST) were measured with adapted versions of objective, reliable, and valid paper-pencil tests [76,80]. Diagnostic activities as an operationalization of situation-specific skills were measured with DiKoBi Assess. This video-based assessment tool provided subject-specific challenges that a biology teacher has to deal with in real-life instruction. The challenges focused on empirically proven features of instructional quality in the science classroom. As part of their diagnostic competences, pre-service teachers need to know about effective instructional strategies, but they also need to be able to diagnose instructional situations and offer effective instructional alternatives. Being able to apply skills such as diagnostic activities was found a significant predictor of student learning [57,69]. 
With regard to the aspects of professional vision (description, explanation, prediction), and with regard to the situation-specific skills perception, interpretation, and decision-making that are depicted in the competence as a continuum model, tasks in the video-based assessment tool prompted the use of the diagnostic activities evidence generation, evidence evaluation, and drawing conclusions as relevant situation-specific skills in the context of video analysis and diagnosis of classroom instruction [2,42]. Following this, we want to discuss the results of our path analysis.
Results showed that PSTs' PCK was positively related to the application of diagnostic activities (hypothesis 1), thus further supporting the tendency in existing results on the relation between PCK and situation-specific skills in domain-specific situations [22,38,57]. Furthermore, our results also highlight the importance of PCK for diagnostic accuracy (defined as precise perception and description of incidents relevant in biology instruction). Similar findings have been reported by Hoth et al. [21], who already indicated a connection between high subject-specific knowledge of teachers and their ability to reason about student errors more accurately, "while teachers with low knowledge focus on aspects that are not directly connected to the student's learning" [21] (p. 1). However, it must be noted that both PCK and CK were included in Hoth et al.'s knowledge conception. Therefore, our results explicitly indicate the relationship between PCK and diagnostic accuracy, not only for student errors but also for other dimensions of subject-specific instructional quality. Since many incidents occur simultaneously or in very short succession in biology classrooms, being able to focus on relevant incidents is crucial in terms of a biology teachers' effective instructional behavior [13]. Pre-service teachers' diagnostic accuracy might therefore also be considered important for implementing instructional quality in real-life teaching that should be studied in future research.
The results also showed that for the application of diagnostic activities and in terms of diagnostic accuracy, CK is not critical, but CK is moderately connected to PCK (Hypothesis 2). This result also confirms previous research findings highlighting the role of CK in defining the scope of the development of PCK [30,31,34].
Contrary to our hypothesis 3, the knowledge facet PK was positively related to the application of diagnostic activities. To understand this relation, we want to take a closer look at the utilized items (see Table 2; Table 4). Items of the PCK test referred to three PCK dimensions, which were use of models, use of experiments, and student errors [78]. The three PCK dimensions were covered in the video-based assessment tool as well. However, the video-based assessment tool also covered additional dimensions of PCK, which were level of students' cognitive activities and creation of situational interest, use of technical language, and conceptual instruction. An extension of the PCK paper-pencil test would be useful to cover the same PCK dimensions in both measurement instruments. Subsequently, it would have to be verified whether the present relationships remain the same. On the other hand, two items of the PK test showed similarities to PCK-dimensions utilized in the video-based assessment tool. For example, item PK-11 referred to the role of activating students' prior knowledge for instruction. Activating students' prior knowledge can be considered important in terms of cognitive activation, which is one of the three basic dimensions of instructional quality that have been described for effective instruction relevant in different domains [33,83]. At the same time, the implementation of cognitive activation has to be concretized from a subject-and content-related perspective [11,90]. Such content-related concretization was done in the corresponding classroom situations of the video-based assessment tool that referred to the level of students' cognitive activities and conceptual instruction. That is why those two dimensions were assigned to PCK. In the same vein, there is an overlap between PK-8 (constructive handling of errors/analysis of student errors) and the PCK dimension dealing with (specific) student ideas and errors. 
Again, this is a content-related concretization with regard to very specific student ideas and errors on the topic skin. Thus, whereas in PK-8 the general principle of dealing with errors was focused, in the items PCK-1a, PCK-1b, and PCK-1c it was about the application of the principle in a subject-specific context. Therefore, there is some overlap in the operationalization of PK and PCK at this point that may be the reason for the unexpected relationship between PK and diagnostic activities.
However, the results might also be interpreted in the sense that both PK and PCK are positively related to the application of diagnostic activities. Considering the situated diagnostic context that we used to assess the diagnostic activities, the multidimensional nature of the teachers' classroom performance might have activated both scientific and pedagogical concepts as suggested by Depaepe et al. [20]. We believe PK is mainly correlated with evidence generation, whereas for evidence evaluation PCK should be decisive (cf. [25]). Therefore, coding the diagnostic activities as a total person ability measure, as done in this study, may have influenced the result as well. A next interesting step would be to investigate the relation between teachers' knowledge facets and the different diagnostic activities as it can be assumed that subject-specific knowledge is more important for evidence evaluation than for the other diagnostic activities (cf. [25]).
A well-developed PK including knowledge of general principles of teaching and learning may thus be seen as a precondition for being able to apply knowledge of content-related basic dimensions to the diagnosis of subject-specific situations (cf. [11,30]). Here, further research can follow that examines the use of knowledge facets in relation to different dimensions of instructional quality in which diagnostic activities are applied. Content-related dimensions such as cognitive activation could be related to both PCK and PK, while subject-specific dimensions such as use of models or use of experiments could correlate more strongly with PCK (cf. [11]). Since the video-based assessment tool DiKoBi Assess provided diagnostic situations in the biology classroom, the higher effect sizes for PCK (see Figure 2) and the moderate correlations between PCK and the other components of diagnostic competences indicated the importance of subject-specific knowledge for effective diagnosing in the science classroom.
As is the case in all studies, there are of course some limitations to our study. First, DiKoBi Assess was used to measure the diagnostic competences of a sample of PSTs within a situated context. Since the videos were scripted and presented specific challenges in a very condensed way, there might be a gap between the classroom situations in the videos and real-life teaching, even when teachers perceived the classroom situations as authentic [54]. By using short videos of classroom situations, the complexity of diagnostic situations can be reduced. In our case, this was done by focusing on specific challenges of instruction within one video, so that the complexity of each diagnostic situation was reduced by "breaking down practice into its constituent parts for the purposes of teaching and learning" [67] (p. 2058). This procedure may be beneficial for inexperienced PSTs' learning [40], but it must be kept in mind that instruction within the real classroom may involve solving more complex challenges.
Furthermore, the described relationships apply to the selected diagnostic activities that were relevant for diagnosing the classroom situations embedded in the video-based tool. Prompting other activities might change the results. In future studies, we also want to include affective-motivational variables such as teachers' beliefs, motivation, and self-related cognitions that may impact teachers' diagnostic activities as well [7,38,94].
Another potential limiting factor to our study could be the person reliabilities of our knowledge tests. Many factors can impact Rasch person reliability. Often such reliability values are impacted by the targeting of a test. If a test is too difficult, or too easy for a sample, the person reliability can be impacted. Such off-targeting of a test to a sample's ability level is not uncommon in studies. A common rule of thumb is that there should be less than a 1 logit difference between the average person measure on a test and the average item measure [97]. For our instruments, only the value of the PCK-test violated this rule of thumb. The difference between the average person measure and the average item measure was −1.22, meaning that the test was too hard for the respondents. Since the PSTs of this study were at the beginning of their studies, higher reliabilities might be observed if the test instruments were administered later in the curriculum of the PSTs. Linacre [86] has also suggested that an instrument's reliability might be increased if longer versions of a test are utilized, or if a sample with greater variance in ability took an instrument. Since our sample's ability range was rather narrow and PSTs with extremely high or low abilities were not part of the sample, the measure of person reliability decreased.
Furthermore, we did not investigate what pre-service teachers are able to implement in real-life performance. Thus, being able to diagnose instructional features of biology instruction does not necessarily mean being able to implement those features in real-life instruction. The linking of diagnostic competences and practical implementation as well as the investigation of the effects of diagnostic accuracy on instructional quality can follow such considerations (cf. [14]).
Implications and Further Research
The results serve as a basis for further, more complex investigations of science teachers' diagnostic competences within the COSIMA project (Facilitating diagnostic competences in simulation-based learning environments in the university context) and can be used to design ways of fostering pre-service teachers' diagnostic competences systematically in the future. Following considerations by Meschede et al. [38], we have investigated diagnosis with regard to further features of instructional quality in the field of biology instruction and thus linked dimensions of subject-specific instructional quality with diagnostic competences. It is now important to investigate in more detail which diagnostic activities are related to which knowledge facet and what differences might exist with respect to different dimensions of instructional quality.
The results of the present study support the importance of PCK for biology teacher education, particularly for pre-service biology teachers' diagnostic activities and diagnostic accuracy. In this regard, video-based tools provide the opportunity to apply knowledge and diagnostic activities to classroom situations, which are particularly relevant for biology instruction. Thus, these tools can be used not only to measure skills (cf. [6,11,38]) but also as an effective way for supporting knowledge acquisition as well as for providing varying opportunities in which diagnostic activities can be applied and trained (cf. [42]). Therefore, besides utilizing DiKoBi as an assessment tool, its potential as a learning tool (DiKoBi Learn) will be investigated in future studies. Taking the results of the present study into account, DiKoBi Learn has then to provide additional instructional information concerning the subject-specific dimensions of instructional quality that are addressed in the videos used.
At the same time, the present results also point to the relevance of a well-founded pedagogical knowledge base. The findings could serve as an incentive to focus more on general dimensions of instructional quality and activities such as the general description of instruction in the pedagogical training of teachers, while subject-specific features are then given priority in subject-specific courses on the basis of practical examples (such as those provided in the video-based tool DiKoBi). Accordingly, in subject-specific courses, the interpretation and evaluation of subject-specific instruction should gain more weight.

Data Availability Statement: Information and queries on the data used can be obtained from the authors of this article.
Conflicts of Interest:
The authors declare no conflict of interest.
Coding Procedure for Diagnostic Activities
The coding procedure for the individual tasks was as follows: Task Describe, DA = evidence generation: (1) For each observed and described challenge, we assessed whether it addressed subject-specific pedagogical content (PCK) or merely pedagogical content (PK). Aspects referring to the subject matter (CK) have not been mentioned. For the assignment to PCK or PK, the descriptions were compared with the itemization (indicators) in the coding manual. Consequently, the descriptions could be assigned either to the scripted, evidence-based PCK challenges which we had recorded and embedded in the video clips [53], or to further PCK-aspects or PK-aspects. (2) We evaluated how well each PST executed the diagnostic activity evidence generation to assess the descriptions' quality. The quality of the statements was assessed on three levels (see Table A1).
Task Explain, DA = evidence evaluation: (1) Depending on the knowledge facet to which the description was assigned, we evaluated (2) whether the statement was per se an accurate explanation and how well the given explanation related to subject-specific pedagogical theories. The quality of the statements was assessed on four levels (see Table A1).
Task Alternative Strategy, DA = drawing conclusions: (1) For each described strategy, affiliation to PCK or PK was assessed first by comparing the statements with the itemization in the coding manual. Strategies addressing PCK were further assessed on whether they covered aspects of the scripted PCK-challenges. (2) For quality assessment, we evaluated how well the alternative teaching strategy was set up. The quality of the statements was assessed on three levels (see Table A1).
The code "0" was assigned for not-accurate statements. This was the case, for example, when incorrect observations had been made that were not visible in the videos (Task Describe), when false or incomprehensible explanations were given (Task Explain), or when the described alternative teaching strategy simply did not represent an (appropriate) alternative strategy (Task Alternative Strategy).

Excerpts from Table A1 (quality levels and sample statements for Task Explain):

1 Empty phrase: The statement is more of an everyday phrase than an explanation, partly meaningless. Sample statement: "It doesn't really make sense to say we repeat the last lesson and then stop after two aspects."

2 Simple reference to concepts/theories: Appropriate to or based on the corresponding description, the subject-specific pedagogical theory is named as a keyword or embedded as a phrase in a sentence. Sample statement: "The teachers' questions do not allow for cognitive activation."

3 Comprehensive explanation: Observation and theory are related to each other. Sample statement: "Calling one student is not enough. The teacher neither asked for explanations nor did she engage the students to recognize or call on conceptual connections. The activation of prior knowledge could be extended to activate the students more deeply."

Sample description (Task Describe): "The students should briefly repeat what was discussed last week, but only superficial, general terms are discussed. Presence of the teacher is minimized, less activating and motivating; room use is almost not given."

Sample alternative strategies (Task Alternative Strategy): promoting the motivation of the students by using different skin types (elephant skin, crocodile skin), activating prior knowledge (e.g., describing similarities/differences of the different skin types), asking for students' prior experiences (e.g., experiments on the sense of feeling/touch).
"year": 2021,
"sha1": "6988852de65ab88015fd051c733c24a08a4dfa69",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7102/11/3/89/pdf?version=1615972573",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ddb7366089aa8a2db588cfc6ef4fa7f7e1fa68e4",
"s2fieldsofstudy": [
"Biology",
"Education"
],
"extfieldsofstudy": []
} |
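The two-step coding described above (first facet assignment, then a quality level) can be sketched as a small data structure. The Explain-task level labels below mirror the text, but the class itself is a hypothetical illustration, not the authors' actual coding instrument:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-step coding record: step 1 assigns the
# knowledge facet (PCK vs. PK), step 2 a quality level from the task rubric.
FACETS = {"PCK", "PK"}

EXPLAIN_LEVELS = {
    0: "not accurate",
    1: "empty phrase",
    2: "simple reference to concepts/theories",
    3: "comprehensive explanation",
}

@dataclass
class CodedStatement:
    task: str   # "Describe", "Explain", or "Alternative Strategy"
    facet: str  # "PCK" or "PK"
    level: int  # quality level from the task's rubric

    def __post_init__(self):
        if self.facet not in FACETS:
            raise ValueError(f"unknown facet: {self.facet}")
        # Task Explain is coded on four levels (0-3); Describe and
        # Alternative Strategy use three levels.
        if self.task == "Explain" and self.level not in EXPLAIN_LEVELS:
            raise ValueError(f"Explain is coded 0-3, got {self.level}")

s = CodedStatement(task="Explain", facet="PCK", level=2)
print(EXPLAIN_LEVELS[s.level])  # -> simple reference to concepts/theories
```

Validating the facet and level at construction time keeps invalid codes out of downstream agreement statistics.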
Analysis and selection of experimental models of osteoarthritis
© O. A. Grygorieva, O. V. Monina, E. R. Skakovsky, 2014. UDC 616-097:616-091.8. Bone and joint diseases are one of the most common causes of severe long-term pain and prolonged disability (Chen, 2012). In the United States, OA affects 13.9–33.6% of adults, or approximately 27 million Americans of all ages (Lawrence, Felson, Helmick et al., 2008; Gregory et al., 2012). Pain, joint stiffness and reduced motion lead to disability and loss of independence in patients with OA. The global cost of osteoarthritis is increasing. In view of this situation, the World Health Organization proclaimed the Bone and Joint Decade (2000–2010) in order to advance understanding and treatment of joint diseases by means of prevention, education and research. Osteoarthritis (OA) is one of the most common forms of joint disease. It is usually considered a part of the aging process (Bondeson et al., 2010). It is a group of chronic, painful, disabling conditions affecting synovial joints. It is possible to define primary OA, which develops without any predisposing factors, and secondary OA, when the patient has suffered a traumatic injury (Sakkas and Platsoucas, 2007). Secondary OA is a mechanically induced disorder in which the consequences of abnormal joint mechanics provoke biological effects that are mediated biochemically. OA usually develops when the mechanical stresses on the normal joint are excessive, or in the case of metabolic abnormalities of the joint (Brandt K. D., Radin E. L., Dieppe P. A. & van de Putte L., 2006; Sakkas and Platsoucas, 2007).
Aim. The aim of this work is to examine the relevant literature on different models of experimental osteoarthritis and to choose the most acceptable one for preclinical investigation.
Methods and results. Osteoarthritis (OA) is one of the most common forms of joint disease. OA is among the top five causes of disability. Notwithstanding the great number of revealed predisposing factors and the involvement of mechanical distress, the exact pathogenesis of OA is still a subject of investigation. In particular, the earliest changes are largely unknown, because they appear long before clinical manifestation, and that is why they cannot be studied in humans. A great number of OA studies are devoted to methods of early detection and to developing treatment strategies. Animal experimental models help in investigating mechanisms of OA development and in finding morphological and physiological criteria for strict diagnosis of OA.
Conclusion. It was established that small animal experimental models of OA are more suitable for preclinical investigations of OA pathogenesis, including its genetic and molecular mechanisms, whereas large animal experimental models of OA make it possible to investigate biomechanical changes of the joint and to perform intraarticular interventions. Methods necessary for thorough investigation of the joint are also described in the article.
Modern View on OA Etiology and Pathogenesis
OA is a systemic, multi-etiological musculoskeletal disease (Sakkas and Platsoucas, 2007). Notwithstanding the great number of revealed predisposing factors and the involvement of mechanical distress, the exact pathogenesis of OA is still a subject of investigation. In particular, the earliest changes are largely unknown, because they appear long before clinical manifestation, and that is why they cannot be studied in humans.
In the case of OA, the whole joint is impaired; its lesion is associated with irreversible articular cartilage loss, concomitant sclerotic changes in the subchondral bone, destruction of the articular capsule, synovitis, and the appearance of osteophytes. Articular cartilage has been the focus of research into OA for decades; it is affected predominantly in OA. This affection is mainly connected with gradual loss of extracellular matrix, composed mainly of aggrecan and type II collagen. Different types of collagen are distinguished in the extracellular matrix of articular cartilage. Type II collagen forms about 90% of the fibers in mature articular cartilage. The proportion of different types of collagen changes with age and under different pathological conditions, and needs further examination. Very little, however, is known about the mechanisms and role of the extracellular matrix and its interactions with cell receptors in the regulation of mineralization events of chondrocytes. Binding of type II collagen and/or type X collagen to annexin V stimulates its Ca2+ channel activity, leading to an influx of extracellular Ca2+ ions into chondrocytes. Annexin V is expressed in human osteoarthritic cartilage, but not in healthy human articular cartilage. Type X collagen expression was also detected in human osteoarthritic cartilage, suggesting that hypertrophic and terminal differentiation events occur in articular cartilage during osteoarthritis (Kim and Kirsch, 2008).
The tidemark, which consists of proteoglycans, provides elasticity of the mature articular cartilage (Goldring M.B., Marcu K.B.). Its thickness and contents change in OA, which disrupts the biochemical and biomechanical barrier between the subchondral bone and articular cartilage and is therefore followed by cartilage destruction. Vascular pathology also plays a leading role in the initiation and progression of OA. Vascular invasion of the calcified cartilage from the subchondral bone is one of the early factors in the progression of the disease (Findlay, 2007). The balance between proinvasive and antiinvasive factors in OA has not been determined yet and needs further investigation.
Subchondral bone is the next key player in OA development. The subchondral bone is not only a structural support for articular cartilage, but is also necessary for biochemical interaction between the articular cartilage and bone. Vascularity of the subchondral bone is altered with increasing severity of OA. Sclerosis of the subchondral bone, its stiffness, and the formation of cysts within it are the morphological changes of the subchondral bone in OA (Mastbergen and Lafeber, 2011).
OA is also characterized by a broad spectrum of changes in the synovial membrane, ranging from small infiltration of the synovial membrane with lymphocytes and macrophages to intense infiltration, thickening of the vascular wall and general edema of the articular capsule. Thus, it is impossible to say that OA is only a degenerative joint disease (Sakkas and Platsoucas, 2007). Synovial inflammation aggravates the symptoms of OA. In fact, OA synovial macrophages exhibit an activated phenotype; they produce vascular endothelial growth factor and proinflammatory cytokines, such as TNF-α, IL-1, IL-10, matrix metalloproteinases, and tissue inhibitors of metalloproteinases (Bondeson et al., 2010). As for lymphocytes, two main populations are distinguished in the synovial membrane: a resident population of γ/δ TCR+ T cells, which fulfill a morphogenic function and control the proportion of different cell types and the range of proliferation; the second population takes part in the T cell immune response, expressing activation antigens and Th1 cytokines (Sakkas and Platsoucas, 2007). The dynamics of lymphocytes, their phenotype, and their role in OA development are worth investigating.
Directions of OA Researches
A great number of OA studies are devoted to methods of early detection and to developing treatment strategies. It is impossible to analyze the development of the human joint in normal conditions and under the influence of different factors during the fetal period of development, because of numerous bioethical constraints, the great number of concomitant pathologies, etc. It is also impossible to analyze the molecular mechanisms of OA development in a patient over time. So, it is necessary to investigate these processes in experimental animal models with further extrapolation of the obtained data. Animal experimental models help in investigating mechanisms of OA development and in finding morphological and physiological criteria for strict diagnosis of OA.
In some animal models, more advanced stages of OA can be studied. It is often considered that experimental animal research is necessary for the treatment of human diseases, but not all animal models are adequate for the clinic. Some methodological problems of animal experiments, such as nuances in laboratory technique that may influence the results, still exist (Mastbergen and Lafeber, 2009).
Results
To study the changes associated with cartilage, subchondral bone and synovial membrane impairment during OA, numerous animal models have been developed and proposed. For this purpose, different species are used: mouse, rat, dog, horse, sheep, goat, guinea pig, etc. It is important to remember that translation of knowledge from animal models to humans should be done with caution (Gregory et al., 2012).
Experimental animal models of osteoarthritis induction include several based on intraarticular injections of different solutions, such as sodium monoiodoacetate, Escherichia coli lipopolysaccharide, etc. These models were developed at the dawn of OA investigation. Later, traumatic-impact and instability-based models were proposed.
Unfortunately, a single gold-standard model for OA is still absent. Each model is characterized by its own unique advantages and disadvantages. It is necessary to take into consideration the age, size, and sex of the animals. Small animals (mice, rats, guinea pigs, rabbits) are suitable because of their easy upkeep, low cost, and amenability to genetic manipulation. So, small animal models are most advantageous for investigation of specific disease mechanisms and preclinical initial screening of therapeutics. They, especially mouse models, are useful in elucidating the genetic (for example, Kniest and Stickler syndromes) and molecular pathogenesis of OA (Gregory et al., 2012). For morphological examination of the joint, these models are also favorable because of the small size of the joint. Consequently, it is possible to make a single histological sample that includes all the components of the joint: articular capsule, articular cartilage, subchondral bone, menisci and intraarticular ligaments. This makes it possible to view changes in the joint as a whole, and it allows examination of the quantitative and qualitative composition of cells and extracellular matrix substances of all joint components. For a long time, mouse models were the pioneers among biomedical animal models. The Brtl mouse model (a knock-in model of human osteogenesis imperfecta) demonstrates that destruction of articular cartilage is secondary to altered architecture of the subchondral bone (Blair-Levy et al., 2008).
Hartley guinea pigs spontaneously develop degenerative changes of articular cartilage that are similar to those established in humans. The guinea pig model of OA helped in describing subchondral sclerosis as a biphasic process, characterized by initial thickening of the subchondral bone, followed by its stiffening (Muraoka et al., 2007). The rabbit instability model of OA, based on transection of the anterior cruciate ligament, is one of the most widely used experimental models of OA. It is of particular clinical relevance to posttraumatic OA in humans (Tiraloche, Girard, Couinard, Sampalis, Moquin, Ionescu, Reiner, Poole & Laverty, 2005).
Large animal models of OA induction are most advantageous in their biomechanical and anatomical similarity to humans, the ability to use routine diagnostic imaging, and the capabilities for arthroscopic interventions and postoperative management. These animals (dogs, pigs, goats, horses, etc.) can undergo routine MRI in vivo to analyze cartilage volume, bone marrow lesions, synovitis, and lesions of the ligaments and menisci (Roemer, Crema, Trattnig & Guermazi, 2011). Conversely, mice, rats and guinea pigs are too small for MRI (Blair-Levy et al., 2008). In horses, spontaneous OA is quite a common problem; the most frequently affected joint is the metacarpophalangeal joint. In sheep and goats, as well as in horses, it is possible to analyze many detailed outcome measures because of the size of the joints (Mastbergen and Lafeber, 2009).
It is necessary to remember that all animal research must be described in an Animal Care and Use Committee Protocol Form, which must be approved by the institutional Animal Care and Use Committee prior to any animal work being performed. To clarify OA pathogenesis, it is necessary to examine metabolism of articular cartilage, articular capsule and subchondral bone in OA development.
Methods necessary for thorough investigation of the joint

For investigation of the joint as a whole, micro-magnetic resonance imaging and micro-computed tomography are necessary.
For the articular cartilage: 1) macroscopic cartilage assessment: examination of the articular cartilage for gross morphologic changes with application of India ink; 2) histologic assessment of cartilage: tissue blocks fixed in 10% neutral buffered formalin, decalcified, dehydrated through graded alcohol and embedded in paraffin; 5–6 micrometer sections are stained with hematoxylin-eosin and Safranin O–Fast Green for general review of the cartilage, for estimation of chondrocyte density, for analysis of the nucleo-cytoplasmic ratio, and for assessment of chondrocyte necrosis, fibrillation of the matrix, fissures or fractures of cartilage, cartilage thinning, chondrocyte clusters and clones, and cartilage erosion; 3) immunohistochemical analysis of type I, type II, type IX and type X collagen with relevant positive and negative controls; 4) immunohistochemical analysis of VEGF-α for examination of vascular invasion; 5) immunohistochemical analysis of MMPs for investigation of the level of cartilage degradation; 6) staining of sections with alcian blue at different concentrations of MgCl2 for analysis of cartilage glycosaminoglycans. For the subchondral bone: 1) bone mineralization analysis; 2) analysis of subchondral bone thickness on histological sections using individual point-to-point distance measures; 3) analysis of bone connectivity; 4) analysis of subchondral bone structure: trabecular bone volume/total volume, trabecular thickness, trabecular number, trabecular separation, subchondral osteonecrosis, osteosclerosis, subchondral bone erosions, and subchondral cyst formation.
For the articular capsule and synovial layer: 1) staining of sections with hematoxylin-eosin, resorcin–new fuchsin, and Mallory's solution, and impregnation of sections with silver carbonate; 2) immunohistochemical analysis of type I and type III collagen with relevant positive and negative controls; 3) immunohistochemical analysis of α/β and γ/δ T cells, CD3, CD4, CD8, CD25, CD45RO, CD69; 4) immunohistochemical analysis of CD68+ macrophages; 5) immunohistochemical analysis of CD34 in order to reveal endothelial reaction; 6) analysis of articular capsule and synovial layer structure: thickened tissue, villous proliferation, synovial cell hyperplasia, edema, metaplastic changes, etc. Statistical analysis has to be the last step of the investigation.
Conclusion
Osteoarthritis (OA) is one of the most common forms of joint disease. OA is among the top five causes of disability. Pain, joint stiffness and reduced motion lead to disability and loss of independence in patients with OA. Notwithstanding the great number of revealed predisposing factors and the involvement of mechanical distress, the exact pathogenesis of OA is still a subject of investigation. In particular, the earliest changes are largely unknown, because they appear long before clinical manifestation, and that is why they cannot be studied in humans. A great number of OA studies are devoted to methods of early detection and to developing treatment strategies. Animal experimental models help in investigating mechanisms of OA development and in finding morphological and physiological criteria for strict diagnosis of OA. In some animal models, more advanced stages of OA can be studied. Small animal experimental models of OA are more suitable for preclinical investigations of OA pathogenesis, including its genetic and molecular mechanisms, whereas large animal experimental models of OA make it possible to investigate biomechanical changes of the joint and to perform intraarticular interventions. Profound preclinical investigation of OA pathogenesis and treatment is of great importance because of its influence on understanding and preventing OA development.
Thermal conductivity coefficient of UO2 of theoretical density and regular stoichiometry
The main methodological results of research on the thermal conductivity of uranium dioxide (UO2), the main fuel of modern nuclear power reactors, are considered. An assessment of the efficiency of the proposed approximations, compared with the Fink-Ronchi formula at theoretical UO2 density, is proposed. Following this methodology, we analyze the experimental dependences of the thermal conductivity obtained by Russian authors.
Introduction
Nowadays uranium dioxide is the main fuel of nuclear power reactors owing to its chemical stability and radiation resistance. The first accessible reference data on the thermal and physical characteristics of UO2 were presented in [1] at the end of the 1960s.
Operating experience and further research on oxide fuel in 1968-1998 showed a significant dependence of the thermal conductivity coefficient λ(T) both on fuel production technology and on in-reactor factors: fission products, porosity changes and the stoichiometric composition of the fuel [2-4].
In our opinion, the important methodological features of this stage of research on the thermal and physical properties of UO2 are:
- the introduction of the concept of the theoretical density of UO2;
- a detailed understanding of the heat-transfer mechanism, leading to thermal conductivity models for crystal samples in which λ(T) is the sum of the phonon (lattice) conductivity λ_ph, the photon (radiation) conductivity λ_r, and the electronic conductivity, including the ambipolar component, λ_e;
- the first recommended formula, the Harding-Martin formula [5], for the thermal conductivity of UO2 of theoretical density in the temperature range 773-3120 K, which included the two main components of the heat-transfer mechanism (Fig. 1);
- the use of the integral of thermal conductivity for calculating temperature changes in a fuel pellet.

On the whole, this stage of research revealed an insufficiency of experimental data on the thermal conductivity of UO2 at high temperatures, caused by the numerous influencing factors and the difficulty of the experiments. The graph of the dependence of the thermal conductivity coefficient of UO2 on temperature (Fig. 1) shows the ranges of temperature changes of fuel pellets in cassettes of medium and maximum reactor power for VVER-1000 [6], which underline the role of each component of the heat-transfer mechanism. Thereafter, in 1999-2008, as a result of generalizing more extensive experimental data, Ronchi [7] offered a new dependence for the ambipolar component. Later, Fink generalized other researchers' results and found a modified expression for the phonon component. Using Ronchi's expression for the ambipolar component, he obtained a design formula for the thermal conductivity coefficient of UO2 [8,9], whose inaccuracy is within 10% up to 2000 K and within 20% above it. The coefficient 1.158 was introduced into the Fink-Ronchi formula below to rescale the thermal conductivity of UO2 samples from 95% to 100% of theoretical density [
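As an illustration of the numbers quoted above, the sketch below evaluates Fink's correlation for UO2 at 95% theoretical density, applies the 1.158 rescaling to 100% theoretical density stated in the text, and computes the thermal conductivity integral used for pellet temperature-drop design. The correlation coefficients are taken from Fink's published 2000 recommendation and are an assumption here, since formula (1) itself is not reproduced in this text:

```python
import math

def fink_lambda_95(T):
    """Fink (2000) thermal conductivity of UO2 at 95% theoretical density,
    W/(m*K); t = T/1000, valid roughly for 298 K <= T <= 3120 K.
    Coefficients assumed from the published correlation."""
    t = T / 1000.0
    phonon = 100.0 / (7.5408 + 17.692 * t + 3.6142 * t**2)
    ambipolar = (6400.0 / t**2.5) * math.exp(-16.35 / t)
    return phonon + ambipolar

def lambda_theoretical_density(T, factor=1.158):
    """Rescale from 95% to 100% theoretical density using the factor 1.158
    quoted in the text."""
    return factor * fink_lambda_95(T)

def integral_conductivity(T1, T2, n=1000):
    """Trapezoid-rule integral of lambda_0 dT, the 'integral thermal
    conductivity' used for pellet temperature-change design."""
    h = (T2 - T1) / n
    s = 0.5 * (lambda_theoretical_density(T1) + lambda_theoretical_density(T2))
    s += sum(lambda_theoretical_density(T1 + i * h) for i in range(1, n))
    return s * h

print(round(fink_lambda_95(1000.0), 2))             # about 3.47 W/(m*K)
print(round(lambda_theoretical_density(2000.0), 2))  # about 2.39 W/(m*K)
```

The ambipolar term is negligible at 1000 K but contributes noticeably above 2000 K, which is the behavior the text attributes to Ronchi's refinement.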
Comparison of approximations for the thermal conductivity of UO2 at theoretical density
Experimental work [11] confirmed the theoretical dependence for the thermal conductivity coefficient of UO2 presented in the reference book [12]. An empirical formula is also given there [12]:
In the reference books and monographs published after 2003, there is a move toward the standard formula (1). Work [15] gives the empirical dependence

λ_norm = 4820/(351 + T) + 2.434×10^-14 T^4, (6)

in which it is easy to detect a significant difference from the Fink-Ronchi formula reduced to the theoretical density of UO2 (1) (see Fig. 3, curve 2), although the initial conditions of its use were not stipulated. Dependence (1) is generalized in work [16], and a design formula based on it is given in [17]; the latter includes the average fuel burnup B, MW·day/kgU (Fig. 3: 2 — formula (6) [15]; 3 — formula (7) [17]).
Thus, the comparison of the experimental dependences of the thermal conductivity coefficient of UO2 obtained by different authors with the Fink-Ronchi formula at theoretical fuel density made it possible:
- to identify significant differences in the contribution of individual thermal conductivity mechanisms as the fuel temperature grows;
- to establish a baseline of uranium dioxide thermal conductivity against which the influence of porosity, nonstoichiometry and burnup can now be clearly distinguished.
The introduction of the theoretical density of UO2 makes it possible to assess the efficiency of the proposed approximations for the thermal conductivity coefficient of UO2 at theoretical density, λ_0. Let us compare the design dependences proposed by Russian scientists in different years with the Fink-Ronchi formula.
Asymptotic freedom using a gluon mass as a regulator
Abstract. Front-Form Hamiltonian dynamics provides a framework in which QCD's vacuum is simple and states are boost invariant. However, canonical expressions are divergent and must be regulated in order to establish well-defined eigenvalue problems. The Renormalization Group Procedure for Effective Particles (RGPEP) provides a systematic way of finding counterterms and obtaining regulated Hamiltonians. Among its achievements is the description of asymptotic freedom, with a running coupling constant defined as the coefficient in front of the three-gluon-vertex operators in the regulated Hamiltonian. However, the obtained results need a deeper understanding, since the coupling exhibits a finite dependence on the regularization functions, at least in the third-order term of the perturbative expansion. Here we present a similar derivation using a different regularization scheme based on massive gluons. The procedure can be extended to incorporate contributions from virtual fermions.
Introduction
Front-Form Hamiltonian dynamics [1,2] is a candidate tool for characterizing bound states in QCD [3,4] and for investigating the relation between the parton and constituent quark models, aiming at results that are invariant under certain boost transformations [5]. However, these long-term goals face important challenges. One of them is the regularization of highly divergent canonical expressions. Another is the introduction of counterterms to describe aspects related to vacuum physics [5]. In this context, the similarity renormalization group developed by Głazek and Wilson [6,7], together with the concept of effective particle introduced by Głazek [8-10], known as RGPEP, provides a systematic procedure to handle these divergences and to find counterterms.
The RGPEP is at a developing stage, and the way to obtain non-perturbative solutions to the renormalization-group equation is still unknown. However, it is possible to use perturbative expansions in powers of the coupling constant instead [9]. The bound-state equation has been considered in heavy-flavor QCD, and numerical results for the spectrum of heavy quarkonia and baryons have been obtained using a simplified sketch [11,12]. Initially, the new version of the method was used to describe the running coupling and, more precisely, the phenomenon of asymptotic freedom. Published works in this direction [13-15] reproduce the asymptotic-freedom result obtained from renormalization-group techniques in Euclidean space [16]. A finite dependence on the regularization functions used to regulate small momentum fractions (small-x) usually remains [14,15]. Such dependence needs further understanding.
A regularization provided by a canonical gluon mass [5] seems more adequate for various reasons¹: first of all, the same regulating function is used to remove both ultraviolet and small-x divergences; furthermore, we use the same type of function as the ones introduced by the RGPEP procedure; and finally, it allows one to include a large range of +-component momenta near zero [20,21].
In the following, we study the impact of introducing such a parameter and its consequences as a regulator. At the end of the procedure, the limit of zero mass is applied, with no need to introduce new fields or interactions to recover gauge invariance. The result is qualitatively the same as the one obtained earlier [15]: a function of the momentum fraction of external particles, h(x_0), appears as a side product of regularization and damps asymptotic freedom for values of x_0 below about 0.13. This article is organized in the following way. In Section 2 we present the basic elements involved in front-form quantization and the notation employed throughout this document. Section 3 introduces the RGPEP method and its application to the QCD Hamiltonian for gluons up to third order; it includes the regularization procedure. Section 4 defines the running coupling as a coefficient in the three-gluon-vertex Hamiltonian term. Finally, Section 5 concludes the article.
Front-Form Hamiltonian dynamics
Relativistic dynamics obeys the Poincaré algebra, a set of commutation relations among the ten fundamental dynamical quantities: the generators of space-time translations and rotations. In his original work [1], Dirac found three ways of satisfying these relations, giving rise to the Instant, Front, and Point Forms of dynamics.
The RGPEP is built on the Front Form of dynamics for reasons we shall not discuss here (see the first sections of [5]). In this form, four-vectors in Minkowski space are written as x^μ = (x^+, x^-, x^⊥), where x^+ = x^0 + x^3, x^- = x^0 - x^3, and x^⊥ = (x^1, x^2). The inner product is x·y = (1/2) x^+ y^- + (1/2) x^- y^+ - x^⊥·y^⊥, so that the minus component of the momentum, p^-, represents the energy of the particle. The dynamics is not entirely specified by Dirac's forms, and the Hamiltonian of interest is usually obtained from the T^{+-} component of the energy-momentum tensor associated with the Lagrangian density under consideration. To describe pure-gluonic QCD we use the Yang-Mills theory of the non-Abelian gauge group SU(3). Details can be found in [2,15]; here we just quote the final expressions, Eqs. (9)-(14) of [15], in which H is the Hamiltonian density and Ω denotes the surface of quantization, in this case the plane defined by x^+ = const. The Hamiltonian of pure-gluonic QCD has four terms; the subscript on each term denotes the number of fields involved: H_{A^2} is the free Hamiltonian, H_{A^3} is the first-order vertex, H_{A^4} is a four-gluon vertex, and H_{[∂AA]^2} appears due to the constraint equation in the gauge A^+ = 0, which sets A^- = (2/∂^+) ∂^⊥ A^⊥ for free fields. The theory is quantized using the canonical expansion of the field A^μ in terms of creation and annihilation operators with commutation relations [a_{kσc}, a†_{k'σ'c'}] = k^+ δ̃(k - k') δ^{σσ'} δ^{cc'}, where σ and c are spin and color indices, respectively, and δ̃(p) = 16π^3 δ(p^+) δ(p^1) δ(p^2). These relations and normal ordering of operators (denoted by :H:) are used to obtain the Hamiltonian in terms of creation and annihilation operators; the argument of the delta function in each term, k† - k, is a shortcut for the difference between the momenta of created particles and the momenta of annihilated particles in that term. Finally, Y_{123} is a polarization function whose concrete expression can be found in Eq.
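The front-form kinematics above can be checked with a short numerical sketch; the product x·y = (1/2)x^+ y^- + (1/2)x^- y^+ - x^⊥·y^⊥ and the on-shell condition p^- = (p_⊥² + m²)/p^+ are the standard conventions assumed here:

```python
# Minimal check that the front-form inner product reproduces the mass shell:
# for p- = (p_perp^2 + m^2)/p+, the square p.p equals m^2.

def dot(a, b):
    # a, b = (plus, minus, p1, p2) components of a four-vector
    return 0.5 * a[0] * b[1] + 0.5 * a[1] * b[0] - a[2] * b[2] - a[3] * b[3]

def on_shell(p_plus, p_perp, m):
    """Build a front-form four-momentum with the minus component fixed
    by the free dispersion relation."""
    p_minus = (p_perp[0]**2 + p_perp[1]**2 + m**2) / p_plus
    return (p_plus, p_minus, p_perp[0], p_perp[1])

p = on_shell(2.0, (0.5, -1.0), 0.3)
print(abs(dot(p, p) - 0.3**2) < 1e-12)  # True: p.p = m^2
```

The same `dot` applied to x^+ = x^0 + x^3 and x^- = x^0 - x^3 recovers the usual Minkowski product, which is why p^- plays the role of the energy conjugate to x^+.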
(B3) of [15]. The parameter ξ is the canonical gluon mass and f_{t_r} is a regularization function, introduced in the next section. The subscript t_r is a cutoff parameter.
Renormalization Group Procedure for Effective Particles
Canonical expressions with regulators, such as Eq. (8), are transformed in order to produce results independent of regularization. RGPEP takes them as initial conditions and generates a family of equivalent Hamiltonians that depend on a parameter t: the new operators a_t create and annihilate effective particles of size s = t^{1/4} and are related to the initial or bare operators by a unitary transformation (cf. Ref. [9]), a_t = U_t a_0 U_t† (9), whose anti-hermitian generator is G_t = [H_f, H_{Pt}]. Here H_f is the free part of the Hamiltonian, which does not evolve with t, and H_{Pt} is the Hamiltonian in which each term is multiplied by half the sum of the total momentum created and annihilated in that term. The generator gives rise to a differential equation with a double commutator of the type introduced by Wegner [22]:

d/dt H_t(a_0) = [[H_f, H_{Pt}(a_0)], H_t(a_0)]. (12)

Note that Eq. (9) forces the functions multiplying operators in Hamiltonian terms to also change with t. In order to distinguish the change of these functions from the change of the operators, we use normal fonts, H_t(a_t), when both are at the scale t, and calligraphic font, H_t(a_0), when the operators are at the bare scale.
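The double-commutator flow of Eq. (12) can be illustrated numerically on a toy matrix. The sketch below uses Wegner's original generator [H_d, H], with H_d the diagonal part of H, rather than the RGPEP generator [H_f, H_Pt]; it shows how this type of flow suppresses off-diagonal couplings while preserving the spectrum. The matrix is an arbitrary illustration, not a QCD Hamiltonian:

```python
import numpy as np

def wegner_flow(H, dt=1e-3, steps=20000):
    """Integrate dH/dt = [[H_d, H], H] with forward Euler steps; H_d = diag(H).
    The flow is isospectral and drives a symmetric matrix toward diagonal form."""
    H = H.copy()
    for _ in range(steps):
        Hd = np.diag(np.diag(H))
        eta = Hd @ H - H @ Hd              # generator: [H_d, H]
        H = H + dt * (eta @ H - H @ eta)   # dH/dt = [eta, H]
    return H

H0 = np.array([[1.0, 0.4, 0.1],
               [0.4, 2.0, 0.3],
               [0.1, 0.3, 3.0]])
Ht = wegner_flow(H0)
print(np.max(np.abs(Ht - np.diag(np.diag(Ht)))))  # off-diagonal terms: tiny
print(np.sort(np.diag(Ht)))                       # approximates eigvalsh(H0)
```

Couplings between states with free-energy difference ΔE decay roughly like exp(-ΔE² t), which is the band-diagonalization in invariant mass that the RGPEP form factors encode.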
Eq. (12) can be solved order by order in a perturbative expansion in powers of the coupling constant g. Keeping only those terms relevant to the derivation of the running coupling, Eqs. (27)-(33) of [15], and in order to alleviate notation and build intuition, we follow [15] and introduce shorthand definitions. These expressions are then introduced in Eq. (12) and give rise to successive equations in powers of g. Counterterms are introduced order by order in the initial Hamiltonian to make physical results independent of regularization.
First-order solution
Let us introduce some important concepts before analyzing the three-gluon vertex and the running coupling. The equation at first power of g has two terms: one corresponding to Y_{21,t}, the other to its Hermitian conjugate Y_{12,t}. The solution Y_{21,t} is represented graphically in Figure 2; it is similar to Eq. (8), with a form factor depending on the invariant masses M_i of the configurations i. In this case, x_{1/3} and x_{2/3} are the longitudinal momentum fractions of particles 1 and 2, respectively, and κ⊥_{12} is the relative transverse momentum of the particles in configuration a. More generally, we call parent momentum P the sum of the momenta created or annihilated through a given interaction; the longitudinal momentum fraction of a particle p involved in such an interaction is then x_p = p^+/P^+, and its relative transverse momentum is κ⊥_p = p^⊥ - x_p P^⊥. Eq. (21) justifies the name "effective particles of size s = t^{1/4}". Namely, form factors like Eq. (22) prevent particles of size s from changing their relative kinetic energy by more than about λ = 1/s through a single interaction. Note that the notion of size is inherent to interactions; the momenta of free particles are not constrained in this formalism, no matter the value of s.
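The kinematical definitions above can be sketched numerically. The relative momenta x_p = p^+/P^+ and κ⊥_p = p^⊥ - x_p P^⊥ follow the text; the Gaussian shape f_t = exp(-t (M_a² - M_b²)²) used for the form factor is an assumption here, standing in for the paper's Eq. (22) whose exact convention is not reproduced:

```python
import math

def x_fraction(p_plus, P_plus):
    # longitudinal momentum fraction of a particle relative to its parent
    return p_plus / P_plus

def kappa_perp(p_perp, x, P_perp):
    # relative transverse momentum: p_perp - x * P_perp
    return tuple(pi - x * Pi for pi, Pi in zip(p_perp, P_perp))

def invariant_mass_sq(particles):
    """Free invariant mass squared M^2 = P+ P- - P_perp^2 of a configuration;
    particles is a list of (p+, (p1, p2), mass)."""
    P_plus = sum(p[0] for p in particles)
    P_perp = [sum(p[1][i] for p in particles) for i in (0, 1)]
    P_minus = sum((p[1][0]**2 + p[1][1]**2 + p[2]**2) / p[0] for p in particles)
    return P_plus * P_minus - (P_perp[0]**2 + P_perp[1]**2)

def form_factor(t, M2a, M2b):
    # assumed Gaussian RGPEP-type form factor in invariant-mass differences
    return math.exp(-t * (M2a - M2b)**2)

# two massless gluons sharing a parent with P+ = 1, P_perp = 0:
pair = [(0.5, (0.3, 0.0), 0.0), (0.5, (-0.3, 0.0), 0.0)]
M2 = invariant_mass_sq(pair)
print(abs(M2 - 0.36) < 1e-9)   # True: M^2 = kappa^2 / (x(1-x)) = 0.09/0.25
print(form_factor(1.0, M2, 0.0) < 1.0)
```

For two massless particles, M² = κ²/(x(1-x)), so the form factor suppresses transitions whose invariant-mass change exceeds the scale 1/s set by t.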
Finally, the canonical expressions are regularized by the introduction of a canonical gluon mass ξ and a regulating function defined through Eq. (22) at t = t_r, where t_r is a small value that acts as a cutoff. Frequently, the notation f_{t+t_r,ab} is used instead of f_{t,ab} f_{t_r,ab}, since it makes clear that for any finite value of t the regularization parameter is "muted" in the limit t_r → 0 [21].
The three-gluon vertex
The three-gluon vertex can be analyzed by considering the third-order solution of the RGPEP equation, which has the structure below, where Y_21,t and K_21,t are the first- and third-order contributions, respectively. Calligraphic letters with tildes are introduced to make the common factors within integrals explicit, and K_21,0 is the third-order counterterm. We focus on terms which can be factorized in the following way: Y_123(x_1, κ_12, σ) is the canonical spin and color structure of the first-order interaction of the initial Hamiltonian, and c_t is the function obtained from the RGPEP procedure that multiplies the operator structure defining the three-gluon vertex, i.e., a†a†a + h.c. It can be written as the sum of diagrams a to i of Figure 1, denoted by γ^(a), γ^(b), ..., γ^(i). Each of these functions involves a three-dimensional loop integral characterized by the front-form momentum fractions x and relative transverse momenta κ⊥ of the internal virtual particles, and would diverge in the limits κ → ∞ and x → 0, 1 in the absence of form factors and regulators.
Regularization
RGPEP form factors suppress interactions when the differences of invariant masses between the initial and final states of a given interaction are large compared with the scale set by the effective size parameter s = t^{1/4}, and thus indirectly prevent the appearance of large-κ divergences. However, the regularization is incomplete: at t = 0 the effective expressions must reduce to the ones of the initial theory, which leads to differences such as f_t − 1, with f_t a form factor and f_0 = 1. Contributions coming from the −1 factors are not regularized and give rise to loop divergences. To avoid such divergences we introduce functions f_{t_r} in the initial Hamiltonian H_0.
Counterterms are necessary to avoid dependence on the regularization parameter t_r in physical results. To find them we notice that the effective Hamiltonians H_t become independent of t in the ultraviolet limit κ → ∞, where only form factors with vanishingly small t_r remain. Thus the difference between two scales, H_t − H_{t_0}, is ultraviolet finite regardless of the values of t and t_0. The ultraviolet-divergent part of the counterterm can then be taken to be that of −H_{t_0}, and its finite (in the limit κ → ∞) part should be fixed by experimental considerations; for more details see [13,14].
We have now justified the following equation for the third-order counterterm, where the function c_{t_0} is the same one introduced in Eq. (29) with t changed to t_0, and c_0 is a finite and in principle unknown contribution, necessary because in general the finite part of the counterterm is not equal to the finite part of c_{t_0}.
The situation for small-x divergences (x → 0, 1) is somewhat different: in the massless case, invariant masses remain finite if, in addition, κ → 0, which prevents the form factors from regulating these x divergences. Several strategies are then possible: in [14,15] one introduces different regularization functions and considers the impact of their choice on the running coupling. Here, in contrast, a gluon mass ξ and initial functions f_{t_r} are used. With a gluon mass, invariant masses diverge when momentum fractions x approach their limiting values for any κ, so the form factors also cure these divergences. It is still necessary to consider a parameter t_r different from zero, but we do not need extra regularization functions whose explicit forms are in principle arbitrary. At the end of the procedure we take the limit ξ → 0 to recover massless QCD gluons.
Running coupling
We use the definition of the running coupling introduced in [14,15]: the running coupling is defined as the coefficient in front of the canonical color-, spin- and momentum-dependent factor Y_123(x_1, κ_12, σ) in the limit κ_12 → 0 for some value of x_1 denoted x_0. Therefore, we first factorize the function Y_123(x_1, κ_12, σ) in Eq. (27). By definition, setting the value of the running coupling to be g_0 at the scale t_0, one obtains an expression in which n runs from a to i. Eq. (33) can then be written in terms of the difference of the γs at the scales t and t_0. Explicit expressions for the γs can be obtained from Appendix C of [15], changing the RGPEP factors B_t as described in Appendix A. These expressions involve integrals over the momentum fractions x and relative transverse momenta κ of the internal virtual particles; they are evaluated as explained in Appendix B. Finally, the relevant results are obtained after applying the limits ξ → 0 and t_r → 0.
EPJ Web of Conferences 274, 02006 (2022), https://doi.org/10.1051/epjconf/202227402006 — XVth Quark Confinement and the Hadron Spectrum.
Term a
The triangle term a is obtained from the product of three first-order vertices Y_t. Introducing the (barred) dimensionless variables defined in Eq. (48), we can express it as shown, with x_2 = 1 − x_1 and γ_E the Euler–Mascheroni constant.
Term b
Term b is obtained from the product of the first-order vertex Y_t and a second-order term; this contribution exactly cancels the term proportional to t − t_0 in Eq. (36).
Terms d and f
Term d is obtained from the product of the second-order self-energy term μ_t and the first-order vertex Y_t, while term f comes from the second-order counterterm and the first-order vertex Y_t. Their sum gives the following result.
Terms g and i
Terms g and i are obtained in a similar way to terms d and f:

h_{g+i}(x_1) = log( e^{γ_E} ξ^4 √(t t_0) ) + log 2 + 1 .  (42)
Terms c, e and h
Term c is obtained from the product of the first-order vertex Y_t and the second-order interaction Ξ_t. The result turns out to be negligible in the limits ξ → 0 and t_r → 0. Terms e and h are also derived from the same vertices; they do not contribute to the running coupling, since they contain no terms linear in κ_12 that could give rise to the canonical polarization structure Y_123 of Eq. (31).
Results and conclusions
Eqs. (37), (40) and (42) give the final expression for the running coupling constant. Eq. (43) is represented in Figure 3 for values of x_1 = x_0 ranging from 0.5 down to 0.1. The result exhibits asymptotic freedom for x_0 down to about 0.13, and it coincides with the analysis of Feynman diagrams in pure gluonic QCD [16] if the factor λ is interpreted as the scale of the renormalization-group equations in Euclidean space and if h(x_0) = 0 (cf. [14,15]).
In Figure 4 the contributions from the different terms are shown separately. The self-energy contributions, corresponding to d + f and g + i, increase as the energy scale diminishes and thus drive asymptotic freedom. In contrast, a decreases with the energy scale, so the loss of asymptotic freedom at low values of x_0 is entirely due to the triangle term a.
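Since Eq. (43) itself is not reproduced here, the qualitative statement — the coupling decreases as the scale λ grows — can be illustrated with the familiar one-loop pure-gauge running, 1/g²(λ) = 1/g²(λ_ref) + (b_0/8π²) ln(λ/λ_ref) with b_0 = 11N_c/3. This is a hedged stand-in for the full front-form result, which in addition carries the x_0-dependent function h(x_0):

```python
import math

def g2_one_loop(lam, g2_ref=1.0, lam_ref=1.0, n_c=3):
    """One-loop pure-gauge running: 1/g^2(lam) = 1/g^2(lam_ref) + b0/(8 pi^2) ln(lam/lam_ref)."""
    b0 = 11.0 * n_c / 3.0
    inv = 1.0 / g2_ref + b0 / (8.0 * math.pi ** 2) * math.log(lam / lam_ref)
    return 1.0 / inv

# asymptotic freedom: the coupling decreases monotonically as the scale grows
print(g2_one_loop(10.0), g2_one_loop(1000.0))
```

For h(x_0) ≠ 0 the front-form result deviates from this curve, which is what Figure 3 displays.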
There is no dependence on the mass parameter ξ in the final result in the limit ξ → 0, even though separate contributions diverge in this limit. Thus a mass term for gluons seems to provide an adequate regularization of the small-x divergences, producing a function h(x_1) that controls the strength of the running of the coupling constant for different values of the external longitudinal momentum fraction, with the same qualitative behaviour as obtained in [14,15]. Finally, as noted in Appendix B, the methods developed here can also be used to evaluate fermion integrals when the particles' masses are small compared with the scales set by t and t_0.
A Introduction of a mass term for gluons
As described in Subsection 3.2, each γ^(n) consists of a three-dimensional integral over the momentum fractions x and relative transverse momenta κ of the internal virtual particles. The RGPEP factors in the integrands depend on the order in the perturbation expansion, on how these particles are connected, and on polarization functions that encode the spin and color of the internal degrees of freedom. In the case of massless gluons, explicit expressions for these factors are found in Appendix C of [15]. The addition of a gluon mass alters these equations, changing the invariant masses that appear there, with ij = {68, 78, 12}. Note that the modified masses may be regarded as invariant masses M_χ with x-dependent "masses" χ(x). The numbers denote the variables of the particles in the various interactions of the third-order diagrams of Figure 1.
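The modified invariant masses themselves are not recoverable from this extract. For orientation, the free front-form invariant mass of a multi-gluon configuration with a gluon mass ξ reads (a hedged reconstruction; the two-body relative form on the right is what produces the x-dependent χ(x) mentioned above):

```latex
\mathcal{M}^{2} \;=\; \sum_i \frac{\kappa_{\perp i}^{2}+\xi^{2}}{x_i} ,
\qquad
\mathcal{M}_{\chi}^{2} \;=\; \frac{\kappa_{\perp}^{2}+\chi(x)}{x\,(1-x)} ,
```

so mass terms carried by spectators reappear as x-dependent effective mass functions χ(x) once the integrals are written in the relative coordinates of an interacting pair.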
B Integration method
To evaluate the expressions that solve the RGPEP equation we use dimensionless variables, where t_N is an arbitrary scale. The integrals over the momentum fractions x are then divided into three intervals, called region I, region II and region III; G(x, κ⊥, t, t_0; t_r, ξ) is usually a function of invariant masses, form factors and polarization vectors, and it is simplified as follows: • In region II the momentum fraction x is bounded, ξ < x < 1 − ξ, and no integral diverges because the ultraviolet limit κ → ∞ has already been regularized. Thus we set the regularization parameters to zero in the integrand, G(x, κ⊥, t, t_0; 0, 0) = G(x, κ⊥, t, t_0; t_r, ξ)|_II, and apply Eq. (E17) of [15]. The integrals over x are then easily evaluated, and only the divergent and constant terms in the limit ξ → 0 are kept.
• Region I is more involved because the invariant masses diverge even in the limit ξ → 0. Nevertheless, since x < ξ, it is enough to factorize the poles at x = 0 and expand the remaining terms around this point to obtain the most strongly divergent results; a typical integral to evaluate is of the form of Eq. (53). The cutoff t_r usually appears added to t or t_0 in form factors, f_{t+t_r} and f_{t_0+t_r}; in these cases it is "muted" and can be discarded. However, special care is needed when this is not the case, as there are contributions depending on t_r in Eqs. (36) and (38). Region I is different for the triangle terms a, b and c, since the lower limit of integration over x changes from zero to x_1; in these cases the poles at x_1 are factorized instead and the remaining expressions are expanded around this point. Finally, contributions of light fermions beyond the ultraviolet counterterm already found in [14] can be evaluated with the same method by replacing the gluon mass parameter ξ with the fermion mass m_f, provided the scales set by the parameters t and t_0 are much greater than m_f.
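The pole-factorization step can be illustrated on a toy integrand with the same structure — a smooth function times a pole regulated by a small mass parameter. Here ξ plays the role of the gluon-mass regulator; the integrand, the midpoint rule and the numerical values are illustrative assumptions, not the paper's actual integrals:

```python
import math

XI = 1e-2  # stand-in for the gluon-mass regulator

def integrand(x):
    """Toy stand-in for G: a smooth function times a regulated pole at x = 0."""
    return math.exp(-x) / (x + XI)

def midpoint(f, a, b, n=100000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

full = midpoint(integrand, 0.0, 1.0)

# Factorize the pole: exp(-x)/(x+xi) = 1/(x+xi) + (exp(-x) - 1)/(x+xi).
# The first piece carries the whole log(xi) divergence and integrates analytically;
# the remainder stays finite as xi -> 0.
pole = math.log((1.0 + XI) / XI)
finite = midpoint(lambda x: (math.exp(-x) - 1.0) / (x + XI), 0.0, 1.0)
print(full, pole + finite)
```

Subtracting the pure pole isolates the logarithmic divergence analytically and leaves a remainder that is finite as ξ → 0, which is exactly how the region-I expansions are organized.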
Figure 1. Third-order contributions to the running coupling, including the counterterm. Terms (a)–(i) correspond to γ^(a)–γ^(i) of Eq. (29); term (j) is the third-order counterterm, corresponding to γ^(j). External effective particles are labeled 1, 2 and 3 and are represented with bold gluonic lines.
Figure 2. Diagrammatic representation of Y_21t. Letters denote configurations of particles before and after the interaction; a refers to particles 1 and 2, and b to particle 3. For more details about the notation see Ref. [9].
Figure 3. Running coupling for different values of x_0; the black line (h(x_0) = 0) represents the result obtained from the renormalization-group equations in Euclidean space. The function exhibits asymptotic freedom from x_0 = 0.5 down to values of x_0 ≈ 0.13.
Figure 4. Relevant contributions of terms a, d + f and g + i to the running coupling (terms linear in the difference t − t_0 and logarithms log(e^{γ_E} ξ^4 √(t t_0)) are not shown because they cancel in the final expression). The self-energy terms contribute to asymptotic freedom; the triangle term does not, and it dominates over the other two for low values of x_0.
• For terms d–i of Figure 1 the results of region I can also be applied to region III, because the integrals are symmetric under the change of variables y = 1 − x. For the triangle terms a–c the simplification of region I can be applied by factorizing the poles at x = 1 and expanding around this point instead of x = x_1.
Abstract: In this paper, treating the cosmological constant as a thermodynamic pressure, we study the thermodynamics and phase transitions of dyonic AdS black holes in Gauss-Bonnet-Scalar gravity, in which a conformal scalar field is considered. In a more general extended phase space, we first verify the first law of black hole thermodynamics and find that it always holds; the corresponding Smarr relation is also obtained. We then find that this black hole exhibits interesting critical behavior in six dimensions, i.e., two swallowtails can be observed simultaneously. Interestingly, in a specific parameter space, we observe small/intermediate/large black hole phase transitions, with a triple point naturally appearing. Additionally, a small/large black hole phase transition, similar to the liquid/gas phase transition of the van der Waals fluid, can also be found in other parameter regions. Moreover, we note that the novel phase structure composed of two separate coexistence curves, discovered for dyonic AdS black holes in Einstein-Born-Infeld gravity, disappears in Gauss-Bonnet-Scalar gravity. This suggests that this novel phase structure may be tied to the underlying gravity theory and, importantly, that the triple point is a universal property of dyonic AdS black holes. On the other hand, we calculate the critical exponents near the critical points and find that they take the same values as in mean field theory. These results provide deeper insight into the thermodynamic properties of dyonic AdS black holes in the background of conformal scalar fields.
Introduction
The thermodynamics of black holes (BHs) constitutes an important research subject in the field of BH studies. In the 1970s, it was realized that a BH is a thermodynamic system with temperature and entropy [1-3]. The intriguing thermodynamic properties of BHs, which differ from those of ordinary thermodynamic systems, are gradually being discovered. In particular, Hawking and Page discovered a phase transition between stable BHs and thermal radiation, which is referred to as the Hawking-Page phase transition [4]. On the other hand, the anti-de Sitter/conformal field theory (AdS/CFT) correspondence indicates that the thermodynamics of BHs in AdS space corresponds to the thermodynamics of the dual strongly coupled conformal field theory on the boundary of AdS space [5-7]. Therefore, in the framework of the AdS/CFT correspondence, the Hawking-Page phase transition is interpreted as a confinement/deconfinement phase transition of gauge fields [8]. Motivated by this, the thermodynamics and phase transitions of BHs have been widely investigated [9-14].
Recently, increasing attention has been directed towards the thermodynamics of AdS BHs in the extended phase space. In this framework, the negative cosmological constant Λ is interpreted as the thermodynamic pressure, while its conjugate quantity is regarded as the thermodynamic volume [15-21]. Subsequently, Kubizňák and Mann compared charged AdS BHs with the van der Waals (vdW) liquid-gas system and demonstrated that they share the same oscillatory pressure-volume behavior, critical exponents and scaling relations [22]. Thus, an analogy between charged AdS BHs and vdW systems was established: the small/large BH phase transition in AdS BHs is similar to the liquid/gas phase transition in vdW fluids. This analogy was applied to different types of AdS BHs, suggesting that such small/large BH phase transitions are widespread [23-28]. Later, it was discovered that AdS BHs exhibit further interesting phase transitions in the extended phase space, such as reentrant phase transitions and triple points [29-41]. Furthermore, in the context of higher-order Lovelock gravity, there are also some intriguing BH phase transitions, such as multicritical phase transitions [42-44].
By adding higher-order curvature terms to the gravitational action, extended gravity theories have been established, including Lovelock gravity and Gauss-Bonnet (GB) gravity [45-47]. These theories have provided new insights into the study of BHs in higher-dimensional settings. For example, Wei found that charged AdS BHs in GB gravity exhibit small/intermediate/large BH phase transitions in six-dimensional spacetime, with the appearance of a triple point [31]. Frassino studied the phase transitions of charged AdS BHs in third-order Lovelock gravity and confirmed the existence of triple points and reentrant phase transitions in higher dimensions [32]. Further studies have shown that AdS BHs in extended gravity exhibit many intriguing properties [42-44,48-50]. On the other hand, quasitopological electromagnetism is an interesting and significant subject in the study of BHs [51-56]. Recently, Liu et al. introduced a novel form of quasitopological electromagnetism, defined through the square of the norm of the topological wedge product of the Maxwell field strength of order k (k > 2) [51]. Subsequently, Li investigated the phase transitions of dyonic AdS BHs in Einstein-Born-Infeld (EBI) gravity and obtained some interesting results, such as a triple point and a novel phase structure composed of two separate coexistence curves [57]. Moreover, we also revealed intriguing thermodynamic properties of dyonic AdS BHs in Einstein-Gauss-Bonnet (EGB) gravity and observed the triple point [58].
Recently, a class of higher-dimensional dyonic BH solutions, including dyonic BHs in Gauss-Bonnet-Scalar (GBS) gravity, was derived by coupling Lovelock-Scalar gravity to quasitopological electromagnetism [59]. The conformal scalar field is very important for BHs and is also quite interesting. Oliva et al. developed a model for gravity coupled to a real scalar field [60,61], and, based on this model, higher-dimensional BHs with scalar hair have been studied [62-66]. It is believed that the conformal scalar field can affect the thermodynamic stability of hairy BHs [62,63]. Interestingly, one also finds that the conformal scalar field has an impact on the local stability of BHs [59]; specifically, the range of horizon radii for stable BHs decreases as the conformal scalar field parameter H increases. Clearly, these studies indicate that BHs influenced by a conformal scalar field exhibit many intriguing thermodynamic properties worthy of further investigation. In addition, although dyonic AdS BHs in EBI gravity and EGB gravity exhibit rich phase transitions, such as triple points [57,58], it is still unclear whether these intriguing phase transitions also exist in GBS gravity when the conformal scalar field is taken into account. Therefore, in this paper, we study the thermodynamics and phase transitions of dyonic BHs in GBS gravity by considering the effect of the conformal scalar field in the extended phase space. We aim to further reveal the interesting thermodynamic properties of dyonic BHs in GBS gravity and provide insight into the influence of the conformal scalar field on dyonic AdS BHs.
This paper is organized as follows. In Section 2, we review the dyonic BHs in GBS gravity with the conformal scalar field. In Section 3, we study the thermodynamics of the dyonic BHs in the extended phase space. In Section 4, we investigate the phase transitions and phase diagrams of the dyonic AdS BHs in the presence of the conformal scalar field. In Section 5, we compute the critical exponents near the critical points. Finally, Section 6 concludes with a summary and discussion.
Review of the Dyonic BHs
The action of high-curvature gravity coupled to the conformal scalar field and matter sources can be expressed as in [59], where L_qt represents the Lagrangian density of matter, a_p is a Lovelock coupling constant, and b_p is a conformal coupling constant. Here, ξ describes the scalar field [60,61]. As a particular class of Lovelock gravity, dimensionally continued gravity is obtained under the assumption of Equation (2) [67-69]. It should be noted that l is related to the cosmological constant, i.e., Λ = −(d−1)(d−2)/(2l²). Meanwhile, n is associated with the dimension d, where d = 2n + 1 for odd dimensions and d = 2n + 2 for even dimensions. The gravitational field equations in Equation (1) are determined by the action principle, where ς is the energy-momentum tensor corresponding to the matter source. One should note that I_qt represents the action associated with matter. The energy-momentum tensor of the conformal scalar field follows, and, based on the action principle, the scalar field equation is constructed. Moreover, the Lagrangian density of the matter source contains a coupling parameter η and an interaction term L_int, with the corresponding energy-momentum tensor given by Equation (9). To construct the hairy dyonic BH solutions, one applies condition (2) and the static, spherically symmetric line element, where dΥ²_{d−2} is the metric of the (d−2)-dimensional hypersurface of curvature (d−2)(d−3)γ. The magnetic field on this (d−2)-dimensional hypersurface involves q, which is related to the magnetic charge, and Σ, the volume of the (d−2)-dimensional hypersurface. For the purely electric case, the corresponding Maxwell tensor is obtained, where a prime denotes differentiation with respect to r. Based on Equations (12) and (13), an equation for the gauge field follows; integrating it gives Equation (15). It should be noted
that the constant of integration Q in the above equation is related to the electric charge. The configuration of the scalar field can be defined such that, when conditions (17) and (18) are satisfied, ξ(r) is a solution of Equation (6). There is an unknown X in Equations (17) and (18), so one of these two equations must constitute a constraint on the constants b_p. Therefore, by considering the gravitational field equations given in Equation (3), choosing the Lovelock parameters a_p arbitrarily, and using the scalar field ξ(r) subject to the constraints of Equations (17) and (18) together with the energy-momentum tensors (9) and (15), an independent equation of motion can be obtained, with the conditions p ≥ 2. Solving Equation (19), one obtains the metric function, in which a parameter related to the conformal scalar field appears. Therefore, by setting α_{0,1,2} ≠ 0 and α_p = 0 for p ≥ 3 in Equation (21), spherical (γ = 1) dyonic BHs including the effect of the conformal scalar field are obtained in GBS gravity [59], where the solution involves a hypergeometric function. Here, M is the BH mass, H is the parameter related to the conformal scalar field, Σ is the volume of the (d−2)-dimensional hypersurface, and l corresponds to the negative cosmological constant through Λ = −(d−1)(d−2)/(2l²). Moreover, Q and q represent the electric and magnetic charges of the BH, respectively.
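The GBS metric function f(r) above does not survive extraction, but the procedure used in the next section — fix the mass by the horizon condition f(r_h) = 0, then read off T = f′(r_h)/4π — can be sketched on the much simpler four-dimensional Reissner-Nordström-AdS metric, f(r) = 1 − 2M/r + Q²/r² + r²/l². This is an illustrative stand-in with assumed parameter values, not the GBS solution:

```python
import math

def temperature_closed_form(rh, Q=0.5, l=1.0):
    """T(r_h) after eliminating M via f(r_h) = 0 (RN-AdS, d = 4)."""
    return (1.0 / rh - Q ** 2 / rh ** 3 + 3.0 * rh / l ** 2) / (4.0 * math.pi)

def temperature_from_metric(rh, Q=0.5, l=1.0, eps=1e-6):
    """T = f'(r_h)/(4 pi), with the mass M fixed by the horizon condition f(r_h) = 0."""
    M = 0.5 * (rh + Q ** 2 / rh + rh ** 3 / l ** 2)          # from f(rh) = 0
    f = lambda r: 1.0 - 2.0 * M / r + Q ** 2 / r ** 2 + r ** 2 / l ** 2
    return (f(rh + eps) - f(rh - eps)) / (2.0 * eps * 4.0 * math.pi)
```

The same elimination of M through f(r_h) = 0 is what produces the temperature expression, Equation (26), for the GBS black hole.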
Thermodynamics of the Dyonic BHs
In this section, we study the thermodynamics of the dyonic BHs in the extended phase space, where the conformal scalar field is considered. In this case, the negative cosmological constant Λ is regarded as the thermodynamic pressure P = −Λ/(8π) [15]. Furthermore, by solving the equation f(r_h) = 0, the outer horizon radius r_h of the BH can be obtained; it is determined by the largest root of this equation. Thus, we can express the BH mass in terms of the horizon radius r_h. Based on the definition of the Hawking temperature, T = f′(r_h)/(4π), the BH temperature can be determined. Furthermore, in the extended phase space, the BH mass should be regarded as the enthalpy rather than the internal energy, i.e., H ≡ M. Therefore, we can calculate the other thermodynamic quantities of the BH, such as the thermodynamic volume V, entropy S, electric potential Φ_Q, and magnetic potential Φ_q. Considering the characteristics of the parameters H, α_2, and η, we regard them as new thermodynamic variables. It can then be verified that these thermodynamic quantities satisfy a differential first law involving the conjugate quantities to H, α_2, and η, and the corresponding Smarr relation can also be obtained. Through the above discussion, we find that both the first law of BH thermodynamics and the Smarr relation hold in a more general extended phase space in which H is considered as a new thermodynamic variable.
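The explicit differential form and its conjugates are lost in this extract. Schematically, and with hypothetical symbols Γ_X for the conjugate quantities (a hedged reconstruction of the structure of the first law, not the paper's exact expressions):

```latex
dM \;=\; T\,dS \;+\; V\,dP \;+\; \Phi_Q\,dQ \;+\; \Phi_q\,dq
      \;+\; \Gamma_H\,dH \;+\; \Gamma_{\alpha_2}\,d\alpha_2 \;+\; \Gamma_{\eta}\,d\eta ,
\qquad
\Gamma_X \;\equiv\; \left(\frac{\partial M}{\partial X}\right)_{\text{other variables fixed}} .
```

The Smarr relation then follows from the Euler scaling argument applied to M as a homogeneous function of these extended variables.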
The Gibbs free energy, a quantity describing the global stability of a BH system, is given by G = H − TS. The appearance of a swallowtail in a Gibbs free energy-temperature (G − T) diagram indicates the occurrence of a BH phase transition. Therefore, we study BH phase transitions by analyzing the swallowtails observed in the G − T diagrams.
Phase Transitions and Phase Diagrams of the Dyonic BHs
In this section, we study the phase transitions and phase diagrams of the dyonic AdS BHs in GBS gravity, focusing on the BH phase transitions in six-dimensional spacetime. Naturally, based on the temperature Equation (26), the equation of state of the BH can be obtained. Moreover, with the thermodynamic volume V ∝ r_h^{d−1}, the critical point is determined by the conditions (∂P/∂V)_T = 0 and (∂²P/∂V²)_T = 0. As is well known, the local thermodynamic stability of BHs is measured by the heat capacity at constant pressure, C_P: when C_P is positive, the system is locally stable; when C_P is negative, it is locally unstable. For constant pressure P, the heat capacity is defined as C_P = T(∂S/∂T)_P; substituting the thermodynamic quantities, C_P can be expressed explicitly. In this paper, we impose the conditions T > 0 and S > 0. Thus, in the T − r_h diagram, BH branches with positive and negative slopes correspond to stable and unstable phases, respectively, with positive and negative values of C_P.
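Since the explicit GBS equation of state, Equation (36), is not reproduced here, the critical-point conditions (∂P/∂V)_T = (∂²P/∂V²)_T = 0 can be illustrated on the van der Waals fluid P = T/(v − b) − a/v², whose critical values are known in closed form: v_c = 3b, T_c = 8a/(27b), P_c = a/(27b²). The numerical scheme below is an illustrative sketch with assumed parameter values, not the paper's computation:

```python
a, b = 1.0, 0.1  # van der Waals parameters (illustrative values)

def T_spinodal1(v):
    """(dP/dv)_T = 0  =>  T = 2a(v-b)^2 / v^3."""
    return 2.0 * a * (v - b) ** 2 / v ** 3

def T_spinodal2(v):
    """(d^2P/dv^2)_T = 0  =>  T = 3a(v-b)^3 / v^4."""
    return 3.0 * a * (v - b) ** 3 / v ** 4

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Both conditions hold simultaneously at the critical point.
vc = bisect(lambda v: T_spinodal1(v) - T_spinodal2(v), 1.5 * b, 10.0 * b)
Tc = T_spinodal1(vc)
Pc = Tc / (vc - b) - a / vc ** 2
```

The scheme reproduces the closed-form values; applied to Equation (36), the same two conditions yield the critical points quoted in the following subsections.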
Next, we proceed to study the BH phase transitions and phase diagrams, with a particular focus on the triple point, as well as on the effect of the conformal scalar field on the BH phase transitions. Therefore, in this paper, we set the electric charge Q = 10, the magnetic charge q = 5, and the parameter Σ = 1, and vary the conformal scalar field parameter H and the coupling parameters α_2 and η to investigate the phase transitions and phase diagrams of the dyonic BHs.
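The stability criterion stated above (positive T − r_h slope ⇔ C_P > 0, since C_P = (dM/dr_h)/(dT/dr_h) and the mass grows with r_h) can be checked on a simple example. We use the four-dimensional Schwarzschild-AdS temperature as an illustrative stand-in for Equation (26), with an assumed AdS radius l = 1:

```python
import math

def hawking_T(rh, l=1.0):
    """4d Schwarzschild-AdS temperature (illustrative stand-in for Eq. (26))."""
    return (1.0 / rh + 3.0 * rh / l ** 2) / (4.0 * math.pi)

def dT_drh(rh, l=1.0, eps=1e-6):
    """Numerical slope of the isobaric T - r_h curve; its sign is the sign of C_P."""
    return (hawking_T(rh + eps, l) - hawking_T(rh - eps, l)) / (2.0 * eps)

r_turn = 1.0 / math.sqrt(3.0)  # the slope (and hence C_P) changes sign here
print(dT_drh(0.3), dT_drh(1.0))  # small branch: negative slope; large branch: positive slope
```

Small black holes (r_h < r_turn) sit on the negative-slope branch and are locally unstable, while large ones are stable — the same reading applied to the isobaric curves in the figures below.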
Phase Transitions by Fixing H and η While Varying α_2
In this subsection, we study the BH phase transitions and phase diagrams by fixing the conformal scalar field parameter H = 0.01 and the coupling parameter η = 0.01, while varying α_2 over the values 1, 5, and 10.
α_2 = 1
In this case, according to Equations (37) and (38), a critical point can be determined. The behaviors of the temperature T with respect to the horizon radius r_h and of the Gibbs free energy G with respect to the temperature T are plotted in Figure 1a,b, respectively; isobaric curves of the same color in (a) and (b) correspond to the same pressure. In Figure 1a, when P < P_c, two extremal points appear on the isobaric curves (red and green), dividing each of them into three branches: the stable small BH branch, the unstable intermediate BH branch, and the stable large BH branch. Stable branches are represented by solid curves with positive C_P, while unstable branches are represented by dashed curves with negative C_P. When P > P_c, there is no extremal point on the isobaric curve, and T increases monotonically with r_h. Now let us focus on the behavior of the Gibbs free energy in Figure 1b. For P < P_c, a swallowtail appears on each isobaric curve, signaling a first-order small/large BH phase transition. Note that the nonsmooth points on the isobaric curves in the G − T diagram correspond to the extremal points on the isobaric curves in the T − r_h diagram. For the red and green isobaric curves, the system initially lies in the small BH phase and turns into the large BH phase near the intersection as the temperature increases. Furthermore, comparing the red and green isobaric curves, the size of the swallowtail decreases as the pressure increases, and when the pressure reaches P_c the swallowtail disappears. For P > P_c, the Gibbs free energy decreases monotonically with temperature, indicating that no phase transition occurs in the system. The phase diagram for the dyonic BHs is shown in Figure 1c. It can be observed that the coexistence pressure increases monotonically with temperature and terminates at the
critical point (P_c, T_c). The region of small BHs is located above the coexistence curve, while the region of large BHs lies below it. This is a typical small/large BH phase transition, similar to the vdW liquid/gas phase transition.
Firstly, we focus on the behavior of the temperature, as shown in Figure 2a. When P < P_c1, two extremal points appear on the blue isobaric curve, dividing it into three branches: the stable small BH branch, the unstable intermediate BH branch, and the stable large BH branch. Interestingly, when P_c1 < P < P_c2, four extremal points emerge on the red isobaric curve, dividing it into five branches: the stable small BH branch, the unstable small BH branch, the stable intermediate BH branch, the unstable large BH branch, and the stable large BH branch. For P = P_t = 0.00337487, it is straightforward to use Maxwell's equal-area law to construct two pairs of equal-area regions in the T − S diagram, as illustrated in Figure 2b. These two pairs of regions occur at the same temperature, T = T_t = 0.0485154, which implies that the BH undergoes two phase transitions simultaneously at this pressure and temperature. This result indicates the existence of a triple point, where the small, intermediate, and large BH phases coexist. When P increases to P_c2, the BH system undergoes a second-order phase transition. For P_c2 < P < P_c3, two extremal points on the orange isobaric curve divide it into three branches: the stable small BH branch, the unstable intermediate BH branch, and the stable intermediate BH branch. Moreover, the first-order phase transition turns into a second-order one as P approaches P_c3. When P > P_c3, the temperature T increases monotonically with r_h, which implies that there is only one BH branch. We have plotted the behavior of the Gibbs free energy G with respect to the temperature T in Figure 3. When P < P_c1, a swallowtail appears in the G − T diagram, indicating a small/large BH phase transition. As the pressure increases to P_c1 < P < P_t, two swallowtails appear in Figure 3b, suggesting the potential existence of two BH phase transitions. However, the intermediate BH branch is suppressed by the lower free energy branch
and does not participate in the phase transition; in this case only the small/large BH phase transition occurs. Increasing the pressure to P = P_t, the intersection points of the two swallowtails coincide, which signals the small/intermediate/large BH phase transitions. For P_t < P < P_c2, two swallowtails can be observed and all BH branches can participate in the phase transitions, so the system can undergo small/intermediate and intermediate/large BH phase transitions simultaneously. When the pressure is increased to P_c2 < P < P_c3, a single swallowtail is present, implying only one small/intermediate BH phase transition. When P > P_c3, the Gibbs free energy decreases monotonically with temperature, indicating that no phase transition occurs in the system.
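The equal-area construction invoked above can be sketched numerically. Since the GBS isotherms are not reproduced here, we apply it to the reduced van der Waals equation of state P = 8T/(3v − 1) − 3/v² at T = 0.9, an illustrative assumption; the paper performs the analogous construction in the T − S plane:

```python
import math

T = 0.9  # reduced temperature, an illustrative choice below the critical value T = 1

def P(v):
    """Reduced van der Waals equation of state."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v ** 2

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Spinodal points: dP/dv = 0  <=>  (3v - 1)^2 - 4 T v^3 = 0.
spin = lambda v: (3.0 * v - 1.0) ** 2 - 4.0 * T * v ** 3
v_sp1 = bisect(spin, 0.5, 1.0)   # local minimum of P(v)
v_sp2 = bisect(spin, 1.0, 3.0)   # local maximum of P(v)

def antider(v):
    """Antiderivative of P(v): integral of P dv = (8T/3) ln(3v - 1) + 3/v."""
    return (8.0 * T / 3.0) * math.log(3.0 * v - 1.0) + 3.0 / v

def area_mismatch(p):
    """Integral of P dv minus p (v_g - v_l); Maxwell's construction sets this to zero."""
    v_l = bisect(lambda v: P(v) - p, 1.0 / 3.0 + 1e-6, v_sp1)  # liquid-like root
    v_g = bisect(lambda v: P(v) - p, v_sp2, 50.0)              # gas-like root
    return antider(v_g) - antider(v_l) - p * (v_g - v_l)

p_sat = bisect(area_mismatch, P(v_sp1) + 1e-6, P(v_sp2) - 1e-6)
print(p_sat)  # coexistence (saturation) pressure at T = 0.9
```

Repeating this at each temperature below the critical one traces out the coexistence curve, which is how phase diagrams like Figure 4 are assembled.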
The phase diagram is illustrated in Figure 4, where Figure 4b is a local magnification near the triple point. As can be seen from the phase diagram, the system undergoes the small/large BH phase transition at P < P_t. Interestingly, at the triple point (P_t, T_t), the small, intermediate, and large BH phases coexist, and the system undergoes the small/intermediate/large BH phase transitions. When P_t < P < P_c2, the small/intermediate and intermediate/large BH phase transitions appear simultaneously. When the pressure increases to P_c2 < P < P_c3, the system undergoes only the small/intermediate BH phase transition. In summary, these results suggest the presence of a rich variety of phase transition types in this parameter region.
Firstly, we would like to analyze the behavior of temperature T with respect to r_h, as shown in Figure 5. When P < P_c1, two extremal points appear on the blue isobaric curve, which divide it into three branches: the stable small BH branch, the unstable intermediate BH branch, and the stable large BH branch. As the pressure increases to P_c1 < P < P_c2, the red isobaric curve is divided into five branches by four extremal points: the stable small BH branch, the unstable small BH branch, the stable intermediate BH branch, the unstable large BH branch, and the stable large BH branch. For P_c2 < P < P_c3, there are two extremal points on the orange isobaric curve, which suggests the existence of three BH branches. When the pressure increases to P > P_c3, the temperature increases monotonically with r_h, which implies that there is only one BH branch.

Next, let us analyze the behavior of the Gibbs free energy G with respect to T. When P < P_c1, the appearance of a swallowtail in Figure 6a indicates the small/large BH phase transition. When the pressure increases to P_c1 < P < P_c2, two swallowtails can be observed in Figure 6b. However, the intermediate BH branch is suppressed by the BH branches with lower Gibbs free energy and does not participate in the phase transition. Therefore, in this pressure range, only the small/large BH phase transition occurs. For P_c2 < P < P_c3, the presence of a swallowtail indicates the occurrence of the small/large BH phase transition. When P > P_c3, the monotonic decrease in the Gibbs free energy suggests that no phase transition occurs in the system.

Finally, the phase diagram is illustrated as shown in Figure 7. The coexistence curve originates from the origin and terminates at the critical point (P_c3, T_c3). From the phase diagram, it is evident that while there are three critical points in this parameter region, only the small/large BH phase transition occurs.
Phase Transitions by Fixing H and α_2 While Varying η
In this subsection, by setting the conformal scalar field parameter H = 0.01 and the coupling parameter α_2 = 6, while varying η as 0.01, 0.1, and 5, we study the BH phase transitions and phase diagrams.
In this case, three critical points can be obtained, among which T_c2 = 0.0455153 and P_c2 = 0.00317372 [Eq. (49)]. We have plotted the behavior of temperature T with respect to r_h, as shown in Figure 8a. For P_c1 < P < P_c2, there are four extremal points on the isobaric curve, which suggests the existence of five BH branches. When P = P_t = 0.00292831, we have utilized the Maxwell equal area law to construct two pairs of equal area regions with the same temperature T = T_t = 0.0447573, as shown in Figure 8b. In fact, this predicts the existence of a triple point. Therefore, we plot the behavior of the Gibbs free energy with respect to temperature, as shown in Figure 9. As anticipated, the three BH branches intersect at a point when P = P_t, which indicates the occurrence of the small/intermediate/large BH phase transitions. Finally, the phase diagram is plotted in Figure 10. In the phase diagram, the triple point where small, intermediate, and large BHs coexist can be observed.

4.2.2. η = 0.1

As in the previous case, three critical points can be obtained, which include T_c2 = 0.0455649, P_c2 = 0.00319072, T_c3 = 0.0463009, and P_c3 = 0.00429228.
The T-r_h diagram is illustrated in Figure 11, where four extremal points appear on the red isobaric curve at P = 0.00316. However, the intermediate BH is suppressed and does not participate in phase transitions, as shown in Figure 12b. This indicates that the system only undergoes the small/large BH phase transition, similar to the case discussed in Section 4.1.3. Finally, the phase diagram, as illustrated in Figure 13, indicates that only the small/large BH phase transition occurs in this parameter region.
In Figure 14, there exists an isobaric curve with four extremal points, which seems to indicate complex phase transition behavior. However, by analyzing the behavior of the Gibbs free energy in Figure 15, it can be found that only the small/large BH phase transition occurs, which is similar to the previous case. The phase diagram in Figure 16 also supports our analysis, indicating that the small/large BH phase transition occurs in this parameter region.
The behavior of temperature T with respect to r_h is shown in Figure 17a. For P_c1 < P < P_c2, four extremal points appear on each isobaric curve, which indicates a rich variety of phase transitions. When P = P_t = 0.00342036, we have utilized the Maxwell equal area law to construct two pairs of equal area regions, as shown in Figure 17b. Then, we plotted the behavior of the Gibbs free energy in Figure 18. In particular, when P = P_t = 0.00342036, it can be discovered that the three BH branches intersect at one point, which suggests the occurrence of the small/intermediate/large BH phase transitions. Finally, in the phase diagram shown in Figure 19, a triple point can be observed where the small, intermediate, and large BH phases coexist.
The behaviors of temperature and the Gibbs free energy are plotted in Figures 20 and 21. After careful analysis, it can be determined that only the small/large BH phase transition occurs in this parameter region. Finally, the phase diagram in Figure 22 further shows this small/large BH phase transition.

T vs. r_h, where the blue, purple, red, black, orange, green, and gray curves represent P = 0.003, 0.00326926, 0.0035, 0.00383222, 0.008, 0.0161884, and 0.02, respectively. The solid and dashed curves represent stable and unstable branches, respectively.
Critical Exponents
It is widely believed that the critical exponents offer a valuable method for describing the behavior of physical quantities near the critical point, and they do not depend on the details of the physical system. Therefore, in this section we would like to calculate the critical exponents in the vicinity of the critical points.
For convenience, we start by defining some reduced parameters: τ = T/T_c is the reduced thermodynamic temperature, ν = V/V_c is the reduced thermodynamic volume, and p = P/P_c is the reduced thermodynamic pressure.
Next, let us review the definitions of the critical exponents α, β, γ, and δ near the critical point [22]:
(1) Exponent α determines the behavior of the specific heat at constant volume, C_V ∝ |τ - 1|^{-α};
(2) Exponent β describes the behavior of the order parameter η_1 = V_l - V_s (the difference between the volumes of the coexisting large and small BHs) on a given isotherm, η_1 ∝ |τ - 1|^β;
(3) Exponent γ governs the behavior of the isothermal compressibility, κ_T ∝ |τ - 1|^{-γ};
(4) Exponent δ reflects the behavior on the critical isotherm T = T_c, |p - 1| ∝ |ν - 1|^δ.

Subsequently, we proceed to compute these critical exponents. From Equation (28), it can be found that the entropy S is independent of T, which leads to C_V = T(∂S/∂T)_V = 0. Thus, the critical exponent is α = 0. In addition, by substituting the reduced parameters introduced in Equations (63)-(65) into Equation (37), we can calculate the corresponding equation of state. Therefore, it is easy to express the reduced pressure near the critical point as the expansion in Equation (71). The values of the expanded coefficients in Equation (71) for different parameters were calculated, and the corresponding results are listed in Table 1. As shown in Table 1, the coefficient A_0 = 1, the coefficient B_0 is positive, and the coefficients A_3 and B_1 are negative. Moreover, the coefficients A_1 and A_2 are absent in this BH system, and thus they have not been listed in the table. Therefore, with t = τ - 1 and ω = ν - 1, the reduced pressure can be re-expressed as p = 1 + B_0 t + B_1 t ω + A_3 ω^3.

In the preceding sections, we established the first law of BH thermodynamics and derived the corresponding Smarr relation in a more general extended phase space. We then studied the BH phase transitions by analyzing the characteristic behaviors of the temperature and the Gibbs free energy in six-dimensional spacetime.
On the other hand, to analyze the effect of the conformal scalar field on the BH phase transition, we fix and vary the values of the conformal scalar field parameter H and the coupling parameters α_2 and η. We first consider the case where H = 0.01 and η = 0.01, while α_2 takes the values 1, 5, and 10. For α_2 = 1, we observed a typical small/large BH phase transition, which is similar to the vdW liquid/gas phase transition. When α_2 = 5, we discovered four extremal points on the isobaric curve in the T-r_h diagram, as well as two swallowtails in the G-T diagram. This indicates a rich variety of phase transitions beyond the small/large BH phase transition. As a result, the small/intermediate/large BH phase transitions can be found in this case. Additionally, the triple point where the small, intermediate, and large BHs can coexist is obtained, i.e., (P_t = 0.00337487, T_t = 0.0485154). For α_2 = 10, although there are three critical points, the intermediate BH branch has a higher free energy and is suppressed by the lower-free-energy BH branch, so only the small/large BH phase transition can be discovered. Then, we fixed H = 0.01 and α_2 = 6 while varying the parameter η to study the phase transition of BHs. When η = 0.01, we observed the small/intermediate/large BH phase transitions, as well as the triple point, which is located at (P_t = 0.00292831, T_t = 0.0447573). For the cases of η = 0.1 and 5, which are similar to the case of H = 0.01, η = 0.01, and α_2 = 10, only the small/large BH phase transition occurs. Finally, we set α_2 = 5 and η = 0.01 and varied the parameter H to study the BH phase transition. For H = 0.1, the small/intermediate/large BH phase transitions can be found. As expected, a triple point appears, which is located at (P_t = 0.00342036, T_t = 0.0486811). For H = 1, only the small/large BH phase transition can be observed.

Based on the above discussions, it can be found that the conformal scalar field has a significant impact on the BH thermodynamics and phase transitions. On the other hand, we also note that the novel phase structure composed of two separate coexistence curves, discovered in EBI gravity [57], is absent in GBS gravity. This is consistent with the results obtained in EGB gravity [58], further suggesting that this novel phase structure is related to the gravity theory. In fact, these results also demonstrate that the triple point, where small, intermediate, and large BHs can coexist, is a universal feature of dyonic AdS BHs.
Moreover, we calculated the critical exponents near the critical points and obtained α = 0, β = 1/2, γ = 1, and δ = 3. This implies that these critical exponents share the same values as in mean field theory and are consistent with those obtained in other BH systems. Finally, it can be summarized that our conclusions provide important insights for a deep understanding of the intriguing thermodynamic properties of the dyonic AdS BHs in GBS gravity.
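As a quick sanity check (not taken from the paper itself), the quoted mean-field values can be verified against the standard thermodynamic scaling identities that any consistent set of critical exponents must satisfy:

```python
# Check that alpha = 0, beta = 1/2, gamma = 1, delta = 3 satisfy the
# standard scaling identities obeyed by mean-field critical exponents.
alpha, beta, gamma, delta = 0.0, 0.5, 1.0, 3.0

assert alpha + 2 * beta + gamma == 2.0        # Rushbrooke identity
assert gamma == beta * (delta - 1)            # Widom identity
assert alpha + beta * (delta + 1) == 2.0      # Griffiths identity
print("scaling relations satisfied")
```

The three identities holding simultaneously is consistent with the statement that the exponents coincide with those of mean field theory.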
In addition, holographic duality provides a valuable method for the study of the thermodynamic properties of AdS BHs, i.e., it reveals these characteristics from an alternative perspective [70,71]. Therefore, it is worth utilizing holographic duality to further investigate the thermodynamics of these dyonic AdS BHs, which may reveal more intriguing thermodynamic properties. This will also be a part of our future work.
Figure 2. (a) T vs. r_h. The blue, red, black, orange, green, and gray curves correspond to P = 0.0024, 0.00337487, 0.00374402, 0.004, 0.0044492, and 0.005, respectively. Solid curves indicate stable branches, while dashed curves represent unstable branches. (b) T-S diagram of two pairs of equal area regions at pressure P = P_t = 0.00337487. The horizontal line has a temperature T = T_t = 0.0485154.
Figure 6. G vs. T. The red, green, and orange curves represent the small, intermediate, and large BHs, while the dashed curves indicate unstable BHs.
Figure 8. (a) T vs. r_h. The blue, red, black, orange, green, and gray curves correspond to P = 0.002, 0.00292831, 0.00317372, 0.004, 0.00583728, and 0.007, respectively. The solid and dashed curves represent stable and unstable branches, respectively. (b) T-S diagram of two pairs of equal area regions at a pressure of P = P_t = 0.00292831. The horizontal line has a temperature of T = T_t = 0.0447573.
Figure 15. G vs. T. The red, green, and orange curves represent the small, intermediate, and large BHs, while the dashed curves indicate unstable BHs.
Figure 17. (a) T vs. r_h. The blue, red, black, orange, green, and gray curves correspond to P = 0.0025, 0.00342036, 0.00375132, 0.0043, 0.00494838, and 0.0055, respectively. The solid and dashed curves represent stable and unstable branches, respectively. (b) T-S diagram of two pairs of equal area regions at pressure P = P_t = 0.00342036. The horizontal line has a temperature T = T_t = 0.0486811.
Author Contributions:
Conceptualization, P.M. and G.L.; methodology, P.M.; software, P.M.; validation, P.M., Z.Y. and G.L.; formal analysis, P.M.; investigation, P.M.; resources, P.M.; data curation, P.M.; writing-original draft preparation, P.M.; writing-review and editing, P.M., Z.Y. and G.L.; visualization, P.M.; supervision, Z.Y. and G.L.; project administration, Z.Y. and G.L.; funding acquisition, Z.Y. and G.L. All authors have read and agreed to the published version of the manuscript.

Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 11903025), by the starting fund of the China West Normal University (Grant No. 18Q062), by the Sichuan Science and Technology Program (2023ZYD0023), by the Sichuan Youth Science and Technology Innovation Research Team (21CXTD0038), and by the Natural Science Foundation of Sichuan Province (2022NSFSC1833).
δ^{µ_1ν_1···µ_pν_p}_{σ_1λ_1···σ_pλ_p} is the generalized Kronecker delta and R^{σλ}_{µν} represents the components of the Riemann tensor, in terms of which the tensor S^{σλ}_{µν} is defined.

Figure 18. G vs. T. The red, green, and orange curves represent the small, intermediate, and large BHs, while the dashed curves indicate unstable BHs.
Table 1. The values of the expanded coefficients in Equation (71) for different parameters.
"year": 2024,
"sha1": "75e62836d537df3580b0a0242b1a3e4b1e685db2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-1997/10/2/87/pdf?version=1707731586",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "530e12263c491472074464f081d47a0692aea1e2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
Magnetostatic modes supported by a ferromagnetic sphere have been known as the Walker modes, each of which possesses an orbital angular momentum as well as a spin angular momentum along a static magnetic field. The Walker modes with non-zero orbital angular momenta exhibit topologically non-trivial spin textures, which we call \textit{magnetic quasi-vortices}. Photons in optical whispering gallery modes supported by a dielectric sphere possess orbital and spin angular momenta forming \textit{optical vortices}. Within a ferromagnetic, as well as dielectric, sphere, two forms of vortices interact in the process of Brillouin light scattering. We argue that in the scattering there is a selection rule that dictates the exchange of orbital angular momenta between the vortices. The selection rule is shown to be responsible for the experimentally observed nonreciprocal Brillouin light scattering.
I. INTRODUCTION
The coupling between electron spins in solids and light is in general very weak. This is because the coupling is inevitably mediated by the orbital degree of the electrons and is realized through spin-orbit interaction for orbits and spins and electric-dipole interaction for orbits and light, respectively 1 . Although it is possible to coherently (non-thermally) manipulate collective excitations of spins in spin-ordered materials by means of ultrafast optics, where the electric field density of an optical pulse is high both temporally and spatially [2][3][4][5] , an attempt to realize coherent optical manipulation of magnons in the quantum regime is hindered by the weakness of the spin-light coupling 6 . Given the encouraging development of circuit quantum magnonics, where microwave photons and magnons are strongly coupled, enabling a coherent energy exchange at the single-quantum level 7-9 , the similar energy exchange between optical photons and magnons has been anticipated.
To overcome the weakness of the spin-light interaction, cavity optomagnonics has been investigated [10][11][12][13][14][15][16] . In cavity optomagnonics, the density of states of optical modes are engineered with an optical cavity to enhance spinlight interaction. In particular, spheres of ferromagnetic insulators supporting whispering gallery modes (WGMs) for photons and a spatially uniform magnetostatic mode, called the Kittel mode, for magnons are used as a platform of the cavity optomagnonics. With spheres made of typical ferromagnetic insulator, yttrium iron garnet (YIG), the pronounced sideband asymmetry [11][12][13] , the nonreciprocity 11 , and the resonant enhancement 12,13 of magnon-induced Brillouin scattering have been demonstrated.
In this context, it is interesting to examine the behavior of magnetostatic modes beyond the simplest Kittel mode. The magnetostatic modes residing in a ferromagnetic sphere under a uniform static magnetic field are known as the Walker modes 17,18 . They exhibit, in general, topologically non-trivial spin textures about the axis along the applied magnetic field and might be called magnetic quasi-vortices. The magnetic quasi-vortices can be characterized by their orbital angular momenta along the symmetry axis 19,20 . Photons in optical whispering gallery modes possess not only spin angular momenta but also orbital angular momenta, too, which echoes the concept known as optical vortices 21 . Within the ferromagnetic sphere, the optical vortices can interact with the magnetic quasi-vortices in the course of the Brillouin light scattering. The total orbital angular momentum is then expected to be conserved as long as the symmetry axis of the WGMs coincides with that of the Walker modes, imposing a selection rule on the Brillouin scattering processes.
In this article, the Brillouin scattering hosted in a ferromagnetic sphere is theoretically investigated putting a special emphasis on the orbital angular momentum exchange between the optical vortices and the magnetic quasi-vortices. We establish a selection rule imposed by the orbital angular momentum conservation for the Brillouin scattering hosted in a ferromagnetic sphere. The experimentally observed Brillouin scattering by various Walker modes reported in the accompanying paper 22 , which reveals that the scattering is either nonreciprocal or reciprocal depending on the orbital angular momentum of the magnetic quasi-vortices, is then analyzed with the theory developed here and found to be explained well. The result would provide a new area for chiral quantum optics 23 and topological photonics 24,25 based on optical vortices and magnetic quasi-vortices.
II. ORBITAL ANGULAR MOMENTA
The schematic of the cavity optomagnonic system we investigate is shown in Fig. 1, where the Walker modes and the WGMs share the symmetry axis (z-axis) along a static magnetic field H. (In Fig. 1, the distribution of the transverse magnetization of the (4, 0, 1) Walker mode on the equatorial plane is shown as an example.) The Walker modes and the WGMs generally exhibit nonzero orbital angular momenta. In this section we analyze the orbital angular momenta of these modes.
A. Orbital angular momenta of Walker modes
The orbital angular momentum density l_z^(m_mag) of a magnon along the static magnetic field H (z-axis) can be deduced from the dependence of the transverse magnetizations, M_x(t) and M_y(t), on the azimuthal angle φ [19,20], where M_⊥(t) = [M_x^2(t) + M_y^2(t)]^{1/2}. As for the Walker mode with the index (n, m_mag, r) [17,18], the orbital angular momentum L_z^(m_mag) is given by the volume integral of l_z^(m_mag) over the entire sphere and depends on the index m_mag, that is,

L_z^(m_mag) ≈ 1 - m_mag.    (2)

While the Kittel mode [the (1, 1, 0) mode] has no orbital angular momentum, L_z^(1) = 0, the (4, 0, 1) and (3, -1, 1) modes, for instance, have L_z^(0) ≈ 1 and L_z^(-1) ≈ 2, respectively. The approximation in the last line of Eq. (2) is due to the dipolar interaction with broken axial symmetry. As the applied static magnetic field H approaches infinity, the Zeeman energy becomes dominant over the dipole interaction energy, and thus "≈" becomes "=" in Eq. (2). Note also that for the Walker modes with n = m_mag and n = m_mag + 1, Eq. (2) is exact. We call the Walker modes with non-zero L_z^(m_mag) magnetic quasi-vortices. The prefix "quasi-" emphasizes that the orbital angular momentum defined in Eq. (2) is an approximation and that magnons are quasi-particles with finite lifetimes. Figure 2 shows the spatial distributions of the transverse magnetizations for the representative Walker modes (1, 1, 0), (3, -1, 1), (3, 1, 1), and (4, 0, 1); the corresponding L_z values correspond to the winding numbers of the respective spin textures, with the magnetic field applied parallel to the z axis. The modes having non-zero L_z [e.g., (3, -1, 1) and (4, 0, 1) in Fig. 2] exhibit topologically non-trivial spin textures. Note that the orbital angular momentum L_z here plays a similar role as the winding number or the skyrmion number in other literature [26].
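The winding numbers quoted above can be checked numerically on a toy texture. The sketch below is an assumption-laden illustration: it models only the azimuthal dependence of the transverse magnetization as exp[i(1 - m_mag)φ], ignoring the radial and polar profiles of the actual Walker modes, and counts how many times the in-plane magnetization rotates along the equator:

```python
import numpy as np

def winding_number(m_mag, n_samples=4001):
    # Toy transverse magnetization with assumed azimuthal dependence
    # exp[i(1 - m_mag) * phi]; the winding number is the accumulated
    # phase around the equator divided by 2*pi.
    phi = np.linspace(0.0, 2.0 * np.pi, n_samples)
    mx = np.cos((1 - m_mag) * phi)
    my = np.sin((1 - m_mag) * phi)
    angle = np.unwrap(np.angle(mx + 1j * my))
    return int(round((angle[-1] - angle[0]) / (2.0 * np.pi)))

# The Kittel mode (m_mag = 1) carries no orbital angular momentum, while
# m_mag = 0 and m_mag = -1 wind once and twice, respectively.
print([winding_number(m) for m in (1, 0, -1)])  # [0, 1, 2]
```

The output reproduces L_z^(1) = 0, L_z^(0) = 1, and L_z^(-1) = 2, i.e., the 1 - m_mag pattern of Eq. (2) for this idealized texture.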
B. Orbital angular momentum of WGMs
The electric field of the WGM in an axially symmetric dielectric material has been extensively studied [27]. Now, for simplicity, we focus on the azimuthal mode index m, which characterizes the azimuthal profile of the electric field of the fundamental WGM. In the spherical basis, the electric field of the WGMs of the counterclockwise (CCW) orbit can be written in terms of E^(TE) and E^(TM), which correspond to the transverse electric (TE) and the transverse magnetic (TM) WGMs, respectively, where φ is the azimuthal angle; the time-dependent electric field as a whole carries the harmonic factor e^{-iωt}. For the clockwise (CW) orbit, the electric fields Ē^(TE) and Ē^(TM) can be written analogously. E_i (E_o) in Eqs. (4) and (7) shall be called the inner (outer) component of the TM mode. To see this, Fig. 3(a) shows the radial intensity distributions of the two components, |E_i|^2 and |E_o|^2 (magenta and green dotted lines), along with the intensity profiles of the transverse electric field of a TM WGM. We can see that |E_i|^2 has its maximum in the inner part of the resonator compared to |E_o|^2. The shift of the "centers of gravity" of the two components |E_i|^2 and |E_o|^2 is a manifestation of the spin-Hall effect of light [28,29], which originates from the spin-orbit coupling of light [30].
From the dependence of the electric field on φ, the orbital angular momentum L_z of the WGM, that is, of the optical vortex [21], can be straightforwardly deduced. First, let us consider the CCW orbit. As for the TE mode with the azimuthal mode index m = m_TE, since there is no spin angular momentum, the orbital angular momentum is simply L_z = m_TE. As for the TM mode with m = m_TM, however, the spin-orbit coupling of light has to be taken into account [30]. For the CW orbit, a similar argument leads to the corresponding relations, and the total angular momentum J^(CW,TM,m_TM) = -m_TM is again well-defined. Note that for the CW orbit the outer (inner) component of the TM mode is associated with σ+ (σ-), opposite to the CCW orbit. The orbital angular momenta of the WGMs can be visualized by sketching the trajectory of the head of the polarization vector of the electric fields [Fig. 3(b)]. When the mode index is 10, the orbital angular momentum reads 10, 9, and 11 for the TE, inner TM, and outer TM components, respectively.
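The angular momentum bookkeeping for the m = 10 example above can be made explicit. In this sketch the ±1 spin assignments follow the σ+/σ- association stated in the text for the CCW orbit; the dictionary layout itself is just an illustration:

```python
# CCW whispering gallery modes with azimuthal index m = 10.
# TE carries no spin; the inner TM component has L = m - 1 with sigma+
# polarization (spin +1), the outer has L = m + 1 with sigma- (spin -1).
m = 10
components = {
    "TE":       {"L": m,     "spin": 0},
    "TM inner": {"L": m - 1, "spin": +1},
    "TM outer": {"L": m + 1, "spin": -1},
}
for name, c in components.items():
    print(name, "L =", c["L"], "J =", c["L"] + c["spin"])

# All three entries share the same total angular momentum J = m = 10,
# even though the two TM orbital parts differ by two.
assert all(c["L"] + c["spin"] == m for c in components.values())
```

This makes visible why the inner and outer TM components, despite carrying L = 9 and L = 11, belong to a single mode with a well-defined total angular momentum.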
A. Magnetic Quasi-Vortices-Optical Vortices Interaction
Let us now see that the total orbital angular momentum is conserved in the Brillouin scattering process. A thorough treatment of the Brillouin scattering by magnons in WGMs can be found in Ref. [16]. In the following, we emphasize the role of orbital angular momenta in the Brillouin scattering process. The interaction Hamiltonian representing the Brillouin scattering is given in Eq. (14), where the integrand E is the energy flux density and the integral runs over infinity in time t and over the volume V of the WGM; E_1(t) = E_1 e^{-iω_1 t} and E_2*(t) = E_2* e^{iω_2 t} are the input and scattered electric fields of the WGMs, respectively. The permittivity tensor ε can be written in the Cartesian basis as in Ref. [31]. The interaction between the magnetic quasi-vortices and the optical vortices in the course of the Brillouin scattering process can be understood best in the spherical basis, in which the permittivity tensor can also be written. Here the term ε_0 ε_r M_0 in Eq. (15) has been neglected. Henceforth, the term f M_s M_z in Eq. (16) is also ignored, for it is independent of time and gives no contribution to the Brillouin scattering. Here ω_m/2π is the resonant frequency of the concerned Walker mode with the azimuthal mode index m_mag.
In the spherical basis, the time-dependent transverse magnetization is expressed through M_+ and M_-. Note that the creation (annihilation) of a magnon decreases (increases) the spin angular momentum. As we shall show, the Brillouin scattering stems from the term with M_+ (M_-), representing the Stokes (anti-Stokes) scattering associated with the creation (annihilation) of a magnon. Since the TE-to-TM or TM-to-TE transition process changes the spin angular momentum of the light, these transitions give nonzero contributions to the Brillouin scattering, given the conservation of the spin angular momentum. We shall see this more clearly in Sec. III B.
B. Selection rules
Since the interaction depends on the direction of the input field and its polarization, let us first suppose that the input field is the CW TE mode with mode index m_TE, that is, E_1(t) = E^(m_TE) e^{-i m_TE φ} e^{-iω_1 t} ê_0*. In this case the Brillouin scattering results in producing photons in the CW TM mode, as seen in the following. We can straightforwardly extend the argument to other cases, e.g., the TM mode input or the input to the CCW orbit.
With the CW TE mode as the input field, the energy flux density E in Eq. (14) reads as in Eq. (18), where the first (second) term on the right-hand side represents the Stokes (anti-Stokes) scattering. The possibility of the scattered light being the CCW WGM is excluded, given that we are concerned only with the cases relevant to the Stokes and the anti-Stokes scattering. Since the optical densities of states are modified in the presence of the WGMs, the probabilities of the scattering processes are affected by them, too. Furthermore, because of the axial symmetry of the system, the conservation of the total angular momentum is expected. The designated WGM of the Brillouin scattering can then be specified by the selection rule obtained from the conservation of the orbital angular momentum. To see this, we integrate E in Eq. (14) over the azimuthal angle φ as a part of the volume integral. The first Stokes term in Eq. (18) yields one selection rule, and the second anti-Stokes term in Eq. (18) yields another; with Eqs. (2), (11), and (13), these follow explicitly.

Next, let us briefly describe the results when the laser light is injected into the CCW TE mode. The Stokes (anti-Stokes) scattering process gives the same conditions of energy conservation, that is, the same selection rule as Eq. (24). As for the Stokes scattering, the conservation of the orbital angular momentum yields the same selection rule as Eq. (26). These selection rules regarding the orbital angular momentum are the main result of this paper. With the geometric birefringence [32][33][34] and the densities of states of WGMs, these selection rules dictate the Brillouin light scattering by Walker-mode magnons hosted in a ferromagnetic sphere as shown below. In the next section we employ the selection rules to explain the experiment reported in the accompanying paper [22], which reveals that the Walker-mode-induced Brillouin light scattering is either nonreciprocal or reciprocal depending on the orbital angular momentum of the magnon in the relevant Walker mode, that is, the magnetic quasi-vortex.
"year": 2018,
"sha1": "487569e08a930337b6f9ee6ff1ed61434d381687",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/aae4b1",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "487569e08a930337b6f9ee6ff1ed61434d381687",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
: This study aims to analyze the moderating effect of gender on the relationship between technology readiness and willingness to continue using self-service kiosks in fast-food restaurants among middle-aged and older consumers. We conducted a survey from 1 May to 30 May 2020 among 320 consumers born in or before 1980 who only used kiosks in fast-food restaurants. The findings are as follows: First, the more innovative and optimistic the consumer, the more they are willing to continue using kiosks, whereas the more discomfort the consumer feels, the less likely they are to continue using them. Second, among technology readiness factors, a sense of insecurity does not have a significant effect on the willingness to continue to use kiosks. Third, among innovative consumers, men were found to be more likely to continue using kiosks than women. Thus, fast-food restaurant managers need to know that men and women perceive technology-based self-service differently.
Introduction
Traditional service encounters are face-to-face, direct interactions between customers and service employees. However, with the development of technology, such encounters are increasingly being replaced by technology-based self-services (TBSSs), where customers interact with technology instead [1,2]. Initially, companies adopting TBSSs targeted specific consumers. However, TBSSs have become increasingly popular, thus allowing companies to attract a broader consumer base [3].
In particular, the self-service kiosk, which is an unmanned payment system where customers can search, order, and pay for items on the menu, is the TBSS type most rapidly increasing in use in South Korea. A report [4] on the 2019 foodservice trends in Korea revealed that the proportion of consumers who knew about kiosks increased by 33.7% from 20.0% in 2017 to 53.7% in 2019. The proportion of those who have experienced using kiosks when eating in restaurants increased from 42.9% in 2017 to 71.9% in 2019, with most of them (76.3%) having dined in fast-food restaurants. Survey respondents pointed out that the dining and drinking places where self-service kiosks could be installed in the future were, in descending order, school or company cafeterias, bars, and coffee shops.
However, the above-mentioned report [4] on the foodservice trends in Korea in 2019 revealed that some consumers were not satisfied with the use of kiosks. The following are some of the responses from the report's survey: "kiosk is more inconvenient than face-toface service" (61.1%); "I am not good at using the machine" (53.3%); and "the machine frequently makes errors" (29.6%). Many rapidly developing cutting-edge technologies have permeated every aspect of life, some consumers have difficulty in adjusting to the speed of technological changes [5]. The "digital divide", which refers to the economic and social inequality between people who can and cannot easily access and use digital media, has become a social problem [6,7]. On the one hand, digital natives, who were born in and are already exposed to various digital environments, do not feel any discomfort or insecurity when adopting new technologies [8]. On the other hand, digital immigrants, who have to adjust to new digital environments with the transition from analog to digital, may experience greater discomfort and insecurity. Therefore, in order to reduce the digital divide among middle-aged consumers who are not as regularly exposed to digital media, it is necessary to study how the consumer's innovation propensity, optimism, discomfort, and insecurity about new technologies affect the acceptance of new technologies. To check if consumers are ready to accept new technologies, Parasuraman [9] suggested using a Technology Readiness Index (TRI). Many studies have identified that a higher positive TRI and lower negative TRI, means that the person is more likely to accept the new technology [10][11][12].
Also, it would be meaningful to investigate what personal characteristics would influence the formation of attitudes to TBSS. Livingstone et al. [13] suggested gender, age, socio-economic status, disability, race, and linguistic proficiency as the determinants of the digital divide phenomenon. Especially, the extant research [14][15][16] on information technology acceptance showed that gender is an important factor affecting user acceptance of TBSSs. According to a study by Ju et al. [17], which analyzed the gender gap among older people in Korea, men reported higher time spent on information and communication technology, information capability, Internet, and mobile utilization than women. As per a study by Choi and Yoo [18], men had more positive technology readiness for tourism mobile apps than women, and women had more negative technology readiness than men. Kim and Oh [19] found that the convenience of using the mobile payment service for men did not affect acceptance intention. However, the convenience of using the mobile payment service for women affects the acceptance intention. This means that the digital gap differs by gender and that the digital gap among women is more severe than that of men. Therefore, in order to increase the intention to use self-order kiosks of restaurant customers, it is most important to identify the impact relationship of gender differences along with technology preparation. Although numerous studies [20,21] indicate that age is an important variable in predicting customer acceptance of new technologies, if both digital immigrants and digital natives are included in the research subjects to identify TRI, the technical readiness will be offset. Therefore, it is necessary to select a specific age group with a similar consumption trend, but TRI studies considering gender and age are not yet sufficient. 
Accordingly, this study seeks to identify attitudes toward technology acceptance by dividing middle and high-aged groups with very low levels of digital informatization utilization and competence by gender.
This study, therefore, aims to identify the moderating role of gender in the relationship between personal TRI and the intention to continue using self-service kiosks among middleaged and older consumers at fast-food restaurants. The findings of this research can be used by restaurant managers to decide whether to install self-service kiosks.
Technology Readiness Index (TRI)
The TRI estimates people's propensity to embrace and use new technologies for accomplishing goals in their home and work lives [9]. Parasuraman [9] conducted the National Technology Readiness Survey and developed a TRI consisting of 36 questions. Parasuraman and Colby [22] later developed TRI 2.0, which compressed the 36 questions into 16.
The TRI consists of positive (optimism and innovativeness) and negative (discomfort and insecurity) factors. Optimism is the positive belief that new technologies will improve flexibility and efficiency of life and work. Innovativeness is the tendency to use new technology before other people and become a pioneer. Discomfort arises over the consumers' feeling of lack of control and oppression by the technology. Insecurity is the feeling of being insecure with new technology and being skeptical about its viability [9]. The TRI plays the role of the leading variable [11,23,24] in the expanded model (i.e., technology readiness) and acceptance model in the theory related to acceptance of information technology (i.e., technology acceptance model and the unified theory of acceptance and use of technology). Further, it plays a moderating role in the relationship between the factors determining technology acceptance and consumer attitudes [24,25].
Recently, there have been some studies on TRI in the hospitality industry. In a study involving TBSS users in a Malaysian airport, Ab Halim [26] used all four variables suggested by Parasuraman [9] and found that positive TRI (optimism and innovativeness) positively affects user satisfaction. Lee [21] also used all four variables in a study on hotel guests, finding that optimism, innovativeness, and insecurity affect perceived usefulness. Lee [21] further discovered that optimism, discomfort, and insecurity have significant effects on perceived convenience. Pradhan et al. [27] analyzed tourists' intent to use smart devices and observed that optimism has a significant positive impact on perceived benefits, but not innovativeness. Also, both insecurity and discomfort significantly affect perceived risk. In contrast, adopting only two factors (i.e., positive and negative TRIs,) Seo et al. [8] conducted a research where the respondents were users of tourism applications on smartphones, and found that only positive TRI has a positive effect on perceived usefulness and perceived ease of use. Lee [28] only applied optimism and innovativeness in his research on users of TBSS in a restaurant, noticing that only optimism has an effect on usefulness, ease of use, playfulness, and perceived risk. Moon and Moon [12] evaluated the excellence of the dimensional structure of the TRI by tapping into restaurant customers using TBSS as their research subjects. They found that the fitness indices of the model using all four factors and the one using a two-dimensional TRI with positive and negative variables were relatively high. Further, only optimism and the positive TRI impacted customers' intention to act in both models. In addition, Lee et al. [29], Lin and Chang [11], and Han and Park [30] measured TRI by integrating four variables into one dimension, whereas Sun et al. [31] only used discomfort as the resulting variable affected by cultural values. 
Thus, scholars have used questions and factor structures in various ways. We used four TRI factors indiscriminately to identify the core factors of TRI that can induce the continuous use of restaurant TBSS by examining the TRI of consumers.
Continuous Use Intention
It is important for companies to improve service conditions that customers consider useful [32]. In recent studies in the field of information and communication, continuous use has been a core factor in predicting the use of new technologies [23]. Park and Lee [33] defined continuous use intention as the degree of willingness to continue to use order and payment services through kiosks, which is determined by the evaluation of the users of the service.
There have been many studies on the effect of consumers' TRI on continuous use intention in the hospitality industry. In their research on check-in kiosks in US airlines, Lee et al. [29] found that the TRI of consumers affects attitudes toward kiosks, service providers, and continuous use intention. Moon and Moon [12] conducted research on restaurant customers who used TBSS for order and payment and found that only optimism has a positive effect on continuous use, while innovativeness, insecurity, and discomfort did not have significant effects on it. Choi et al. [34] found that optimism has a positive effect on the continuous use intention of tourists with respect to mobile tourism applications, whereas insecurity has a negative effect. Meanwhile, Seo et al. [8] found that neither positive nor negative TRIs had any effect on the continuous use intention for mobile tourism applications.
Thus, research has shown that the effect of TBSS on continuous use intention varies depending on the new technology type and dimensional structure of TRI. To discover the four TRI factors affecting the continuous use intention for fast-food restaurant kiosks, we formulate the following hypotheses.
Hypothesis 1 (H1).
Innovativeness of middle-aged and elderly consumers will have a positive (+) effect on the continuous use intention for fast-food restaurant kiosks.
Hypothesis 2 (H2).
Optimism of middle-aged and elderly consumers will have a positive (+) effect on the continuous use intention for fast-food restaurant kiosks.
Hypothesis 3 (H3).
Discomfort of middle-aged and elderly consumers will have a negative (−) effect on the continuous use intention for fast-food restaurant kiosks.
Hypothesis 4 (H4).
Insecurity of middle-aged and elderly consumers will have a negative (−) effect on the continuous use intention for fast-food restaurant kiosks.
Gender
Research in the information and communication industry has shown that the gender of users affects various decision-making processes, such as information search and technology acceptance, thus emphasizing the importance of gender to marketing managers [20,35].
Previous studies that analyzed the difference in the TRI according to gender are as follows. In their analysis of the demographic characteristics of those who embrace hotel radio-frequency identification technology, Ozturk and Hancer [14] found that women are more likely to intend to use radio-frequency identification technology than men. Further, Victorino et al. [36] divided hotel guests into three groups (innovators, paranoids, and laggards) depending on TRI, and found that men tend to innovate more and are more open to the use of new technologies, while women tend to innovate less and are reluctant to use new technologies. In a research on TBSS users in Malaysian airports, Ab Halim [26] found that men are more likely to embrace innovative new technologies like TBSSs and are less insecure than women. The research of Kim and Kim [20] involving restaurant customers also suggested that men are more innovative, and women are more insecure, while neither differs on optimism. Meanwhile, in her research on hotel guests who have never used TBSSs, Lee [21] found that men are more optimistic and innovative than women, and women are more likely to feel discomfort than men. However, there was no gender difference in the ratio of insecurity. Wang and Sparks [37] found that there is no gender difference in TRI in their study on airline and hotel guests.
The results of studies that analyzed the moderating effect of gender in the relationship between the factors influencing the acceptance of new technology and the consumer's acceptance attitude are as follows. Venkatesh et al. [38] reported that the intention to adopt and use a system is more highly affected by effort expectancy for women than for men. Huang et al. [39] found that gender affects the relationship between normative beliefs and behavioral intention such that the effect is stronger for women. Tarhini et al. [40] found that gender moderates the relationship between perceived ease of use, social norms, and behavioral intention, while no moderating effects were found in terms of perceived usefulness and self-efficacy on behavioral intention. Humbani and Wiese [41] pound that gender moderates only the relationship between convenience and the adoption of mobile payment services. There are no interaction effects of gender on the other seven factors tested in this study. On closer inspection, it emerged that males put more emphasis on the convenience derived from the use of mobile payments than do their female counterparts. In a study by Shao et al. [42], which analyzed the moderating effect of gender on mobile payment platforms, it was found that the influence of security and customization on trust was greater for women than for men. In addition, the influence of mobility and reputation on trust was found to have a greater effect on men than women. In a study by Chawla and Joshi [43] regarding the acceptance of mobile wallets, the influence of facilitating conditions and security on attitudes was found to be greater in male users than in female users.
As a result of these previous studies, in the foodservice market of South Korea where TBSSs are being introduced, the gender of consumers can serve as a useful means to explain and predict consumer behavior. However, despite growing research efforts to determine the effect of gender on the adoption of new self-service technologies, the results of the previous studies reviewed were conflicting. Furthermore, to the best knowledge of the authors, there are no empirical studies that report the moderating effects of gender on the TRI that influence the adoption of self-order kiosks in Korea, where self-order kiosk services are rapidly being introduced. Therefore, this study proposes the following hypothesis: Hypothesis 5 (H5). Gender plays a moderating role in the relationship between the innovativeness of foodservice consumers and continuous use intention.
Hypothesis 6 (H6). Gender plays a moderating role in the relationship between the optimism of foodservice consumers and continuous use intention.
Hypothesis 7 (H7).
Gender plays a moderating role in the relationship between the discomfort of foodservice consumers and continuous use intention.
Hypothesis 8 (H8).
Gender plays a moderating role in the relationship between the insecurity of foodservice consumers and continuous use intention.
Measurement Model
This research aims to examine the moderating effect of gender on the relationship between the TRI of kiosk users and their continuous use intention. To achieve this, we assumed that the four dimensions of TRI would have significant effects on the continuous use intention of kiosks. We further assumed that gender would moderate the relationship between the two variables. The constructed research model is shown in Figure 1.
Joshi [43] regarding the acceptance of mobile wallets, the influence of facilitating conditions and security on attitudes was found to be greater in male users than in female users.
As a result of these previous studies, in the foodservice market of South Korea where TBSSs are being introduced, the gender of consumers can serve as a useful means to explain and predict consumer behavior. However, despite growing research efforts to determine the effect of gender on the adoption of new self-service technologies, the results of the previous studies reviewed were conflicting. Furthermore, to the best knowledge of the authors, there are no empirical studies that report the moderating effects of gender on the TRI that influence the adoption of self-order kiosks in Korea, where self-order kiosk services are rapidly being introduced. Therefore, this study proposes the following hypothesis: Hypothesis 5 (H5). Gender plays a moderating role in the relationship between the innovativeness of foodservice consumers and continuous use intention.
Hypothesis 6 (H6). Gender plays a moderating role in the relationship between the optimism of foodservice consumers and continuous use intention.
Hypothesis 7 (H7).
Gender plays a moderating role in the relationship between the discomfort of foodservice consumers and continuous use intention.
Hypothesis 8 (H8).
Gender plays a moderating role in the relationship between the insecurity of foodservice consumers and continuous use intention.
Measurement Model
This research aims to examine the moderating effect of gender on the relationship between the TRI of kiosk users and their continuous use intention. To achieve this, we assumed that the four dimensions of TRI would have significant effects on the continuous use intention of kiosks. We further assumed that gender would moderate the relationship between the two variables. The constructed research model is shown in Figure 1.
Research Instruments
To measure the TRI, we used 16 questions in total-four each on optimism, innovativeness, discomfort, and insecurity based on TRI 2.0 [22]. To measure continuous use intention, we used three questions based on Cheng et al. [44]. All 19 items in the instrument were measured on a five-point Likert type scale anchored by 1 = strongly disagree and 5
Research Instruments
To measure the TRI, we used 16 questions in total-four each on optimism, innovativeness, discomfort, and insecurity based on TRI 2.0 [22]. To measure continuous use intention, we used three questions based on Cheng et al. [44]. All 19 items in the instrument were measured on a five-point Likert type scale anchored by 1 = strongly disagree and 5 = strongly agree. In addition, the demographic characteristics of respondents were examined using five questions about their gender, age, marital status, educational level, and frequency of using kiosks.
Data Collection
As discussed in the introduction section, if consumers of all ages are included in the study, the TRI will be offset. Thus, using a non-probabilistic sampling method, we conducted a survey of consumers born before 1980 who had previous experience using kiosk-based ordering and payment systems. The base year for dividing digital immigrants and digital natives according to digital competencies is 1980 in Prensky [45].
Data collection took place for a month from 1st May to 30th May 2020 at six stores in Lotteria, the fast-food brand with the most active kiosks in South Korea [46]. We first explained the purpose of the study to consumers who were leaving the store after eating at a restaurant. We then conducted the survey on site using a self-administered paper questionnaire for those who agreed to participate. To increase participation in the survey, we offered hand sanitizers as compensation to respondents. Among the 350 questionnaires, 320 were used in the analysis. We excluded 21 questionnaires as they were filled out by those born in and after 1980, and nine that had many missing values.
Data Analysis
We used the SPSS 18.0 program to analyze the data, and frequency analysis to determine the demographic characteristics of the respondents. To test the validity and reliability of measurement items, exploratory factor and reliability analyses were conducted. A correlation analysis was carried out to examine the relatedness of constructive concepts. Finally, we applied hierarchical regression analysis, as suggested by Baron and Kenny [47], to determine the moderating effect of gender. Before conducting the hierarchical regression analysis, we performed a mean centering of independent and moderating variables to solve the problem of multicollinearity.
Participant Characteristics
The results of the frequency analysis of the demographic characteristics of the respondents are shown in Table 1. The numbers of men and women in the sample were 165 (51.6%) and 155 (48.4%), respectively. The number of respondents in their 40 s was the highest with 146 (45.6%,) followed by 102 (31.9%) in their 50 s, and 72 (22.5%) in their 60 s and older. With respect to marital status, 243 (75.9%) were married and 46 (14.4%) were single. With regard to the frequency of kiosk use per week, 207 (64.7%) used kiosks one to three times, 96 (30.0%) used them four to six times, and 17 (5.3%) used them seven times or more.
Results of Reliability and Validity Analyses
To measure the TRI and continuous use intention of foodservice consumers, we conducted an exploratory factor analysis. The results are shown in Table 2. The measure of the sampling adequacy of the Kaiser-Meyer-Olkin (KMO) was 0.735, indicating statistical significance. Bartlett's test of sphericity value was also statistically significant (χ 2 = 3059.371, p = 0.000), verifying the suitability of data for factor analysis. The factor analysis extracted five factors with eigenvalues of 1.0 or above. Based on their core concepts, the factors were named, "innovativeness" for factor 1, "discomfort" for factor 2, "optimism" for factor 3, "insecurity" for factor 4, and "continuous use intention" for factor 5. In addition, the Cronbach's α values were 0.755 or higher for all four factors, confirming the reliability of the internal consistencies of measurement items. 1.961 I intend to continue using self-service kiosks in the future 0.845 I will always try to use self-service kiosks in my daily life 0.834 I will keep using self-service kiosks as regularly as I do now 0.559 Note: KMO = 0.753, Bartlett's sphericity test = 3059.371, df = 171, p < 0.000.
Descriptive Statistics and Correlation Analysis
Before testing the hypotheses, a correlation analysis was conducted for each factor, and the results are shown in Table 3. Continuous use intention was positively (+) correlated with innovativeness (r = 0.535), optimism (r = 0.203), and gender (r = 0.312). It was negatively (-) correlated with discomfort (r = −0.127) and had no correlation with insecurity. There was no factor with a correlation coefficient of 0.8, confirming that there was no problem of multicollinearity. Note: A female was coded as 1, and a male was coded as 2. * p < 0.05, ** p < 0.01, *** p < 0.001.
Results of Independent t-Test
To analyze the difference in the TRI based on gender, an independent t-test was conducted, and its results are shown in Table 4. Among the TRI factors, the means of optimism, innovativeness, and discomfort were found to be significantly different based on gender (p < 0.05). The means of optimism and innovativeness were higher in men than in women, whereas that of discomfort was higher in women than in men. However, there was no statistically significant difference in insecurity based on gender.
Results of Hypothesis Testing
The results of the hierarchical regression analysis for the moderating effect of gender on the relationship between TRI and continuous use intention of self-service kiosks are shown in Table 5.
First, the R-squared value of Model 1 of the effect of TRI on continuous use intention was 30.5%, and the regression model was statistically significant (F = 36.044, p < 0.000). Among the factors of TRI, innovativeness (β = 0.503, p < 0.001) and optimism (β = 0.142, p < 0.001) had positive effects on continuous use intention, whereas discomfort (β = −0.124, p < 0.05) had a negative effect on it. These findings support H1, H2, and H3. However, H4 was rejected because insecurity did not have a significant effect on continuous use intention.
Second, the R-squared value of Model 2, which consisted of the TRI and the moderating variable, gender, was 33.6%, and the regression model was statistically significant (F = 31.806, p < 0.000). Thus, gender had a positive effect (β = 0.159, p < 0.001) on continuous use intention.
Third, the R-squared value of Model 3, which includes the TRI of foodservice consumers and interaction variable using gender, was 36.8%, 3.1% more than that of Model 2. Among the TRI factors, only the interaction variable of innovativeness and gender plays the role of a moderating variable (β = 0.177, p < 0.001). These findings support H5; however, H6, H7, and H8 were rejected. Figure 2 illustrates how innovativeness and gender, which show an interaction effect, affect continuous use intention. In order to analyze the moderating effect through hierarchical regression analysis, a graph is generally drawn by substituting a high moderating variable value (Z mean + 1 standard deviation), a middle moderating variable value (Z mean), and a low moderating variable value (Z mean-standard deviation) into the regression equation to analyze the moderating effect [48]. In this study, the standard deviation of gender was ±0.47. The regression equation is shown as Equation (1). The graph shows that as innovativeness increases, men's intention to continue using kiosks increases more than that of women.
Discussion
This study examined the moderating role of gender in the relationship betwe and the continuous use intention of kiosks in fast-food restaurant consumers. Con who used kiosks were surveyed in six fast-food restaurants in Seoul and 320 qu naires were deployed.
There were three main findings of the analysis. First, higher innovativeness a
Discussion
This study examined the moderating role of gender in the relationship between TRI and the continuous use intention of kiosks in fast-food restaurant consumers. Consumers who used kiosks were surveyed in six fast-food restaurants in Seoul and 320 questionnaires were deployed.
There were three main findings of the analysis. First, higher innovativeness and optimism factors lead to higher continuous use intention of kiosks in fast-food restaurants. In contrast, a higher discomfort factor leads to lower continuous use intention of kiosks in fast-food restaurants. These findings are partially consistent with those of existing studies [12,29,33], which show that positive TRI has a positive effect on the continuous use intention of TBSSs and negative TRI has a negative effect on the continuous use intention of TBSS.
Second, among the TRI factors, insecurity was found to have no significant effect on continuous use intention. This finding is contradictory to that of Lee's [21], who suggested that the insecurity of users of TBSSs in hotels has the greatest effect on the usefulness of TBSSs. Such contradictory findings may be caused by the fact that, in fast-food restaurant kiosks, the user pays after selecting a menu item. Also, unlike TBSSs in hotels, the risk of personal information exposure in fast-food restaurant kiosks is relatively low.
Third, men were found to have a higher intent to continue using kiosks than women. Moreover, among TRI factors, the effect of innovativeness on the continuous use intention of kiosks was different between men and women. That is, high innovativeness causes men to intend to continue using kiosks more than women. These findings are consistent with those of previous studies [20,26,36], which indicated that men embrace new technologies more readily than women.
Our findings have numerous academic and practical implications. The existing TRI literature has mainly focused on interactions among variables without considering the role of gender. This study analyzed the moderating role of gender among middle-aged and older consumers of fast-food restaurants. Further, the findings can be useful for fast-food restaurant managers in developing and implementing marketing strategies. Our results have three practical applications. First, the more positively a user responds to new technologies, the higher is his or her continuous use intention. Moreover, innovative consumers who want to use new technologies before others and be technological leaders will continue to use such kiosks. Thus, it is important to raise the optimism and innovativeness of users. Considering that innovativeness is the most effective factor for continuous use intention among the TRI factors, it is advisable to develop various types of kiosks by applying information and communications technologies, such as near field communication payment or voice recognition functions, to raise the hedonic motivation of consumers. Second, it is necessary to highlight the comfort of ordering and paying through kiosks for middle-aged and older consumers who scored high on negative TRI to improve their intent to continue using them. For middle-aged and older women who feel uncomfortable being served by machines, restaurant managers must assign an employee to assist them and must construct more user-friendly interfaces for readability, design, and order processing to lessen their discomfort and increase the possibility of continuous use. Third, the study found that even female consumers who want to become technologically pioneering by using new technologies before others do not have a greater increase in their intention to order and pay for menus through self-order kiosks. 
Therefore, in developing kiosks, fast-food restaurant managers should be aware that men and women can perceive the same TBSS differently to improve the interaction between customers and technology.
This study has its limitations. First, the research sample is not representative since we only chose customers who had used kiosks installed in restaurants, which is the most common type of TBSS. Thus, our findings cannot be generalized to include users of other types of TBSSs such as mobile applications, blockchains, and artificial intelligence. Accordingly, future studies should consider other TBSS types. Second, the continuous use intention of TBSS can be affected by various situational conditions such as demographic characteristics, psychological factors, service use experiences, and time. However, this research does not reflect these variables. Thus, it is necessary to increase the scale of the research by including more variables that can affect the user intent toward TBSSs. Institutional Review Board Statement: Ethical review and approval were waived for this study because personally identifiable information was not used and there is no possibility of human rights violations. | 2021-08-12T13:30:10.108Z | 2021-07-10T00:00:00.000 | {
"year": 2021,
"sha1": "af356c551800df85e726ad808822c28d42d48d47",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2078-2489/12/7/280/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c077b4f82e61b664c86cfd50e4f2190aca4b688b",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
222215867 | pes2o/s2orc | v3-fos-license | High-throughput approaches for precision medicine in high-grade serous ovarian cancer
High-grade serous carcinoma (HGSC) is the most prevalent and aggressive subtype of ovarian cancer. The large degree of clinical heterogeneity within HGSC has justified deviations from the traditional one-size-fits-all clinical management approach. However, the majority of HGSC patients still relapse with chemo-resistant cancer and eventually succumb to their disease, evidence that further work is needed to improve patient outcomes. Advancements in high-throughput technologies have enabled novel insights into biological complexity, offering a large potential for informing precision medicine efforts. Here, we review the current landscape of clinical management for HGSC and highlight applications of high-throughput biological approaches for molecular subtyping and the discovery of putative blood-based biomarkers and novel therapeutic targets. Additionally, we present recent improvements in model systems and discuss how their intersection with high-throughput platforms and technological advancements is positioned to accelerate the realization of precision medicine in HGSC.
Background
The American Cancer Society estimates that in 2020, 21,750 women will be newly diagnosed with ovarian cancer in the USA and ~ 13,940 will die from the disease [1]. Epithelial ovarian cancer (EOC) represents the fifth most common cause of cancer death overall and is the leading cause of death from gynecologic malignancies in the USA, Canada and Europe [1][2][3]. EOC is a heterogeneous disease with different types of histologies, molecular and microenvironmental features [4]. Histologically, EOC is traditionally classified into five major subtypes: high-grade serous (HGSC), low-grade serous (LGSC), clear cell, endometrioid and mucinous ovarian cancer [4]. A more recent classification model categorizes EOC into type I and II tumors, where LGSC, endometrioid, mucinous and clear cell carcinomas are classified as type I [5][6][7]. These neoplasms typically present as large, unilateral, cystic tumors and clinically tend to behave in an indolent fashion [7]. Genetically, type I cancers are characterized by minor chromosomal instability and may harbor BRAF, KRAS and PTEN mutations. Type II cancers, on the other hand, comprise of HGSC, which account for the vast majority of all EOCs, carcinosarcomas and undifferentiated carcinomas. HGSCs have a high degree of genetic instability and are characterized by the presence of acquired or inherited mutations in different DNA repair pathways including TP53, BRCA1/2 and other defects in homologous recombination repair genes [4,5,7]. Recent evidence suggests that HGSC originates from the fimbriae of the fallopian tube secondarily involving the ovary and peritoneum [8].
Early-stage disease is typically asymptomatic, and currently there are no proven screening strategies for HGSC that reduce mortality [9,10]. The tumor volume in the ovaries is substantially less than that of type I tumors, and 80% of HGSCs are diagnosed at advanced disease stages [7,11]. Even in advanced HGSC, symptoms are nonspecific and include back pain, fatigue, bloating, constipation, abdominal pain, change in bowel function, urinary symptoms and weight loss [12]. The initial diagnostic work-up includes a pelvic ultrasound or computed tomography (CT) and serum cancer antigen 125 (CA125) assessment [13]. Magnetic resonance imaging may be used to further stratify pelvic masses, and a CT of the thorax, abdomen and pelvis is performed for staging purposes. The standard of care treatment for HGSC is primary debulking surgery (PDS) to no visible residual disease with adjuvant platinum-based chemotherapy. Two randomized trials comparing PDS and chemotherapy with neoadjuvant chemotherapy followed by interval debulking surgery showed similar survival, although both studies reported low rates of minimal residual disease and poor survival in both study arms [13][14][15]. Despite recent advances, approximately 70% of EOCs recur and the 5-year survival rate for metastatic disease remains poor at 30% [1,16].
Precision medicine refers to the notion of tailoring clinical management of diseases to account for patient heterogeneity. Although it is well known that EOC comprises several pathologically distinct diseases, the current standard of care is to generally manage these subtypes as a single entity. Molecular screening has revealed a vast degree of variability within the HGSC subtype itself [17]. This is reflected in the array of clinical outcomes as not all patients respond to conventional therapies. This underlying complexity also makes it unlikely that a single tumor marker will be effective for all patients. The use of poly (ADP-ribose) polymerase (PARP) inhibitors for patients with BRCA1/2 mutations is an example of the shift toward precision medicine in HGSC; however, additional work is still necessary. Biomarkers that are reflective of tumor burden and therapies which target specific tumor characteristics are needed to improve patient outcomes. Advancements in high-throughput biological techniques have provided new opportunities for the discovery of biomarkers and therapeutic targets. These platforms allow for the simultaneous profiling of thousands of molecules and the subsequent generation of a wealth of biological data. Together with large cohorts of well-annotated patient samples and improved model systems, these approaches have facilitated novel insights into biological heterogeneity at an unprecedented scale. In this review, we provide an overview of how high-throughput approaches have contributed to the molecular profiling of patient heterogeneity within HGSC and highlight the utility of these technologies in the discovery of putative blood-based biomarkers and therapeutic targets as a step toward making precision medicine a reality for all HGSC patients (Fig. 1). We also discuss the complementary role of HGSC experimental models in advancing these discoveries.
Molecular tumor profiling of HGSC
High-throughput molecular profiling of tumor samples has been used to gain insights into the biological aberrations underlying the pathogenesis of HGSC. The largest study in mapping the molecular features of HGSC was conducted by The Cancer Genome Atlas (TCGA) network, where 489 tumor samples were subjected to genomic and transcriptomic analyses [17]. Exome sequencing detected TP53 mutations in 96% of tumors. Interestingly, subsequent histological analysis of the TP53 wild-type tumors in this cohort revealed differences in morphological features indicating that these tissues were not truly HGSC tumors [18], suggesting the proportion of TP53 mutations to be even higher than reported. This finding is consistent with other reports of ubiquitous TP53 mutations in HGSC [19]. Serous tubal intraepithelial carcinomas (STIC) (the precursor lesion of HGSC) and 'p53-signature lesions' (the hypothesized precursor of STIC) in the fallopian tube have been shown to share identical TP53 mutations to HGSC, signifying that TP53 mutations develop early in the HGSC carcinogenic process [20]. Germline and somatic mutations in BRCA1 and BRCA2 are the next most prevalent mutations in HGSC, cumulatively present in 22% of the TCGA cohort [16]. Seven other significantly mutated genes were identified, albeit only in 2-6% of cases, demonstrating a limited mutational landscape in HGSC. In contrast, HGSC exhibits a high degree of chromosomal instability evident by extensive copy number alterations (CNAs) in each tumor and the identification of 113 significantly recurrent CNAs throughout the entire cohort [17]. The TCGA study also revealed that half of HGSC tumors had genomic and/or epigenetic deficiencies in homologous recombination, further underscoring the role of erroneous DNA repair mechanisms in HGSC pathogenesis [17]. Indeed, homologous recombination deficiency (HRD) is a crucial determinant of platinum sensitivity in HGSC [21].
Other frequently altered pathways in HGSC include RB1, PI3K/ RAS, NOTCH and FOXM1 [17]. In an attempt to deconvolute this vast genomic heterogeneity, Macintyre et al. [22] have recently identified seven copy number signatures in HGSC, some of which were found to be associated with previously mentioned mutations, aberrant pathways and survival outcomes, yet larger studies are still required to validate these associations.
Fig. 1 Applications of high-throughput technologies for precision medicine. High-throughput examination of experimental models and patient samples is promising for molecular subtyping and the discovery of liquid biomarkers and targeted therapies, which cumulatively contribute to advancing precision medicine in HGSC. GEMM genetically engineered mouse model, PDX patient-derived xenograft

The profiling of mRNA expression in HGSC tumors has identified four overlapping transcriptional subtypes of HGSC: C1-mesenchymal, C2-immunoreactive, C4-differentiated and C5-proliferative [17,23]. Independent studies have identified prognostic implications associated with these subtypes, in which the immunoreactive subtype exhibited improved survival outcomes, whereas the mesenchymal and proliferative subtypes demonstrated the worst overall survival [24,25]. Building on these consistent findings, Leong et al. [26] have identified a gene signature consisting of 39 differentially expressed genes for classification of these subtypes. The Clinical Proteomic Tumor Analysis Consortium (CPTAC) analyzed the global proteomes of 169 HGSC tumors from the TCGA cohort [27]. Clustering of tumors based on protein abundance revealed five subtypes, four of which demonstrated a clear resemblance to the classical transcriptomic subtypes and one novel subtype classified as stromal [27]. Integration of proteomic and CNA data revealed that proteins associated with multiple CNAs were enriched in cell invasion/migration and immune processes, suggesting a functional convergence of the high degree of chromosomal instability [27]. The low overall correlation between mRNA expression and protein expression in this investigation highlights the importance of multi-omic profiling to achieve a comprehensive understanding of molecular alterations underlying HGSC [27]. Aside from delineating molecular heterogeneity between patients, high-throughput tumor profiling can also be used to elucidate the diversity within a tumor. Deconvolution of bulk HGSC transcriptional data has revealed that individual tumors often display multiple subtype signatures [25], accentuating the additional layer of molecular complexity offered by intratumor heterogeneity.
Albeit on a small scale, recent efforts in multiregion tumor profiling of HGSC tumors have uncovered intratumor molecular heterogeneity in both a spatial and a temporal manner [26,[28][29][30][31]. Although larger investigations are warranted to extend the generalizability of these data, these studies highlight the susceptibility of bulk subtypes to sampling bias and the potential confounding role of stromal components in tumor profiling. Additionally, in a disease characterized by extensive intraperitoneal dissemination, multiregion molecular profiling of primary tumors and metastases can be of value for discerning the biology underlying HGSC progression [32][33][34]. Despite the loss of spatial microenvironment context, single-cell technologies can also provide insights into intratumor heterogeneity [35,36]. Further large-scale studies using these emerging approaches may shed light into how intratumor heterogeneity manifests in clinical outcomes such as therapeutic resistance. Overall, the spectrum of molecular differences within HGSC underscores the significance in using high-throughput approaches to further understand the biological abnormalities and translate these findings into novel biomarkers and targeted therapies.
Blood-based biomarkers for HGSC
A biomarker is a measurable feature that is reflective of biological processes and can provide information regarding the disease state of an individual. Cancer biomarkers are used for various purposes throughout the course of disease progression, including assessing the likelihood of developing cancer, diagnosing malignancies, determining prognosis, predicting patient responses to specific therapies and monitoring residual disease posttreatment and during remission. In contrast to directly examining tumor tissue, liquid biopsies can facilitate minimally invasive tumor assessments to guide clinical decisions. Blood is an attractive biological fluid for biomarkers in clinical practice due to the standardized collection procedures and abundant availability. In this section, we briefly review the current landscape of HGSC blood-based biomarkers and discuss the utility of high-throughput approaches in the discovery of novel biomarkers to help improve clinical management of HGSC.
Liquid biopsies in clinical practice
In the context of HGSC, and EOC in general, serum biomarkers are currently used for the differential diagnosis of a pelvic mass prior to surgery, monitoring response to treatment and detecting recurrent disease. Although definitive diagnosis of EOC currently requires histological examination, differential diagnosis of a pelvic mass determines preoperative referral [37]. This is crucial as optimal tumor resection and subsequent improved outcomes are more likely when surgical management for EOC is performed by gynecological oncologists rather than general surgeons or gynecologists [38]. In addition to determining treatment efficacy and prognosis following therapy, accurate markers of treatment response are also used to evaluate novel therapies in clinical trials [39]. Considering the high rates of HGSC relapse, early detection of recurrent disease is imperative for appropriate timing of therapies to improve survival [40].
Cancer antigen 125
CA125 is a large membrane glycoprotein encoded by the gene MUC16 and was identified as a tumor marker for EOC in 1983 [41]. Significant expression of CA125 is observed in 85% of serous, 65% of endometrioid, 40% of clear cell, 36% of undifferentiated and 12% of mucinous ovarian cancers, highlighting the lack of utility of CA125 in some EOC subtypes [42]. Despite being the most widely used biomarker for EOC, CA125 offers limited value as a diagnostic test. Serum concentrations of CA125 are elevated in 90% of advanced-stage EOCs and less than 50% of early-stage EOCs, resulting in a low sensitivity for detecting early-stage disease [43]. Furthermore, serum CA125 abundance has a low specificity for EOC as levels can be increased due to multiple benign gynecological and medical conditions including endometriosis and pregnancy [44]. The low specificity is especially manifest in premenopausal women who are at an increased risk of many of these other conditions [45]. Given the limitations of CA125 as a stand-alone diagnostic marker, the Risk of Malignancy Index (RMI) [46] and the International Ovarian Tumor Analysis (IOTA) Adnex model [47] were developed to integrate serum CA125 levels, ultrasound criteria and demographics, resulting in improved specificity and sensitivity for differential diagnosis of pelvic masses prior to surgery. When evaluated as a potential screening test, both the UKCTOCS study [9] and the PLCO trial [48] demonstrated that serum testing of CA125 alone or combined with transvaginal ultrasound imaging did not reduce mortality due to EOC and resulted in an increase in unnecessary invasive procedures associated with complications, underlining the clinical consequences of low specificity.
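To make the RMI mentioned above concrete, the following sketch implements the simple product described by Jacobs et al. (1990): RMI = U × M × CA125. The feature scoring and the commonly used decision threshold of 200 vary across later RMI versions (I-IV), so this is a hedged teaching example rather than a clinical tool.

```python
def rmi_score(ultrasound_features: int, postmenopausal: bool, ca125_u_ml: float) -> float:
    """Compute RMI I from ultrasound findings, menopausal status and serum CA125.

    ultrasound_features counts how many of the five suspicious findings are
    present: multilocular cyst, solid areas, bilateral lesions, ascites and
    intra-abdominal metastases.
    """
    # Ultrasound score U: 0 for no features, 1 for one feature, 3 for two or more.
    u = 0 if ultrasound_features == 0 else (1 if ultrasound_features == 1 else 3)
    # Menopausal score M: 1 for premenopausal, 3 for postmenopausal.
    m = 3 if postmenopausal else 1
    return u * m * ca125_u_ml

# A postmenopausal patient with three suspicious ultrasound features and a
# CA125 of 120 U/mL scores 3 * 3 * 120 = 1080, far above the usual cutoff of 200.
print(rmi_score(ultrasound_features=3, postmenopausal=True, ca125_u_ml=120))
```

Because CA125 enters the product unbounded, a very high CA125 alone can exceed the cutoff, which is one reason the index is less specific in conditions that elevate CA125.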
Nevertheless, CA125 offers clinical utility when evaluating treatment response and monitoring remission. A decrease in serum CA125 is indicative of treatment response, whereas persistently elevated or rising CA125 may suggest treatment resistance and/or residual disease [39]. Many post-treatment surveillance protocols include serial measurements of CA125, as rising serum CA125 is strongly predictive of disease recurrence [49][50][51]. A rise in CA125 concentration has been shown to precede clinical detection of recurrent disease by at least three to five months [49,52,53]. However, up to half of patients within the normal limits of CA125 during remission are found to have small volumes of disease during a second-look surgery [54,55]. Hence, despite being the earliest sign of recurrence currently available, CA125 is not optimally sensitive for detecting recurrence in all patients.
Human epididymis protein 4
Human epididymis protein 4 (HE4), encoded by the gene WFDC2, is a secreted glycoprotein that is overexpressed in serous and endometrioid ovarian cancers [56]. Hellstrom et al. [57] initially determined that serum HE4 was comparable to CA125 for distinguishing between patients with advanced-stage disease and healthy controls. Subsequent studies have produced conflicting reports regarding the sensitivity of HE4 compared to CA125 as a diagnostic test, yet there is a consensus that HE4 is more specific than CA125, especially in premenopausal women [58][59][60][61]. This superiority is likely due to serum levels of HE4 being less influenced by other gynecological disorders such as endometriosis [62]. Serum HE4 is also elevated in at least a third of patients who do not demonstrate elevated serum CA125 levels, signifying a role for complementary markers in diagnostics [63]. Serum HE4 is currently approved to be used as a tumor marker for monitoring disease progression and recurrence. A study evaluating serum levels of HE4 and CA125 prior to surgery for suspicious recurrent EOC found HE4 to be more sensitive and specific than CA125 [64]. In a pilot study, Anastasi et al. [65] found that a rise in serum HE4 preceded elevated serum CA125 by five to eight months in five out of eight patients with recurrent disease. Preliminary studies have also revealed that HE4 elevation can detect recurrence in a subset of EOC patients that do not present with increased serum CA125 [66,67]. The combination of both markers resulted in a higher sensitivity and specificity in detecting recurrence than either marker alone [68], but data from larger prospective trials, including potential benefits in survival, are still pending.
Multimarker assays
In response to the promising data regarding the complementary potential between CA125 and HE4 as diagnostic markers, the Risk of Malignancy Algorithm (ROMA) was developed and is currently approved for differential diagnosis [69]. ROMA combines serum measurements of CA125 and HE4 and uses two different logistic regression models based on menopausal status to determine the likelihood of malignancy in women who are having surgery for pelvic masses [70]. A meta-analysis comparing ROMA, HE4 and CA125 revealed that ROMA demonstrated the greatest sensitivity and HE4 exhibited the highest specificity for differential diagnosis, although these differences were not statistically significant [71]. In a direct comparison, ROMA was found to be more sensitive than RMI with similar specificities [72], yet both assays demonstrated low sensitivity for early-stage disease [73]. OVA1 is a biomarker panel of five proteins used for the differential diagnosis of pelvic masses prior to surgery. The test consists of immunoassays for two upregulated proteins (CA125, beta 2 microglobulin) and three down-regulated proteins (transferrin, transthyretin, apolipoprotein A1) in serum [74]. An algorithm is used to integrate the measurements of each marker to generate an ovarian malignancy risk score ranging from 0 to 10. The threshold for risk of malignancy is dependent on menopausal status. Many prospective studies have reported that OVA1 offers higher sensitivity than CA125, especially for early-stage disease, yet lower specificity [75][76][77]. There have been no direct comparisons between OVA1 and ROMA to date. Overa is a second-generation multivariate assay which was intended to overcome the low specificity of OVA1 [78]. Overa uses serum measurements of CA125, HE4, apolipoprotein A1, transferrin and follicle-stimulating hormone to assess the likelihood of malignancy in women who will undergo surgery for a pelvic mass.
The incorporation of follicle-stimulating hormone eliminates the need for assessing menopausal status as in OVA1. Overa was designed and validated using the same study population as OVA1, thus allowing for direct comparisons between the two assays. Indeed, Overa demonstrated an improved specificity and similar sensitivity to OVA1 [78].
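As a rough illustration of how ROMA combines the two markers, the sketch below uses the logistic-regression coefficients reported in the original publication by Moore et al. (2009). Approved assays use platform-specific coefficients and risk cutoffs, so treat this as an illustrative example only.

```python
import math

def roma_score(he4_pmol_l: float, ca125_u_ml: float, premenopausal: bool) -> float:
    """Return the ROMA value (%) from serum HE4 and CA125 levels.

    Coefficients follow the originally published models and differ between
    pre- and postmenopausal women; actual values are assay-dependent.
    """
    if premenopausal:
        pi = -12.0 + 2.38 * math.log(he4_pmol_l) + 0.0626 * math.log(ca125_u_ml)
    else:
        pi = -8.09 + 1.04 * math.log(he4_pmol_l) + 0.732 * math.log(ca125_u_ml)
    # Logistic transform maps the predictive index onto a 0-100% scale.
    return math.exp(pi) / (1 + math.exp(pi)) * 100

# Markedly elevated markers in a postmenopausal patient yield a high
# predicted probability of malignancy (~84% with these coefficients).
print(round(roma_score(he4_pmol_l=150, ca125_u_ml=500, premenopausal=False), 1))
```

Note how the premenopausal model weights CA125 far less (0.0626 vs. 0.732), reflecting its poorer specificity in that group, which is consistent with the discussion above.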
Germline BRCA1/2 mutations
Germline deficiencies in BRCA1/2 are the strongest genetic risk factors for nonmucinous EOC [79]. The cumulative risk of developing ovarian cancer in BRCA1 and BRCA2 carriers ranges from 40 to 59% and 16 to 18%, respectively [80][81][82]. As such, genetic counseling and genetic testing can be suggested for patients with familial history of breast, ovarian, pancreatic or prostate cancer to identify those who are at an elevated risk [83] and will likely benefit from preventative measures such as risk-reducing bilateral salpingo-oophorectomies [84,85]. In addition, germline and somatic BRCA1/2 mutations are considered predictive biomarkers due to strong associations with favorable outcomes following both platinum-based chemotherapy [86,87] and maintenance therapy with PARP inhibitors [88,89]. PARP inhibitors have also been approved for usage as monotherapy in BRCA1/2-deficient women with recurrent disease [90,91]. Hence, genetic testing for BRCA1/2 mutations is recommended for all newly diagnosed ovarian cancer patients to aid in therapy selection and determining cancer risk for family [83].
Emerging high-throughput biomarker discovery approaches
Though blood is an attractive source of biomarkers, limitations in the throughput of highly sensitive molecular measurements have been a challenge, especially for heterogeneous diseases such as HGSC. Advancements in 'omics' approaches have enabled the ability to characterize and evaluate various classes of circulating molecules as potential blood-based biomarkers (Fig. 2). A biomarker discovery pipeline is typically initiated with a discovery phase in which large-scale comparative profiling experiments of blood, tumor tissue or model systems are used to generate a list of candidate markers. Following the discovery phase, targeted methods of quantification are often applied to validate candidate markers in clinical samples [92]. Here, we discuss the various classes of molecules and types of discovery approaches which have been applied to HGSC biomarker discovery.
Circulating tumor DNA
Circulating cell-free DNA (cfDNA) are short fragments of DNA released into the bloodstream from apoptotic or necrotic cells [93]. The quantification of total cfDNA has revealed that EOC patients have elevated levels of cfDNA compared to healthy controls and patients with benign disease [94][95][96]. However, evaluation of cfDNA abundance is not a direct measure of tumor burden as DNA fragments are also released by noncancerous cells. The fraction of cfDNA that originated from tumor cells, termed circulating tumor DNA (ctDNA), can be distinguished by the presence of cancer-specific alterations [97]. Given the low abundance of ctDNA compared to cfDNA, highly sensitive approaches such as digital polymerase chain reaction (PCR) and targeted next-generation sequencing (NGS) are used to detect cancer-specific modifications. Since TP53 mutations are ubiquitous in HGSC, detection of TP53 mutants in cfDNA has been preferentially used when investigating ctDNA as a biomarker [98][99][100][101]. Parkinson et al. [98] analyzed TP53 mutations in longitudinal plasma samples from HGSC patients undergoing treatment to evaluate the value of ctDNA in determining prognosis and response to treatment. This study revealed that the abundance of TP53 mutant ctDNA fractions prior to treatment significantly correlated with volumetric measurements of tumors from CT images, unlike CA125, and that a decrease of > 60% of TP53 mutant ctDNA fractions following treatment was a predictor of time-to-progression. Christie et al. [102] recently investigated whether reversal of germline BRCA1/2 mutations can be detected in ctDNA, as this molecular alteration is known to correspond with acquired chemo-resistance. Reversion mutations were detected in the plasma of three out of five patients with reversion mutations observed in tumor samples, all of whom were resistant to platinum-based therapy or PARP inhibitors.
The authors noted that detection of reversal mutations was associated with the fraction of ctDNA out of total cfDNA, measured by the presence of TP53 mutant alleles. Certainly, a limiting factor of utilizing ctDNA as a tumor marker is that current detection strategies may not be sensitive enough to detect rare mutants in early-stage disease where the ctDNA fraction is low [103].
In addition to investigating specific genes, examination of genome-wide chromosomal aberrations in ctDNA can be promising for the discovery of novel tumor markers. Harris et al. [104] used whole-genome sequencing to characterize genomic rearrangements in primary tumors of HGSC patients and investigated whether patient-specific aberrant chromosomal junctions could be detected in plasma. ctDNA with patient-specific chromosomal alterations was detected in pre-surgically drawn plasma samples for eight out of ten patients. Postsurgical detection of ctDNA was specific to the only three patients with clinically documented residual disease, suggesting a potential for personalized markers of tumor burden [104]. As changes in DNA methylation and chromatin remodeling play a role in tumor biology, the rise of epigenetic technologies (e.g., methylation profiling) is also promising for the use of ctDNA as blood-based tumor markers [105]. In the context of HGSC, Widschwendter et al. [106] identified aberrant methylation signatures in tumor tissues and developed a three-marker DNA methylation panel for ctDNA that was able to discriminate patients from healthy women or women with benign masses. The panel was also shown to better distinguish between platinum responders and nonresponders than CA125.

microRNAs

microRNAs (miRNAs) are a class of short (19-25 nucleotides) noncoding RNAs that are involved in gene regulation. miRNAs can function as oncogenes or tumor suppressors depending on cellular context, and their expression has been shown to be deregulated in several cancers [107]. miRNAs are actively secreted from cells by binding to protein complexes or by being packaged into extracellular vesicles, thus providing protection from RNAse digestion and degradation in various extreme conditions (e.g., high temperatures, extreme pH and multiple freeze-thaw cycles) [108,109]. The stability of miRNAs in blood renders them attractive molecules for tumor markers in liquid biopsies.
In large-scale miRNA biomarker discovery experiments, high-throughput qRT-PCR panels, microarrays and more recently, NGS, can be used for profiling miRNAs in patient samples [110][111][112]. Todeschini et al. [110] used microarrays to profile miRNA expression in sera from HGSC patients and healthy controls. The differentially expressed miRNAs were then quantified in an independent cohort from which a single miRNA that demonstrated the greatest ability in discriminating HGSC patients from controls was identified as a putative diagnostic biomarker. Shah et al. [111] used qRT-PCR panels for serum miRNA profiling and demonstrated that combining measurements of circulating miRNAs and CA125 can be predictive of surgical resection outcomes for women with HGSC, suggesting value for circulating miRNAs as prognostic markers.
Proteins
Proteins are the primary functional elements of most biological processes, and thus, protein expression is often deregulated in disease states. Mass spectrometry (MS) is a powerful approach for protein measurement as current MS-based proteomic experiments can detect thousands of proteins in a single sample. MS has already proven to be fruitful in EOC biomarker discovery as the four markers in OVA1 (excluding CA125) were discovered using MS-based approaches [74]. In the studies that led to the development of OVA1, seven protein candidates were identified in the original discovery phase, yet verification of candidates was limited to only those proteins which had existing immunoassays [113,114]. Though this approach is advantageous for faster clinical adoption, antibody availability can pose a bottleneck for validation of candidate markers in biomarker discovery pipelines. Targeted MS approaches such as multiple reaction monitoring (MRM) and the more recently developed parallel reaction monitoring (PRM) can enable high-throughput robust quantification independent of antibody availability, thus circumventing the need for antibody development during biomarker discovery.
Detection of blood-based protein markers is challenging due to the large dynamic range and high sample complexity of serum/plasma. Considering that the 22 most abundant proteins in plasma account for 99% of the total mass of protein [115], detection of low-abundance proteins, often the most promising proteins for biomarker candidates, is hindered. Several preanalytical workflows have been developed to overcome this complexity, including the depletion of high-abundance proteins, sample fractionation and/or the enrichment for sub-proteomes [116]. N-glycosylation is a posttranslational modification that plays an important role in the stability, solubility and localization of proteins to the cell surface [117]. N-glycosylated proteins can be enriched from biological samples using chemoproteomic- and lectin-based approaches [118,119]. As N-glycosylation is highly prevalent among extracellular proteins (including secreted proteins) and is not present on several high-abundance blood proteins (i.e., albumin), the N-glycoproteome represents a clinically relevant sub-proteome for liquid biomarker discovery. Sinha et al. [120] recently devised an integrated N-glycoproteomics-based approach for detecting biomarkers of HGSC relapse. N-glycosylated peptides were enriched from the sera and tumors from recurrent HGSC patient-derived xenograft (PDX) mice and from sera of non-engrafted mice. Species mapping was used to distinguish between peptides of human (tumor) and mouse (stroma) origin, and comparative analysis was used to select a set of candidate markers. Subsequently, PRM was used to quantify the candidates in longitudinal HGSC patient serum samples, revealing four candidates that demonstrated an earlier rise between the remission and the recurrence time points than CA125. Although large-scale clinical validation of the markers is warranted, this study is a proof-of-concept for the use of N-glycoproteomics and PDX models in serum protein biomarker discovery for HGSC.
Glycans, lipids and metabolites
Posttranslational modifications and metabolic processes are important determinants of cellular signaling and phenotype modulation. As such, these molecular classes also represent promising candidates for blood-based biomarker discovery. Considering that aberrant glycosylation occurs during malignant transformation [121], one such approach consists of profiling differences in glycan structures on glycoproteins. Biskup et al. [122] used MS to compare the serum N-glycome profiles between serous EOC patients and healthy women. This glycomics study revealed a marker panel comprising 11 differentially abundant glycans that demonstrated an improved specificity for distinguishing patients from healthy controls compared to CA125. Moreover, as metabolic alterations have been implicated in tumorigenesis [123], metabolomics and lipidomics have emerged as potential avenues for biomarker discovery. Zhou et al. [124] used MS to examine the metabolite profiles of sera from HGSC patients, women with benign ovarian masses and healthy controls and subsequently developed a machine-learning algorithm for diagnostic classification based on the mass spectrum profiles. Buas et al. [125] performed lipidomics analyses of plasma collected from serous EOC patients and patients with benign ovarian masses. A classification model incorporating CA125 and four lipid metabolites demonstrated an increased diagnostic accuracy compared to CA125 alone. Together, these studies suggest a potential utility for plasma metabolites to aid in the diagnosis of HGSC.
Extracellular vesicles and circulating tumor cells
Aside from investigating freely circulating molecules in blood, molecular profiling of extracellular vesicles (EVs) and circulating tumor cells (CTCs) represents an alternative approach for blood-based biomarker discovery. EVs, such as exosomes, are secreted from most cell types, play a role in intercellular communication and contain molecular content from the cell-of-origin [126]. In a pilot study, Taylor et al. [127] identified eight exosomal miRNAs that demonstrated significantly distinct expression profiles in the sera of serous EOC patients compared to the sera of women with benign disease. These exosomal miRNAs exhibited a similar expression profile in tumor tissue and were not detected in the sera of healthy controls. Recently, Kobayashi et al. [128] used microarrays to profile miRNAs in exosomes isolated from the conditioned media of HGSC cell lines and immortalized ovarian surface epithelial cells. A single upregulated miRNA was selected for subsequent quantification in EOC patient sera and was found to be differentially expressed between sera of HGSC patients and sera from non-HGSC patients [128]. This illustrates a potential for noninvasive molecular stratification of EOC. Peng et al. [129] compared the proteomes of serum exosomes from serous EOC patients and tumor tissue, revealing 35 proteins commonly upregulated in comparison with normal samples. These findings suggest that exosomes may be of use for noninvasive molecular tumor examinations.
CTCs are tumor cells that are shed into vasculature and play an important role in metastasis [130]. Although CTCs have been explored as noninvasive tumor markers in the general context of EOC, to the best of our knowledge, there are no published investigations specifically focusing on HGSC to date. Studies have primarily focused on the detection and/or the enumeration of CTCs as potential biomarkers in EOC, yet there have been conflicting reports potentially due to differences in isolation strategies [131][132][133]. Molecular investigations of CTCs are less prevalent and have traditionally involved the use of qRT-PCR to evaluate the expression of a few specific genes. Zhang et al. [134] examined the expression of six genes that were known to be associated with EOC and demonstrated that EpCAM and ERBB2 expression in CTCs was correlated with platinum resistance and overall survival. Emerging technologies in microfluidics-based CTC isolation and single-cell molecular analysis present new avenues for high-throughput examinations of individual CTCs. Single-cell RNA sequencing of CTCs has been shown to be promising in understanding clonal resistance and metastasis to potentially inform therapeutic decisions in other cancers [135,136]. Furthermore, MS-based workflows have recently been developed to profile the proteomes of CTCs, allowing for another layer of molecular characterization [137,138]. Although single-cell approaches have yet to be applied to CTCs in HGSC, it is proposed that comprehensive molecular characterization of CTCs can provide noninvasive insights regarding intratumor heterogeneity and aid in patient selection for targeted therapies.
Targeted therapies for HGSC
Targeted therapies are therapeutic agents that act on specific molecular targets, pathways or aspects of the tumor microenvironment that drive the cancer phenotype, in an effort to reduce harm in normal cells. Contemporary systemic management of EOC has progressed from chemotherapy to combination treatments and frontline targeted therapy, when appropriate. In this section, we review the current application of targeted therapies in HGSC clinical practice and describe high-throughput biological workflows for therapeutic target discovery.
Current clinical use of targeted therapies
Although there are several emerging therapies under clinical investigation for EOC (e.g., immunotherapies [139] and folate receptor-targeted therapies [140]), we have limited our review to the targeted therapies with the most clinical data that have been approved for use in the clinic (Fig. 3).
Anti-angiogenic agents
Angiogenesis is a rate-limiting step in the evolution of cancer [141] and has therefore been studied as a potential target for systemic treatment. Bevacizumab is a monoclonal antibody that targets vascular endothelial growth factor (VEGF) A which is secreted by tumors to induce the formation of new blood vessels [142].
Early studies of bevacizumab have demonstrated improved progression-free (PFS) and overall survival (OS) in colorectal and renal cancer [143,144]. Two landmark trials that assessed the role of concurrent and maintenance treatment with bevacizumab in EOC were the GOG-0218 (primary endpoint: PFS) and the ICON7 (primary endpoints: PFS and OS) studies [145,146]. Both trials have shown significant improvements in PFS in the intention-to-treat populations with bevacizumab compared to chemotherapy alone but have failed to improve OS in the overall study population. The clinical significance of a three-month difference in PFS has been debated, and as such, bevacizumab is not universally used in the first-line treatment of EOC. In the ICON7 trial, women with high-risk features (inoperable stage III, suboptimal debulking and stage IV disease) randomized to bevacizumab had a significant improvement in mean OS of 4.8 months [147]. Similarly, a subanalysis of the GOG-0218 study suggested that patients with International Federation of Gynecology and Obstetrics (FIGO) stage IV disease may have an increased survival benefit from bevacizumab [148].
In addition to its utility as a first-line therapy, bevacizumab has proven to be effective in patients with recurrent disease. Clinical trials have revealed significant improvements in PFS when bevacizumab was added compared to chemotherapy alone in both platinum-sensitive [149] and platinum-resistant patients [150]. Similar to primary disease, the use of bevacizumab for recurrent disease was not associated with significant improvement in OS for all participants [150,151]. Other anti-angiogenic therapeutics currently under investigation for EOC include pazopanib [152] and nintedanib [153], both of which have shown similar improvements in progression-free survival in clinical trials. Considering the potential severe side effects including hypertension, renal complications, hemorrhage, gastrointestinal perforation and fistula formation, appropriate patient selection and balancing the risks and potential benefits play a pivotal role for anti-angiogenic treatment.
Poly (ADP-ribose) polymerase inhibitors
PARP-1 was first described in 1966 [154] but its pivotal role in ovarian cancer was only recently discovered [155]. PARP-1 and PARP-2 are enzymes that play a critical role in base excision repair, a repair mechanism for DNA single-strand breaks [156]. The inhibition of PARP results in an accumulation of single-strand breaks, which can lead to double-strand breaks during replication. The double-strand breaks are normally repaired by a process termed 'homologous recombination' [156]. In cells with homologous recombination deficiencies, the inhibition of PARP results in a synthetic lethal interaction, as the accumulation of double-strand breaks coupled with inadequate repair mechanisms can lead to chromosomal instability, cell cycle arrest and subsequent apoptosis [157]. Olaparib, niraparib and rucaparib are the three PARP inhibitors that are currently FDA-approved for recurrent ovarian cancer after showing consistent improvement in PFS [88, 158-160].

Fig. 3 Current targeted therapies for high-grade serous ovarian cancer. a Anti-angiogenic agents. Cancer cells secrete vascular endothelial growth factor (VEGF) A, which binds to the vascular endothelial growth factor receptor (VEGFR) to promote angiogenesis and proliferation. Bevacizumab is a monoclonal antibody which inhibits the binding of VEGF to VEGFR, thus hindering angiogenesis and tumor growth. b Poly(ADP-ribose) polymerase (PARP) inhibitors. PARP enzymes mediate base excision repair of DNA single-strand breaks. Inhibition of PARP results in the accumulation of single-strand breaks culminating in DNA double-strand breaks. In cells with homologous repair deficiencies, double-strand breaks are not repaired, resulting in replication fork collapse, chromosome instability and cell death. BER base excision repair; PARP poly(ADP-ribose) polymerase; VEGF vascular endothelial growth factor; VEGFR vascular endothelial growth factor receptor
In the SOLO-1 trial, olaparib was tested as a frontline maintenance treatment in women with newly diagnosed FIGO stage III-IV ovarian cancer with germline or somatic BRCA1/2 mutation following cytoreductive surgery and platinum-based chemotherapy. PFS was significantly improved in the olaparib arm compared to placebo, and the 3-year progression-free survival was 60.4% versus 26.9% (HR 0.3, p < 0.001) [89]. The FDA has subsequently approved olaparib for frontline maintenance treatment in women with platinum-responsive ovarian cancer and BRCA1/2 mutation. More recently, the PRIMA trial has shown that niraparib is also effective in the overall population regardless of the HRD status, but the post hoc subanalysis has clearly shown that those patients with BRCA1/2 mutations and other HRDs benefited most from maintenance treatment with niraparib [161].
For patients at high risk for recurrence/progression, there is currently a lack of evidence to suggest the superiority of either anti-angiogenic agents or PARP inhibitors over the other. Interim data from the ongoing PAOLA study investigating the combination of PARP inhibitors and bevacizumab suggest a significant benefit in PFS from concurrent use of both agents [153]. Future studies will need to identify patient groups who benefit most from PARP inhibitors, anti-angiogenic treatment, evolving therapeutics such as immunotherapies or a combination of these, while balancing the benefits and added toxicities from combination treatments.
Large-scale discovery of therapeutic targets
High-throughput experiments can serve as useful means of discovery in targeted therapy development. These large-scale molecular examinations enable the generation of novel hypotheses regarding putative therapeutic targets to help select candidates for further validation. As HGSC is characterized by a lack of discernible drivers (aside from TP53) and extensive heterogeneity, there remains a vast potential for uncovering unanticipated vulnerabilities as novel targeted therapies. High-throughput experiments for therapeutic target identification can be classified into one of two broad themes: molecular profiling to detect aberrantly expressed molecules in tumors or phenotypic screening to determine the molecules important for cancer cell survival.
Molecular expression profiling
In molecular expression profiling, high-throughput discovery experiments are performed on one or more molecular classes with the underlying assumption that differences in the expression profile of a molecule, or group of molecules (i.e., those associated with a biological pathway), may inform the understanding of disease pathology. As such, the discovery experiments are typically followed by analyses designed to reveal these differences, including differential expression analyses and ontologically informed pathway analyses. Molecular profiling experiments on patient cohorts enable matching clinical data to molecular phenotype, which can be especially apt for the identification of candidates for targeted therapies. In one such study, Coscia et al. [162] compared the proteomic profiles of platinum-sensitive and -resistant HGSC patient samples, revealing cancer/testes antigen 45 (CT45) expression to be predictive of disease-free survival. By establishing a link between demethylating agents and CT45 expression and by linking CT45 to cytotoxic T cell engagement, two potential therapeutic strategies can be devised from these findings.
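As a sketch of the differential expression step described above, the following hypothetical Python example ranks genes by log2 fold change and Welch's t statistic between two patient groups; the gene names and expression values are invented for illustration, not taken from the cited studies:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

def rank_genes(sensitive, resistant):
    """Rank genes by |t|; also report log2 fold change (resistant vs sensitive)."""
    results = []
    for gene in sensitive:
        t = welch_t(resistant[gene], sensitive[gene])
        lfc = math.log2(mean(resistant[gene]) / mean(sensitive[gene]))
        results.append((gene, round(lfc, 2), round(t, 2)))
    return sorted(results, key=lambda r: abs(r[2]), reverse=True)

# Invented expression values (e.g., normalized protein intensities per patient)
sensitive = {"CT45": [1.0, 1.2, 0.9, 1.1], "ACTB": [5.0, 5.1, 4.9, 5.0]}
resistant = {"CT45": [3.9, 4.2, 4.0, 4.4], "ACTB": [5.1, 5.0, 5.2, 4.9]}

for gene, lfc, t in rank_genes(sensitive, resistant):
    print(gene, lfc, t)
```

In practice, such analyses also apply a multiple-testing correction (e.g., Benjamini-Hochberg) across the thousands of molecules profiled before candidates are selected.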
Molecular expression profiling experiments offer the advantage of not requiring a priori information regarding the pathology of the disease. However, a significant hurdle which lies between molecular expression profiling experiments and novel therapeutics is the potential need to develop novel drug compounds. While there are some measures for druggability which can inform therapeutic target selection, drug development remains a costly process with uncertain success [163]. One strategy to mitigate this challenge is to focus on protein classes for which there is a history of therapeutic intervention. For instance, though kinases are often effective targets for cancer therapeutics, established protein kinase therapies have demonstrated limited utility in HGSC. Recognizing this disconnect, Kurimchak et al. [164] profiled the kinome of primary tumors and PDXs to detect differentially expressed kinases in HGSC, revealing a potential therapeutic target, MRCKA.
Surface proteins are another especially useful class of molecules as their accessibility renders them favorable therapeutic targets, as evidenced by the fact that over 58% of the known protein targets of FDA-approved drugs are cell surface proteins [165]. However, proteomic workflows which enrich for surface proteins typically require large starting amounts and are therefore not practical or possible for all systems. Despite the challenges associated with surface proteomic workflows, the profiling of cell surface proteins is well suited for identifying targets for repurposing approved therapeutics and for the development of new therapeutics. Antibody-drug conjugates (ADCs), an emerging class of therapeutics for cancer treatment, enable surface proteins to act as potential therapeutic targets independent of their direct connection to disease pathology [166,167]. One ADC, IMGN853, which targets folate receptor α, is being evaluated in a phase 3 clinical trial for folate receptor-positive platinum-resistant EOC patients [168]. In this case, although there is some evidence to suggest that targeting folate receptor α alone could have an effect on cancer progression [169], the cytotoxic component of IMGN853 is the maytansinoid compound conjugated to the antibody, which targets microtubules.
Phenotypic screening
Phenotypic screening approaches can be used to identify tumor-specific molecular dependencies as putative targets for therapeutic inhibition. Technological advancements have enabled large-scale molecular perturbations which allow for the functional examination of thousands of molecules in a single experiment. Functional genomic screens entail the use of RNA interference (RNAi) or, more recently, clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 systems coupled with NGS to characterize the genes associated with a phenotype of interest [170,171]. In a pan-cancer comparison, Cheung et al. [172] performed genome-wide shRNA knockdown screens in 102 cell lines, including 25 ovarian cancer cell lines, revealing 54 genes exclusively essential for ovarian cancer viability and proliferation, underlining the utility of functional genomic approaches to identify lineage-specific dependencies. Functional genomic screening can also be used to identify concurrent therapeutic targets that improve chemosensitivity of existing therapies. Fang et al. [173] performed a genome-wide CRISPR knockout screen in an HGSC cell line treated with olaparib to identify targets that mimic HRD. Based on this screen, the authors were able to characterize a gene whose knockout increased the cytotoxic effects of olaparib and could potentially extend the clinical utility of PARP inhibitors in HGSC.
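To illustrate how dropout screens of this kind are commonly scored, here is a minimal, hypothetical sketch (guide counts and gene names are invented): guides targeting essential genes become depleted over the course of the screen, so the mean log2 fold change of normalized guide abundance per gene serves as an essentiality score.

```python
import math
from collections import defaultdict
from statistics import mean

def normalize(counts):
    """Convert raw read counts to fractions of the library total (depth normalization)."""
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def gene_scores(initial, final, pseudo=1e-6):
    """Mean log2 fold change of guide abundance per gene (negative = depleted/essential)."""
    n0, n1 = normalize(initial), normalize(final)
    per_gene = defaultdict(list)
    for guide in initial:
        gene = guide.split("_")[0]  # guide IDs assumed to look like "GENE_1"
        lfc = math.log2((n1[guide] + pseudo) / (n0[guide] + pseudo))
        per_gene[gene].append(lfc)
    return {g: round(mean(v), 2) for g, v in per_gene.items()}

# Invented read counts: GENEA guides drop out, GENEB guides stay flat
initial = {"GENEA_1": 500, "GENEA_2": 450, "GENEB_1": 480, "GENEB_2": 520}
final   = {"GENEA_1": 60,  "GENEA_2": 50,  "GENEB_1": 900, "GENEB_2": 1000}

print(gene_scores(initial, final))  # GENEA strongly negative (candidate essential gene)
```

Published analysis frameworks apply additional statistics (guide-level variance modeling, permutation-based significance), but the depletion signal sketched here is the core of the readout.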
In contrast to genome-wide interrogations, functional genomics screens can also be conducted on a subset of experimentally relevant genes. To identify novel synthetic lethal targets in BRCA2-deficient tumors, Mengwasser et al. [174] conducted targeted CRISPR screens in two pairs of isogenic cell lines: one breast cancer pair and one HGSC pair. The isogenic cell lines differed based on the presence of a functional BRCA2 gene, and the screen targeted 380 genes that are involved in DNA damage repair. Interestingly, the authors determined two candidates that demonstrated synthetic lethality not only with BRCA2 deficiencies but also with BRCA1 deficiencies, as evidenced through subsequent investigations. The candidates identified in this study represent potential therapeutic targets for BRCA-deficient tumors that are resistant to PARP inhibitors. Likewise, Baratta et al. [175] designed an in vivo shRNA screen to assess the depletion of ~800 genes in xenografts of a human HGSC cell line. The screen revealed several candidates essential for proliferation/survival of HGSC. Through further investigations in patient-derived cell lines, one gene was identified as a potential target for MYCN-overexpressing tumors.
Although functional genomics experiments are advantageous approaches for identifying molecular vulnerabilities for potential inhibition, exploiting these candidates as therapeutic targets can be hindered by the druggability of proteins. The use of high-throughput drug screens is an alternative phenotypic screening approach to identify actionable dependencies. In these experiments, numerous small-molecule compounds, typically with known mechanisms of action, are simultaneously tested against cells to identify novel vulnerabilities. Kenny et al. [176] recently performed a fully robotic screen of ~45,000 small molecules in an HGSC organotypic model consisting of one of five HGSC cell lines, primary human stromal cells and extracellular matrix components. Subsequent in vitro and in vivo assays identified three compounds that prevent cancer adhesion, proliferation and invasion, suggesting that these compounds can be promising therapeutics for ovarian cancer metastasis. Additionally, drug screens can be used to elucidate indirect mechanisms of targeting undruggable cancer proteins. Zeng et al. [177] screened a small molecule library in two HGSC cell lines to identify alternative methods of down-regulating MYC, an oncogene that is essential when overexpressed in HGSC yet pharmacologically undruggable. This screen revealed a compound that suppressed MYC expression through simultaneous inhibition of three specific cyclin-dependent kinases, hence identifying putative targets for MYC-overexpressing HGSC tumors. A caveat associated with drug screens is that target discovery is restricted to those proteins with existing small molecule inhibitors, thus ignoring the potential of other therapeutic classes such as monoclonal antibodies.
Integrated target discovery workflows
The use of the aforementioned high-throughput experiments is beneficial as an initial step in therapeutic target discovery. Both approaches enable the concurrent screening of numerous molecules to select potential candidates for further validation experiments. However, as technological advancements improve the capabilities of these platforms, these large-scale experiments can identify hundreds of hits and, given the time-consuming nature of molecular biology interrogations, it is often not feasible to individually follow up on every single hit. The use of bioinformatic tools that prioritize targets (e.g., SurfaceGenie [178]) or mining publicly available data, such as TCGA and the Genotype-Tissue Expression (GTEx) project, can help further narrow down candidates, yet well-annotated complete data are not always available for all diseases. Hence, an emerging workflow for therapeutic target discovery is the integration of both high-throughput approaches for a multifaceted characterization of candidates. By leveraging the advantages of these two orthogonal approaches, an integrated workflow can produce a refined list of candidates that are both actionable and essential. Medrano et al. [179] conducted genome-wide shRNA screens and cell-surface characterizations of 27 HGSC cell lines in parallel, resulting in the identification of CD151 as a cell-surface protein that demonstrated essentiality in a subset of HGSC cell lines. Subsequently, the authors performed RNA sequencing to molecularly characterize the discrepancies in response to knockout of the candidate. This study identified both a novel therapeutic target and a molecular marker of target sensitivity, highlighting the utility of integrated high-throughput workflows for HGSC target discovery. Similar target discovery pipelines have proven promising in other cancer settings as well. Martinko et al.
[180] used MS to evaluate changes in cell surface expression associated with oncogenic KRAS and in parallel, conducted a targeted CRISPR knockdown screen of ~ 1600 membrane proteins to functionally characterize the oncogenic KRAS surfaceome. Integration of both datasets resulted in the discovery of CDCP1 as a therapeutic antibody target for KRAS-driven cancer cells. Considering the limited success in translating promising targets into beneficial therapies, comprehensive characterization through both expression profiling and functional analyses early in the discovery process may help ensure the selection of robust targets for therapy development.
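The integration step in such workflows can be sketched as a simple intersection and ranking of hits from the two orthogonal screens; all gene names and scores below are invented for illustration only:

```python
# Hypothetical hits from two orthogonal screens
surface_hits = {"CD151": 4.1, "MUC16": 3.2, "EGFR": 2.5}    # surfaceome enrichment (higher = more tumor-enriched)
essential_hits = {"CD151": -2.8, "MYC": -3.5, "EGFR": -1.1}  # dropout screen score (lower = more essential)

def integrate(surface, essential):
    """Keep genes found in both screens; rank by combined evidence."""
    shared = set(surface) & set(essential)
    # Combined score: surface enrichment times magnitude of depletion
    return sorted(shared, key=lambda g: surface[g] * -essential[g], reverse=True)

print(integrate(surface_hits, essential_hits))  # ['CD151', 'EGFR']
```

Genes appearing in only one screen (here MUC16 and MYC) are excluded, mirroring the requirement that a refined candidate be both accessible on the cell surface and essential for viability.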
Cell lines
Ideal experimental models should accurately reflect tumor biology to ensure maximum translational utility. Given the ease of use and accessibility, immortalized human cancer cell lines are the most widely used models for experimental interrogations of HGSC [181]. However, until recently, the majority of cell lines used in HGSC research were poorly characterized with uncertain histopathological origins. To address these ambiguities, Domcke et al. [182] compared copy-number changes, mutations and mRNA expression profiles of 47 EOC cell lines and 316 HGSC tumor samples examined by TCGA. Strikingly, this extensive evaluation concluded that the most frequently used cell lines in HGSC research poorly recapitulated the genomic and transcriptomic features of HGSC tumors and are likely other EOC histopathologies. The authors recommended an alternative set of cell lines that closely resemble HGSC tumors and thus would be more appropriate as in vitro models. A separate proteomic profiling study of 28 EOC cell lines, two immortalized ovarian surface epithelial cell lines, three primary fallopian tube epithelial cell isolates and eight HGSC tumor tissues revealed distinct groups of cell lines [183]. The majority of cell lines reported to likely represent HGSC as per Domcke et al. [182] clustered with the proteomes of HGSC tumors and fallopian tube samples, further confirming an HGSC histopathology. Additional studies have revealed discrepancies amongst the ability of HGSC cell lines to model tumor metastasis and histopathology in vivo when xenografted [184,185]. Together, these studies illustrate the disconnect between certain model cell lines and HGSC tumors and highlight the importance of informed cell line selection.
Another caveat of in vitro cell line models is the artificial microenvironment invoked by monolayer growth on plastic and the lack of multicellularity. Three-dimensional (3D) culture has emerged as a step toward bridging the gap between in vitro and in vivo experiments. 3D culture more closely resembles the tumor microenvironment by restoring 3D cell-cell and cell-ECM interactions [186,187]. Moreover, research groups have successfully demonstrated co-culturing with patient-derived fibroblasts [188] and patient-derived mesothelial cells [189] in HGSC spheroid models to capture the influence of tumor-stromal cross talk on survival and proliferation. Provided that HGSC disseminates through the release of multicellular aggregates into the peritoneal cavity, 3D organotypic in vitro models have been developed to recapitulate significant events in metastasis and gain insights into tumor biology [190]. Nonadherent 3D models have also been used to investigate cancer stem cell populations enriched in disseminated spheroids which are thought to contribute to chemoresistance in HGSC [191]. The use of organ-specific growth factors to model niche environments has enabled the development of ovarian cancer organoid lines that maintain the genomic and histological features of primary tumors and preserve tumor heterogeneity, highlighting their potential utility for precision medicine research [192].
Genetically engineered mouse models
Genetically engineered mouse models (GEMMs) offer the potential for in vivo tumor investigation. Various molecular biology techniques can be used to introduce genetic modifications in a spatial and temporal manner for in vivo modeling of genetic defects contributing to tumorigenesis [193]. However, the preclinical utility of these models is dependent on the accuracy in recapitulating the histology and pathogenesis of human tumors, an element which has historically proven difficult in the context of HGSC. Given the uncertainty in the site of origin for HGSC, early attempts to develop HGSC GEMMs have focused on targeting the ovarian surface epithelium for genetic manipulations [193]. These models failed to replicate the molecular and clinical features observed in human HGSC tumors. In addition to selecting the correct cell of origin, targeting genes that are relevant to HGSC is also an important consideration when generating GEMMs. Indeed, targeting different combinations of oncogenes and tumor suppressors in the same cell of origin has resulted in different HGSC GEMM phenotypes [194,195]. Fortunately, as molecular understanding of HGSC biology evolves, so does the ability to correctly model the disease. Targeting HGSC relevant genes, such as TP53, BRCA1, RB1 and PTEN, in fallopian tube epithelial cells has resulted in a new generation of HGSC GEMMs [195-197]. These clinically relevant models reproducibly demonstrated the formation of precursor STICs in fallopian tubes and mirrored the aggressive metastatic patterns observed in human HGSCs. Hence, if modeled correctly, GEMMs represent promising options for interrogations of early-stage disease and identifying new therapeutic targets.
Patient-derived xenografts
PDXs are an alternate approach for in vivo experimental models, in which minced fragments of patient tumors are transplanted into immunodeficient mice [198]. The primary advantage of this experimental system is the ability to perform in vivo interrogations of human tumors. Although PDXs have been successfully generated through various different engraftment locations, orthotopic engraftment is preferred as it results in a physiologically relevant microenvironment [199]. In the context of HGSC, there are two engraftment sites that are considered orthotopic: intrabursal (IB) engraftment and intraperitoneal (IP) engraftment [181]. IB engraftment refers to the injection of tumor cells into the ovarian bursa, which is the fat pad surrounding a murine ovary [200]. As there are anatomic differences between the reproductive systems of mice and humans, the bursa can often hinder the extensive peritoneal metastasis characteristic of advanced human ovarian tumors [200].
Alternatively, IP engraftment consists of injecting the tumor cell suspension directly into the peritoneal cavity, mirroring human abdominal tumor dissemination [120,200]. In addition to recapitulating tumor pathology, orthotopic HGSC PDXs have also been shown to maintain molecular profiles highly comparable to patient tumors [201] and emulate patient-specific responses to platinum-based therapy [202]. These studies highlight the advantage in using PDXs for identifying novel precision medicine approaches in HGSC as it provides an in vivo opportunity for investigating tumor heterogeneity. Indeed, Weroha et al. [203] developed an orthotopic PDX bank consisting of 241 EOC models that reflected the molecular diversity observed in patients and can be a promising resource to investigate subtype specific biomarkers and therapies. A limitation of PDX models is the inability to evaluate interactions between the tumor and the immune system, an important facet of the tumor microenvironment [198].
Additional considerations for high-throughput studies
Apart from potentially unsuitable experimental models, there are several other factors that can impede the utility of high-throughput studies in precision medicine discovery efforts. Provided that several preanalytical variables (e.g., time to freezing, storage duration, serum vs. plasma etc.) have been shown to influence molecular profiles [116,204], a lack of standardized sample collection and storage protocols can pose a challenge and potential source of incompatibility for multisite investigations. Though these variables cannot be retrospectively regulated when using samples from biobanks, information about these preanalytical variables should be collected and examined as possible confounders during data analysis. Another consideration is the need for well-annotated clinical cohorts. HGSC patients with extensive disease preventing optimal surgical debulking are often candidates for neoadjuvant chemotherapy [205]; hence, tumor resection is performed only after a round of therapy. As therapeutics can drive the evolution of tumors, tumor samples from these patients likely reflect a different molecular state than those without prior therapy. It is thus essential when conducting high-throughput investigations of clinical samples to use samples with extensive documentation to account for influences from other clinical variables such as treatment history. Additionally, the unprecedented scale of biological data prompts the need for greater computational infrastructure to effectively store and analyze the immense volume of data. While high-throughput experiments are generally well suited for discovery, ultimately to be of clinical benefit, the findings must be integrated into testing regimes which are compatible with the healthcare environment (i.e., cost-effective and quick).
Conclusions and future perspectives
Through applications in molecular subtyping, liquid biopsies, and targeted therapies, advancements in high-throughput technologies have opened new avenues for precision medicine discovery in HGSC. Large-scale tumor profiling has provided insights regarding the molecular complexity underlying tumorigenesis. Appreciation of this vast heterogeneity has warranted diverging from the one-size-fits-all approach traditionally used for the management of HGSC and EOC as a whole. The use of genetic testing for BRCA1/2 and other HRDs as an indication for the use of PARP inhibitors is an example of the adoption of precision medicine into the clinical management of HGSC, yet the dismal five-year survival rate suggests that more work is still needed. The ability to simultaneously examine thousands of molecules in a single experiment has fueled the discovery of numerous putative tumor markers for liquid biopsies in HGSC. Considering the limited utility of single markers (e.g., CA125) due to tumor heterogeneity, the use of high-throughput tools has enabled the potential for uncovering multimarker panels with improved clinical performance. Large-scale biological experiments have also been utilized for the identification of novel therapeutic targets, and integration of orthogonal approaches can be promising for the detection of actionable vulnerabilities in HGSC.
Despite the alluring potential of high-throughput approaches, failure to appreciate the intricate nature of HGSC biology in research design and experimental models can stifle the translational utility of the findings from these experiments. Considering that HGSC and the other histological subtypes of EOC are distinct diseases characterized by differences in molecular profiles, clinical progression and pathogenesis, EOC is often still examined as a single entity without subtype-specific stratification in preclinical and clinical validation studies, thus acting as a potential confounder of findings. Furthermore, despite increasing evidence indicating the fallopian tube epithelium as the primary tissue of origin for HGSC, many studies continue to use ovarian surface epithelium as 'normal' tissue for comparative experiments, resulting in the potential identification of biologically irrelevant biomarkers and therapeutic targets. As such, it is imperative to incorporate our evolving understanding of HGSC biology in research design to leverage the full potential of emerging high-throughput applications in precision medicine.
Removal of Arsenic from Drinking Water by Hydroxyapatite Nanoparticles
Arsenic (As) contained in drinking water can cause adverse effects on human health. This study investigated the sorption of As(V) ions from aqueous solution by hydroxyapatite nanoparticles (nano-HAp). The effects of arsenic ion concentration, nano-HAp dose and pH on removal efficiency were also investigated. Results showed that the removal of arsenate from water using hydroxyapatite nanoparticles improved with increasing pH. The optimum amount of nano-HAp for As(V) removal was found to be 0.6 g/L, with a removal efficiency of 88%. The sorption data were then correlated with the Langmuir and Freundlich adsorption isotherm models. The results indicated that nano-HAp can be used as an effective adsorbent for the removal of As(V) from aqueous solution.
Arsenic compounds are common contaminants in the environment. Because of the toxicity and carcinogenicity of arsenic (Eblin et al., 2006; Hughes, 2002), elevated arsenic concentrations in the environment pose serious problems for human health, especially for populations in Bangladesh, West Bengal, Vietnam, China, Mexico and Chile. The danger of elevated arsenic concentrations in the waters of these countries was underlined by the WHO, which set the recommended limit for arsenic in drinking water at 10 µg/L. Arsenic-contaminated drinking water can cause adverse health effects in human beings. Arsenic disturbs RNA and DNA synthesis, which can consequently lead to cancer. Increased incidence of birth defects, low birth weight, malformation and stillbirth has also been attributed to arsenic compounds (Jain et al., 2000; Kiping et al., 1997; Ng et al., 2001; Bissen et al., 2003; Penrose et al., 2009; Ng et al., 2003; Burkel et al., 1999; Smedley et al., 2002). Conventional technologies for arsenic removal from water are based on coagulation, sorption, ion-exchange reactions or reverse osmosis. Materials used in these processes include Fe(0), Fe(III) oxyhydroxides, Mn(II), Al(III), apatite, silicate sands, carbonates, sulphides, ash and various types of coal (Chmielewská et al., 2008; Daus et al., 2004; DeMarco et al., 2003; Hiller et al., 2007; Lin et al., 2001; Sato et al., 2002; Song et al., 2006). Nowadays, there is also a trend toward alternative, low-cost materials for arsenic removal from water in laboratory- and medium-scale experiments. The effectiveness of chemically modified or native biomass for arsenic removal has been evaluated and demonstrated by various authors (Abdel-Ghani et al., 2007; Boddu et al., 2008; Cernansky et al., 2007; Loukidou et al., 2003; Malakootian et al., 2009; Murugesan et al., 2006; Rahaman et al., 2008; Seki et al., 2005). Calcium hydroxyapatite (HAp), Ca10(PO4)6(OH)2,
has also been used for the removal of heavy metals from contaminated soils, wastewater, and fly ashes (Omar et al., 2003; Takeuchi et al., 1990). Calcium hydroxyapatite (Ca-HAp) is a principal component of hard tissues and has been of interest in industrial and medical fields. Its synthetic particles find many applications in bioceramics, chromatographic adsorbents to separate proteins and enzymes, catalysts for the dehydration and dehydrogenation of alcohols and for methane oxidation, and powders for artificial teeth, bone pastes, and germicides (Elliott et al., 1994). These properties relate to various surface characteristics of HAp, e.g., surface functional groups, acidity and alkalinity, surface charge, hydrophilicity, and porosity. It has been found that the Ca-HAp surface carries P-OH groups acting as sorption sites (Tanaka et al., 2005). The sorption properties of HAp are of great importance for both environmental processes and industrial purposes. Hydroxyapatite is an ideal material for long-term containment of contaminants because of its high sorption capacity for actinides and heavy metals, low water solubility, high stability under reducing and oxidizing conditions, availability, and low cost (Krestou et al., 2004). HAp has been utilized in the stabilization of a wide variety of metals (e.g., Cr, Co, Cu, Cd, Zn, Ni, Pu, Pb, As, Sb, U, and V) by many investigators (Omar et al., 2003; Ramesh et al., 2012; Vega et al., 1999). They have reported that sorption takes place through ion-exchange reactions, surface complexation with phosphate, calcium, and hydroxyl groups, and/or co-precipitation of new partially soluble phases. In this study, the effect of HAp nanoparticles on the removal efficiency of arsenic ions under different conditions was investigated.
MATERIALS AND METHODS
The hydroxyapatite nanoparticles were prepared previously (Montazeri and Biazar, 2011) and characterized using different analyses. The pH values of the solutions were roughly adjusted from 2 to 12 by adding HNO3 and NaOH, respectively, and then accurately noted. Hydroxyapatite nanoparticles at different concentrations were added to each flask, which was then immediately and securely capped. The suspension was then manually agitated, and the pH values of the supernatant liquid were noted. The metal salt Na2HAsO4·H2O was used to prepare the metal ion (As(V)) solution. Sorption studies were carried out by shaking a series of bottles containing different amounts of nano-HAp in 50 mL of metal ion solution at different concentrations and pH values. Suspensions were exposed to ultrasonic waves (50 W, 20 min) to disperse the nanoparticles. The samples were stirred at room temperature at 250 rpm for 1 h (equilibrium time), then centrifuged for 5 min, and the supernatant liquid was analyzed with an atomic absorption spectrometer (S-series, Thermo Scientific, USA).
RESULTS AND DISCUSSION

Effect of pH
The pH is a significant factor in determining the form of the metallic species in aqueous media. It influences the adsorption process of metal ions, as it determines the magnitude and sign of the charge on the ions (Gupta et al., 2005). The effect of solution pH on the sorption of As(V) ions from aqueous solution by nano-HAp at different concentrations was investigated in the pH range of 2-12 with an As(V) concentration of 0.6 g/L. The result is shown in Fig. 1.
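For reference, the removal efficiencies quoted throughout this paper follow the standard definition R(%) = (C0 − Ce)/C0 × 100, where C0 and Ce are the initial and equilibrium concentrations. A minimal sketch of this calculation (the function name and the concentration values are illustrative, not data from this study):

```python
def removal_efficiency(c0, ce):
    """Percent of adsorbate removed, given initial (c0) and
    equilibrium (ce) concentrations in the same units."""
    return (c0 - ce) / c0 * 100.0

# Illustrative values only: a 10 mg/L As(V) solution reduced
# to 1.2 mg/L at equilibrium corresponds to 88% removal.
print(round(removal_efficiency(10.0, 1.2), 1))  # 88.0
```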
Effect of arsenic concentration
To study the effect of the arsenate amount in solution on the sorption of As(V) ions by nano-HAp, different initial concentrations (0.1, 0.2, 0.4, and 0.6 g/L at pH 8) were investigated. The result is shown in Fig. 2. An increase in arsenate absorption occurred with increasing arsenate concentration. Arsenate adsorption onto nano-hydroxyapatite, investigated at these different arsenate concentrations in water, showed an approximately similar increase; as a result, the absorption levels for these different amounts of arsenate are significant.
Effect of contact time
The time-dependent behavior of As(V) adsorption was measured by varying the contact time between adsorbate and adsorbent in the range of 5-120 min. The percentage adsorption of As(V) at different contact times is shown in Fig. 3. It can be observed that the rate of removal of As(V) ions was higher at the initial stage, due to the availability of more active sites on the surface of HAp, and became slower at the later stages of contact, due to the decreased number of active sites (Kannan and Karrupasamy, 1998). It is apparent from Fig. 3 that up to 1 h, the percentage removal of As(V) from aqueous solution increases rapidly and reaches 85%. A further increase in contact time has a negligible effect on the percentage removal. Therefore, a 1 h shaking time was considered the equilibrium time for maximum adsorption. The decrease in the rate of removal of As(V) with time may also be due to aggregation of As(V) around the HAp particles. This aggregation may hinder migration of the adsorbate as the adsorption sites become filled, and the resistance to diffusion of As(V) molecules into the adsorbent also increases (Mittal et al., 2010).
Effect of mass of adsorbent on As(V) removal
The effect of HAp dosage on As(V) removal was analyzed by varying the dosage of HAp, and the result is shown in Fig. 4. It was observed that the removal efficiency increases with increasing HAp dosage. This reveals that the instantaneous and equilibrium sorption capacities of As(V) are functions of the HAp dosage.
Adsorption isotherms
The equilibrium of sorption is described by a sorption isotherm, characterized by certain constants whose values express the surface properties and affinity of the sorbent. Sorption equilibrium is established when the concentration of sorbate in the bulk solution is in dynamic balance with that at the sorbent interface (Oladoja et al., 2008). The adsorption isotherm study was carried out using well-known models such as the Langmuir isotherm (Langmuir, 1915): X/M = qmax·b·Ce/(1 + b·Ce).
Here, b is a constant that increases with increasing molecular size, qmax is the amount adsorbed to form a complete monolayer on the surface (mg/g), X is the weight of substance adsorbed (mg), M is the weight of adsorbent (g), and Ce is the concentration remaining in solution (mg/L). The essential features of the Langmuir isotherm may be expressed in terms of the equilibrium parameter RL, a dimensionless constant referred to as the separation factor, RL = 1/(1 + b·C0), where C0 is the initial concentration (mg/L) (Weber and Chakkravorti, 1974).
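The separation factor RL = 1/(1 + b·C0) classifies adsorption as favorable when 0 < RL < 1, linear when RL = 1, and unfavorable when RL > 1. A minimal sketch (the Langmuir constant b and concentration C0 below are illustrative, not fitted values from this study):

```python
def separation_factor(b, c0):
    """Langmuir separation factor RL = 1 / (1 + b * C0)."""
    return 1.0 / (1.0 + b * c0)

# Illustrative constants: b = 0.05 L/mg, C0 = 20 mg/L.
rl = separation_factor(0.05, 20.0)
print(round(rl, 2))  # 0.5, i.e. favorable adsorption (0 < RL < 1)
```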
In the Freundlich model, X/M = Kf·Ce^(1/n), where Kf and n are constants depending on temperature. Isotherm plots for the sorption of As(V) by nano-HAp are shown in Figs. 5 and 6. The diagrams indicate that the Freundlich isotherm is favorable for the removal of As(V) by nano-HAp. The value of RL also indicates that the Langmuir isotherm is favorable. It can be concluded that the Freundlich isotherm fits the data better than the Langmuir isotherm. The adsorption capacity of nano-HAp for As(V) is compared with other adsorbents in Table 3. The As(V) uptake by nano-HAp found in this work is significantly higher than that of the other adsorbents.
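Model comparison of the kind described above is commonly done on the linearized forms of the isotherms; for Freundlich, log qe = log Kf + (1/n) log Ce. A pure-Python sketch of this linearized fit, using synthetic data (the data points are generated for illustration, not the measurements of this study):

```python
import math

def fit_freundlich(ce, qe):
    """Least-squares fit of log10(qe) = log10(Kf) + (1/n) * log10(Ce).
    Returns the Freundlich constants (Kf, n)."""
    xs = [math.log10(c) for c in ce]
    ys = [math.log10(q) for q in qe]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return 10.0 ** intercept, 1.0 / slope

# Synthetic equilibrium data generated from Kf = 5, n = 2 (qe = 5 * Ce**0.5):
ce = [1.0, 4.0, 9.0, 16.0]
qe = [5.0 * c ** 0.5 for c in ce]
kf, n = fit_freundlich(ce, qe)
print(round(kf, 2), round(n, 2))  # 5.0 2.0
```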
Three types of reactions may control As(V) immobilization by nano-HAp: surface adsorption, cation substitution, or precipitation. The first mechanism is the adsorption of As(V) ions onto the nano-HAp surfaces, followed by an ion-exchange reaction between the adsorbed As(V) ions and the Ca ions of nano-HAp (Suzuki et al., 1984; Ma et al., 1994). Information about the sorption mechanisms has been inferred from the values of the molar ratios (Qs) of cations bound by nano-HAp to Ca desorbed from nano-HAp (Aklil et al., 2004). Fig. 7 presents the effect of calcium concentration on arsenate removal by nano-HAp. For all sorbents tested, increasing the calcium level appeared to assist arsenate sorption. When the calcium concentration increased from 0 to 2.5 mM, the arsenate removal efficiency by nano-HAp increased from 2.4% to 5.4%. The calcium effect on arsenate sorption to nano-HAp may be due to two reasons. First, increasing the calcium concentration in water can inhibit nano-HAp, which can inhibit arsenate sorption to the sorbents. Second, Ca2+ in water can complex with phosphate on the nano-HAp surface, resulting in an increase in sorption sites and subsequently an increase in arsenate sorption (Czerniczyniec et al., 2007; Sneddon et al., 2005). An SEM image of As(V) absorbed by nano-HAp particles is shown in Fig. 8.
CONCLUSION
The results show that hydroxyapatite nanoparticles (nano-HAp) are a powerful adsorbent for removing As(V) from aqueous solution. The optimum dose of nano-HAp for As(V) removal was found to be 0.2 g/L, with a removal efficiency of 88%. The Freundlich isotherm fit the experimental data better than the Langmuir isotherm. The adsorption capacity of nano-HAp was found to be 526 mg/g. Nano-HAp dissolution and hydroxypyromorphite precipitation were the main mechanisms for As(V) immobilization by nano-HAp.
Table 1: Comparison of contact time for As(V) removal
1. It was found that the adsorption capacity of HAp increases with increase in pH in acidic
Neobavaisoflavone May Modulate the Activity of Topoisomerase Inhibitors towards U-87 MG Cells: An In Vitro Study
Despite many advances in therapy, glioblastoma (GB) is still characterized by its poor prognosis. The main reason for this is unsuccessful treatment, which slightly extends the duration of remission; thus, new regimens are needed. One of many types of chemotherapeutics that are being investigated in this field is topoisomerase inhibitors, mainly in combination therapy with other drugs. On the other hand, the search for new anti-cancer substances continues. Neobavaisoflavone (NBIF) is a natural compound isolated from Psoralea corylifolia L., which possesses anti-oxidant, anti-inflammatory, and anti-cancer properties. The aim of this study was to evaluate the effect of NBIF in human U-87 MG glioblastoma cells in comparison to normal human NHA astrocytes, and to examine if it influences the activity of irinotecan, etoposide, and doxorubicin in this in vitro model. We demonstrated that NBIF decreases U-87 MG cells viability in a dose-dependent manner. Furthermore, we found that it inhibits cell growth and causes glutathione (GSH) depletion more intensely in U-87 MG cells than in astrocytes. This study also provides, for the first time, evidence of the potentialization of the doxorubicin effect by NBIF, which was shown by the reduction in the viability in U-87 MG cells.
Introduction
Glioblastoma (GB), a grade IV glioma, is the most frequent primary tumor of the central nervous system. It is also one of the deadliest cancers [1,2]. Only approximately 5% of patients diagnosed with this neoplasm survive 5 years after recognition of the disease [3]. The main reasons for such low survival are late diagnosis and ineffective treatment, connected with difficulties such as drug resistance and tumor heterogeneity. At present, the standard of GB treatment is surgical removal of the tumor [4]. Due to the deep infiltration of GB cells into brain tissue, postoperative patients undergo radio- and chemotherapy. However, the current therapeutic strategy only slightly extends the duration of remission, and the tumor typically recurs [5,6].
The major pharmaceuticals used in GB treatment are alkylating agents, such as temozolomide (TMZ) [7]. Its mechanism of action involves methylation of guanine at the O6 position, which results in DNA damage and apoptosis of GB cells [8]. Nonetheless, clinical data indicate that TMZ is often inefficient [5,8]. This is due to the activity of O6-methylguanine-DNA methyltransferase (MGMT), a DNA repair enzyme, which limits the effect of TMZ [2,5,8]. The expression of its gene is elevated in up to 60% of GB cases, making it a major obstacle to successful treatment [9]. Thus, there is a strong need for a new therapeutic approach.
A group of the anti-cancer agents that might be applied in GB therapy is topoisomerase inhibitors, such as irinotecan, etoposide, or doxorubicin. They are widely used in the
NBIF Decreases Viability and Amount of NHA and U-87 MG Cells without Affecting the Cell Cycle
To determine the effect of NBIF on the viability of human glioblastoma U-87 MG cells and normal human NHA astrocytes, a WST-1 assay was conducted (Figure 1). Concentrations of the isoflavone spanned from 1 to 100 µM, and cell exposure to it lasted 48 h. A significant decrease in the viability of U-87 MG cells was observed at each NBIF concentration used, ranging from 1 µM (by ca. 12% of control) to 100 µM (by ca. 42% of control). In NHA cells, a significant viability reduction was noted at the two highest concentrations, i.e., 75 µM (ca. 11% of control) and 100 µM (ca. 28% of control). The viability of astrocytes and glioblastoma cells at the highest tested concentration was estimated at 72.17% (±6.28) and 58.23% (±7.69), respectively. The IC75 values of NBIF were calculated in GraphPad Prism software and were estimated at 36.6 µM in U-87 MG cells and 96.3 µM in NHA cells.
Additionally, to evaluate the influence of NBIF on the proliferation of U-87 MG and NHA cells, a counting assay was performed (Figure 2A,B). For this purpose, two concentrations of NBIF were selected (25 µM and 100 µM) and incubated with the cells for 48 h. A significant reduction in the number of living cells (Figure 2A) was observed at the highest concentration (100 µM) in both NHA and U-87 MG cells, by ca. 31% and 23% (compared to the controls containing the total number of cells as 100%), respectively. The percentage of living cells in the groups treated with 100 µM NBIF compared to the controls was estimated at approx. 60% in NHA astrocytes and 75% in U-87 MG cells. The number of dead cells (stained with DAPI) was minute in every group (Figure 2B).
The analysis of the cell cycle profiles of NHA and U-87 MG cells after exposure to NBIF was performed using a fluorescence image cytometer (Figure 3). The subpopulations of cells were distributed throughout four phases of the cell cycle: the G1/G0 phase, where one set of chromosomes per cell persists; the S phase, in which DNA synthesis takes place; the G2/M phase, where two sets of paired chromosomes per cell are present prior to division; and the sub-G1 phase, consisting of cells containing less than one DNA equivalent (fragmented DNA). As shown in Figure 3, NBIF caused only slight changes in the cell cycle distribution. In NHA cells, the percentage of the G2/M fraction decreased from 19% to 16%, and in U-87 MG cells from 27% to 25%. Additionally, an increase in the G1/G0 fraction of NHA cells treated with NBIF, from 70% to 75%, was noted.
NBIF Causes Reduction in the Level of Cellular GSH in U-87 MG Cells
Glutathione (GSH) is the major intracellular thiol, the reduced form of which persists in non-stress conditions, thus reflecting cell vitality and homeostasis [32]. To evaluate the impact of NBIF on the level of the GSH, cytometric analysis using a thiol-group-specific fluorescent dye, VitaBright-48™ (VB48), was performed. Since the cell count assay showed the effect only at the highest concentration, in this experiment, cells were treated with NBIF at 100 µM for 48 h. The results revealed a change in the GSH oxidation status in both NHA and U-87 MG cells ( Figure 4A). NBIF caused a significant reduction in the subpopulation of cells with a high level of reduced thiols by 20% in NHA astrocytes and 40% in U-87 MG cells, as compared to the controls ( Figure 4B).
The NBIF Effect on the Activity of Irinotecan, Etoposide, Doxorubicin, and Temozolomide on the Cell Viability
Several studies have shown that isoflavones influence the effects of some chemotherapeutic agents, including topoisomerase inhibitors [33][34][35][36][37][38][39]. A WST-1 assay was performed to evaluate if NBIF influences the activity of irinotecan, etoposide, and doxorubicin. First, in the preliminary part of the experiment, we examined the effect of irinotecan, etoposide, and doxorubicin at concentrations ranging from 1 µM to 200 µM on the cell viability in NHA and U-87 MG cells at 48 h and 72 h (Figure 5). In cancer cells treated with irinotecan and etoposide, significant differences (p < 0.001) compared to the controls were noted at concentrations starting from 10 µM. Similar results were observed in NHA cells treated with irinotecan, but in astrocytes incubated with etoposide, a significant reduction in cell viability was found at 50 µM and higher concentrations. In groups treated with doxorubicin, significant differences were spotted at every used concentration. Additionally, to inspect the effect of TMZ on NHA cells, we performed a WST-1 assay in groups treated with this agent at concentrations of 1 µM to 100 µM for 24 h, 48 h, and 72 h (lower-right panel), since according to Respondek et al. [7], TMZ had no effect on U-87 MG cells at these concentrations; we retrieved similar results in astrocytes. The IC50 values for irinotecan, etoposide, and doxorubicin for 48 h and 72 h exposure were calculated in GraphPad Prism software and are presented in Table 1. For the next part of the experiment, concentrations of irinotecan, etoposide, and doxorubicin were selected (10 µM, 50 µM, and 1 µM, respectively) on the basis of the results obtained in the previous part. These were the lowest drug concentrations in groups incubated for 48 h at which a statistically significant difference in the viability between NHA and U-87 MG cells had been present. We assumed that the viability of normal cells for the combined treatment should be as high as possible.
For irinotecan, the lower concentration was chosen due to the large decrease in viability of normal cells at 50 μM. In the TMZ case, the highest concentration from the previous part of the study was used.
To examine the effect of NBIF on the activity of these chemotherapeutics, we treated NHA and U-87 MG cells with mixtures containing irinotecan (10 µM), etoposide (50 µM), doxorubicin (1 µM), or TMZ (100 µM) combined with the isoflavone at 25 µM and 100 µM for 48 h, and assessed the cell viability via the WST-1 test (Figure 6). No favorable effects were observed in groups incubated with NBIF combined with irinotecan, etoposide, or TMZ. Moreover, in U-87 MG cells treated with the mixture of NBIF (25 µM) and irinotecan or TMZ, an increase in the cell viability was noticed, as compared to the drugs alone. Another unbeneficial effect was found in NHA astrocytes treated with the combination of NBIF (25 µM) and etoposide, where a slight reduction in the viability was observed.
However, we found significant differences in cells treated with the NBIF-doxorubicin mixture: the viability was reduced by ca. 19% in NHA cells and ca. 20% in U-87 MG cells, compared to the groups treated with doxorubicin alone. The cell viability in these groups was estimated at approx. 47% and 23%, respectively.
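The IC50/IC75 values reported above were obtained with GraphPad Prism's dose-response fitting. As a rough illustration of the idea only (not Prism's four-parameter logistic fit), an IC50 can be approximated by linear interpolation between the two measured concentrations that bracket 50% viability; the dose-response numbers below are invented for the example:

```python
def interpolate_ic50(concs, viability):
    """Approximate the concentration giving 50% viability by linear
    interpolation between the two bracketing data points.
    concs: ascending concentrations; viability: % of control."""
    for i in range(len(concs) - 1):
        v1, v2 = viability[i], viability[i + 1]
        if v1 >= 50.0 >= v2:
            c1, c2 = concs[i], concs[i + 1]
            return c1 + (v1 - 50.0) * (c2 - c1) / (v1 - v2)
    raise ValueError("50% viability is not bracketed by the data")

# Invented dose-response data (concentration in µM vs. % viability):
concs = [1.0, 10.0, 50.0, 100.0]
viab = [90.0, 70.0, 40.0, 25.0]
print(round(interpolate_ic50(concs, viab), 1))  # 36.7
```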
Discussion
Treatment of GB is an immense challenge for modern medicine. Current regimens are usually unsuccessful and do not sufficiently improve survival; thus, new therapies need to be investigated [5,6]. Among the potential drug candidates are anti-neoplastic agents, such as topoisomerase inhibitors [11][12][13][14][15][16]. On the other hand, there are natural substances, such as isoflavones, that have proven to be effective against various cancer types [31]. There are also reports about their beneficial interactions with topoisomerase inhibitors [33][34][35][36][37][38][39]. In this paper, we investigated the effect of NBIF on the topoisomerase inhibitors' activity, i.e., irinotecan, etoposide, and doxorubicin, on GB cells for the first time.
In several studies, NBIF has proven to decrease cell viability in cancer cells [27][28][29]. This effect was demonstrated by Kim et al. [29] in U-373 MG glioma cells. We found similar results in our study, where the reduction was observed at every used concentration in U-87 MG cells from 1 µM to 100 µM (Figure 1). Furthermore, herein we provide a comparison of the NBIF effect to normal cells, which are NHA astrocytes. In those cells, a decrease in the viability was not noticed at the low concentrations of NBIF (0 µM-50 µM). This was assumed as a sign of no cytotoxicity in normal cells, which is desirable for potential anticancer agents. We also demonstrated that NBIF at 100 µM inhibits cell growth by measuring the number of the U-87 MG and NHA cells treated with the isoflavone (Figure 2). However, we did not notice any significant alterations in the distribution of the cell cycle caused by NBIF (Figure 3). This might suggest that it inhibits cell growth in a cell cycle-independent manner. Together with the fact that we did not observe signs of cell death caused by NBIF, we suppose that this compound has anti-proliferative properties. This is in line with NBIF being considered an inhibitor of DNA polymerase and topoisomerase II, enzymes crucial for cell proliferation [40].
GSH is necessary for numerous cellular processes, such as proliferation; thus, its imbalance may lead to the inhibition of cell growth [41]. In cancer cells, the GSH level is typically elevated in order to sufficiently buffer the augmented oxidative stress caused by increased metabolism. It is also involved in the detoxification of xenobiotics. Depletion of GSH therefore results in oxidative damage to DNA and cell organelles. Moreover, excessive levels of GSSG contribute to this, as it acts as a pro-oxidant [42]. In fact, flavonoids are able to induce a reduction in the intracellular level of GSH [43,44]. Although these compounds are characterized by their anti-oxidant properties, depending on the conditions, they are able to inhibit the anti-oxidant defense system of the cell [45]. This might be due to the modulation of transcription factors involved in GSH recycling, e.g., nuclear factor erythroid 2-related factor 2 (Nrf2), as polyphenols have been proven to act in this manner [46,47]. Herein, we report for the first time that NBIF triggers GSH depletion in GB cells (Figure 4A,B). In our experiment, this compound at a concentration of 100 µM caused an over 2-fold decrease in the subpopulation with a high level of GSH in U-87 MG cells, and a 1.28-fold decrease in NHA astrocytes.
Several isoflavones have been found to enhance the activity of anti-cancer drugs [28][29][30][31][32][33][34]. Since GSH depletion is considered a mechanism to sensitize cancer cells to chemotherapeutic agents [36,37], we investigated whether NBIF-induced GSH reduction influences the activity of irinotecan, etoposide, and doxorubicin, as well as TMZ, currently used in GB therapy (Figures 5 and 6). Among the drugs used, in groups treated with irinotecan, etoposide, and TMZ, we did not notice any significant favorable effects of the combination with NBIF on the cell viability (Figure 6). Interestingly, NBIF as an additional anti-proliferative agent did not affect the reduction in the cell viability in these groups when compared to cells treated with the drug alone. It seems that the only agent that caused the effect was the chemotherapeutic. However, we discovered a potentializing action in cells treated with the NBIF-doxorubicin mixture. In U-87 MG cells, the reduction in the viability was almost 2-fold compared to cells treated with the drug alone, and greater than in NHA astrocytes. In the literature, it has been reported that some isoflavones exhibit sensitizing action when combined with doxorubicin. One of the studies concerning this subject was by Xue et al. [30], where genistein sensitized drug-resistant human breast cancer cells to doxorubicin. Another studied isoflavone, biochanin A, has been demonstrated to have similar properties: it has been shown to inhibit proliferation in osteosarcoma cells [33] and to reverse drug resistance in colon cancer cells [34]. There is also a study conducted on glioma cells, i.e., on U-87 MG cells, by Liu et al. [31], in which formononetin, a soy isoflavone, was proven to enhance the effect of doxorubicin.
Our findings are consistent with these experiments; in comparison to the results on formononetin-treated U-87 MG cells, NBIF alone was shown to be more effective at the same concentrations, and when combined with doxorubicin it reduces the cell viability in a similar manner.
Our work is a first step in exploring the antiglioblastoma properties of NBIF with respect to interactions with topoisomerase inhibitors. We demonstrated that this isoflavone potentiates the effect of doxorubicin. Thus, our findings provide a strong basis for future research oriented toward elucidating the mechanisms, especially the molecular processes, underlying these observations.
Cell Culture
U-87 MG cells were cultured in DMEM medium supplemented with fetal bovine serum, and NHA cells were incubated in Gibco Astrocyte Medium, consisting of DMEM, N-2 Supplement, and One Shot fetal bovine serum. All media were supplemented with penicillin G (10,000 U/mL), neomycin (10 µg/mL), and amphotericin B (0.25 mg/mL). U-87 MG and NHA cultures were maintained at 37 °C in a humidified 5% CO2 atmosphere. The experiments were performed at passages 8-12 of NHA cells.
Cell Vitality Assay-Assessment of the Level of Cellular Reduced Glutathione (GSH)
In order to evaluate the intracellular level of reduced GSH, NHA and U-87 MG cells were seeded in T-75 flasks at a density of 0.8 × 10^6 per flask and, after 48 h, treated with NBIF (100 µmol/L) for 48 h. After the incubation, cells were collected by trypsinization and counted. One volume (10 µL) of Solution 5, containing acridine orange that stains all cells, PI that stains dead cells only, and VitaBright-48™ (VB48), which stains viable cells with an intensity depending on the level of thiols, was added to 19 volumes (190 µL) of medium containing 1 × 10^6 suspended cells. The stained-cell suspensions were loaded into NC-Slides™ A8 and analyzed with the NucleoCounter NC-3000 fluorescence image cytometer using the Vitality assay protocol.
Fixed Cell Cycle-DAPI Assay
The cell cycle was analyzed using the NucleoCounter NC-3000 fluorescence image cytometer. Cells were seeded in T-75 flasks (at a density of 0.8 × 10^6 per flask). After 48 h, they were treated with NBIF (100 µmol/L) for 48 h, then harvested by trypsinization and counted. One million cells were suspended in 0.5 mL PBS and fixed with 4.5 mL of 70% cold ethanol for at least 2 h. Then the ethanol was removed, and the cells were washed with PBS and centrifuged for 5 min at 500 × g. To the cell pellets, 0.5 mL of Solution 3 (containing DAPI and the cell-membrane-disrupting detergent Triton X-100) was added. After 5 min of incubation at 37 °C, stained cells were loaded into an NC-Slide A8™ and analyzed with the NucleoCounter NC-3000 fluorescence image cytometer using the Fixed Cell Cycle-DAPI assay protocol.
Statistical Analysis
In all experiments, the mean values ± SD of at least three separate experiments performed in triplicate were calculated. Differences between groups were analyzed in GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA) by unpaired t-test or one-way ANOVA followed by post-hoc Dunnett's or Tukey's test, as appropriate. A p-value lower than 0.05 was considered indicative of a statistically significant difference.
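The group-comparison workflow described above can be sketched in open-source Python (SciPy standing in for GraphPad Prism); the viability numbers below are synthetic and purely illustrative, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic % viability for three experiments in triplicate (n = 9 per group).
control = rng.normal(100, 5, 9)   # untreated cells
treated = rng.normal(80, 5, 9)    # NBIF-treated cells (illustrative effect size)
combo = rng.normal(60, 5, 9)      # drug + NBIF combination

# Two-group comparison: unpaired (independent-samples) t-test.
t_stat, p_two = stats.ttest_ind(control, treated)

# Multi-group comparison: one-way ANOVA; a post-hoc Dunnett or Tukey test
# would follow for the pairwise contrasts, as in the paper.
f_stat, p_anova = stats.f_oneway(control, treated, combo)

print(p_two < 0.05, p_anova < 0.05)  # -> True True
```

With the large synthetic effect sizes both tests are significant; on real triplicate data the post-hoc step decides which specific group pairs differ.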
Conclusions
Taken together, our results shed new light on the anti-cancer properties of NBIF. Herein, we report for the first time that this compound, at a concentration of 100 µM, causes GSH depletion in U-87 MG cells, exhibits an anti-proliferative effect against them, and sensitizes them to doxorubicin. Therefore, further studies of the molecular basis underlying the action of NBIF need to be carried out.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Nutritional status and serum zinc and selenium levels in Iranian HIV infected individuals
Background Human immunodeficiency virus infected individuals are prone to malnutrition due to increased energy requirements, enteropathy and increased catabolism. Trace elements such as zinc and selenium play a major role in maintaining a healthy immune system. This study was designed to evaluate the nutritional status of Iranian subjects newly diagnosed with human immunodeficiency virus infection and to compare serum levels of zinc and selenium in these patients with those of sex- and age-matched healthy subjects. Methods After an interview and physical examination, nutritional assessment was done based on clinical and anthropometric parameters. Body mass index (normal range 18.5–27 kg/m2 based on age) of less than 16, 16–16.9 and 17–18.4 kg/m2 was considered severe, moderate and mild malnutrition respectively. Serum levels of zinc and selenium were measured by graphite furnace atomic absorption. Results Severe, moderate and mild malnutrition were detected in 15%, 38% and 24% of human immunodeficiency virus infected individuals respectively. Compared with the healthy control group, serum levels of zinc and selenium in the human immunodeficiency virus infected subjects were significantly lower (P = 0.01 and P = 0.02 respectively). Conclusion Malnutrition was found to be prevalent in Iranian human immunodeficiency virus infected individuals, and low serum zinc and selenium levels are common in this population.
Background
Human Immunodeficiency Virus (HIV) infection is a major health problem in the world, and HIV-infected individuals are vulnerable to malnutrition due to several factors including inadequate nutrient intake (anorexia, gastrointestinal complications such as nausea and vomiting, oral and esophageal sores), nutrient loss (malabsorption and/or diarrhea), metabolic alteration (increased protein turnover and changes in fatty acid metabolism), and drug-nutrient interactions [1,2].
Although malnutrition is more frequent at the end of the HIV infection course, it can also occur at the onset of the disease, before severe immunosuppression [3].
Malnutrition and HIV infection can deteriorate immune system function including decline in CD4 lymphocyte count and delayed type immune reactions [2].
Functional status and survival of HIV-infected patients are affected by their nutritional condition [4]. The critical role of nutritional support alongside highly active anti-retroviral therapy (HAART) in HIV-infected individuals has been recognised, and the American Dietetic Association recommends nutritional support as a part of the care provided to HIV-infected patients [5].
Trace elements, especially zinc (Zn) and selenium (Se), are important for maintaining a healthy immune system. Zinc deficiency can reduce T-cell generation and depress humoral and cell-mediated immunity [6,7]. Selenium deficiency also has several medical implications, including impaired immune response [8]. Taylor et al. proposed a role for Se modification both in-vitro and in-vivo in HIV infection [9]. The main route of HIV transmission in Iran is via injection drug use (IDU), and there are no data about nutritional status in this population. The goals of this study were 1) to evaluate the nutritional status of newly diagnosed Iranian HIV-infected individuals and 2) to compare the serum Zn and Se levels in these patients with those of healthy individuals.
Method
This study is a one-year cross-sectional, descriptive-analytic survey conducted at the Iranian Referral HIV/AIDS Research Center affiliated to Tehran University of Medical Sciences. This center is supported by the Iranian Ministry of Health and Medical Education and provides free paraclinical, clinical and consultation services for any volunteer who may be at risk of infection with HIV or any other sexually transmitted disease. The study group included newly diagnosed HIV-infected adult males whose infection was confirmed with anti-HIV antibody tests (ELISA and Western blot). The control subjects were age-matched healthy males related to the HIV-infected individuals (who accompanied them), without any medical problem at the time of the study or history of any chronic disease, and with a negative anti-HIV antibody test.
The study protocol was approved by the institutional review board and all patients provided written informed consent.
During the patient interview, demographic data including social, behavioral and medical history were collected on the designed forms. Nutritional status of each patient was assessed using anthropometric parameters. Body weight was determined to the nearest 0.1 kilogram (kg) using an adult balance and standing height was determined to the nearest centimeter (cm). Body Mass Index (BMI) was calculated using the following formula: BMI = body weight (kg) divided by the square of height (m). BMI (normal range 18.5-27 kg/m2 based on age) of less than 16, 16-16.9 and 17-18.4 kg/m2 was considered severe, moderate and mild malnutrition respectively [10]. All patients were asked about body weight changes during the past six months. Up to 10% body weight loss was considered significant and more than 10% body weight loss was considered severe weight loss [11]. According to the definition of the Centers for Disease Control and Prevention (CDC), wasting syndrome was defined as an involuntary weight loss of greater than 10% of baseline body weight during the past 12 months or at least 5% loss of body weight during the past six months [12].
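The BMI calculation and the malnutrition cut-offs used in the study translate directly into code; this is an illustrative sketch (the example weight and height are invented), not part of the study's methods.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = body weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def classify_malnutrition(bmi_value: float) -> str:
    # Cut-offs as used in the study: <16 severe, 16-16.9 moderate,
    # 17-18.4 mild, 18.5-27 normal (based on age).
    if bmi_value < 16:
        return "severe"
    if bmi_value < 17:
        return "moderate"
    if bmi_value < 18.5:
        return "mild"
    return "normal"

# Hypothetical subject: 50 kg, 1.75 m -> BMI ~16.3, moderate malnutrition.
value = bmi(50, 1.75)
print(round(value, 1), classify_malnutrition(value))  # -> 16.3 moderate
```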
Clinical assessments including medical history and physical examination were conducted to identify any sign or cause of malnutrition. Physical appearance, opportunistic infections, diarrhea, symptoms of gastrointestinal distress such as nausea and vomiting, medications, use of herbal supplements and functional status were considered in the clinical assessment. Hepatitis B surface antigen (HBsAg) was detected with a second-generation enzyme-linked immunoassay (ELISA) kit (Diasorin, Italy), and a third-generation ELISA kit (DRG, Germany) was used for HCV-Ab detection.
Six milliliters (mL) of fasting blood was collected from all HIV-infected and healthy subjects under trace-element-free conditions. Blood samples were centrifuged at 3000 g for five minutes. The collected serum was stored at -70°C until analysis. All glassware and bottles used for separation of serum and further analysis were previously soaked in 10% nitric acid and rinsed thoroughly with deionized water. Serum Zn and Se levels were measured using an atomic absorption spectrophotometer (Shimadzu AA-680, Japan). Sample and standard concentrations were read in duplicate.
For this study, Zn deficiency was defined as a serum level < 67 mcg/dL, using the cut-off referenced by Bender and Bender for normal plasma Zn (67-183 mcg/dL) [13]. Selenium deficiency was defined as a level < 85 mcg/L, which is a cut-off associated with increased mortality [14].
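The deficiency cut-offs above can be encoded as simple predicates; the example values are the study's reported mean concentrations for the injection-route patients (32.4 mcg/dL Zn, 55.8 mcg/L Se), used here only for illustration.

```python
ZN_NORMAL_RANGE = (67, 183)   # mcg/dL, normal plasma Zn (Bender and Bender)
SE_CUTOFF = 85                # mcg/L, cut-off associated with increased mortality

def zinc_deficient(zn_mcg_dl: float) -> bool:
    """Zn deficiency: serum level below the lower bound of the normal range."""
    return zn_mcg_dl < ZN_NORMAL_RANGE[0]

def selenium_deficient(se_mcg_l: float) -> bool:
    """Se deficiency: serum level below the mortality-associated cut-off."""
    return se_mcg_l < SE_CUTOFF

print(zinc_deficient(32.4), selenium_deficient(55.8))  # -> True True
```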
Data were analyzed using SPSS (Chicago, IL, USA) software, version 11.5. Normal distribution of the data was assessed using the Kolmogorov-Smirnov test, and the independent-samples t-test was used to compare numeric variables such as age, weight, height, serum albumin, and serum Zn and Se levels between HIV-infected patients and healthy subjects. Differences of serum Zn and Se concentrations from the recommended cutoffs were evaluated by one-sample t-test. ANOVA was used to compare serum Zn and Se concentrations between groups categorized by malnutrition severity. For determination of differences in severity and prevalence of malnutrition between HIV-infected individuals and the healthy group, chi-square and Fisher's exact tests were used. Correlations between data were evaluated using Pearson correlation. Descriptive statistics (cross-tabs) followed by chi-square and risk selection were used for generation of odds ratios and confidence intervals. P-values of less than 0.05 were considered significant.
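The odds-ratio-with-confidence-interval step corresponds to the standard 2×2-table calculation (Woolf's logit method for the 95% CI). A minimal sketch follows; the counts are hypothetical, not the study's raw cross-tabulation.

```python
import math

# Hypothetical 2x2 table: exposure (e.g. severe malnutrition) vs Zn deficiency.
a, b = 12, 3    # exposed:   deficient / not deficient
c, d = 53, 32   # unexposed: deficient / not deficient

odds_ratio = (a * d) / (b * c)

# Woolf's method: 95% CI on the log-odds-ratio scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))
```

SPSS's cross-tabs "Risk" option produces the same quantities from the same table.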
Results
One hundred HIV-infected adult patients with a mean age of 35.4 ± 7.8 years (range: 21-45 years) and 100 healthy individuals with a mean age of 32.4 ± 7.8 years (range: 20-43 years) completed this study. Table 1 shows the socioeconomic status and past medical history of the HIV-infected patients. Comparisons of demographic characteristics (including age, weight, height, BMI) and some nutritional parameters (including serum albumin concentration and CD4 lymphocyte count) of HIV-infected and healthy subjects are shown in Table 2. None of the HIV-infected individuals reported any significant past medical problem. In this study 31% of injection drug users had a history of medication consumption such as benzodiazepines, acetaminophen-codeine, tramadol, antibiotics and non-steroidal anti-inflammatory drugs, and 3% of them used herbal medications. Most of the HIV-infected individuals were pale, 60% of them were anemic (serum hemoglobin level of less than 12 g/dL) and all of them had some degree of temporal atrophy. Gastrointestinal symptoms including nausea, vomiting, diarrhea and decreased appetite were observed in 9% of patients.
Significant and severe recent weight losses were detected in 7% and 5% of patients respectively. Based on CDC definition, 12% of the patients had wasting syndrome.
Serum albumin level was significantly lower in HIV-infected patients compared with healthy subjects (P = 0.03). Seventeen percent of patients had a serum albumin level of less than 2.5 g/dL. Additionally, serum albumin concentration was significantly lower in HIV-infected subjects who suffered wasting syndrome than in HIV-positive individuals without wasting (2.1 g/dL versus 2.7 g/dL; P = 0.01). There was a positive correlation between patients' serum albumin level and CD4 lymphocyte count (r = 0.9, P = 0.003). Additionally, individuals with severe malnutrition had significantly lower CD4 counts than individuals with normal status or mild or moderate malnutrition (P = 0.004). Table 3 compares the prevalence of different types of malnutrition between HIV-infected individuals and healthy subjects. Moderate malnutrition was the most prevalent type of malnutrition in HIV-infected individuals (observed in 38% of patients), followed by mild malnutrition (observed in 24% of the patients).
Serum levels of Zn and Se among HIV-infected and healthy subjects are shown in Table 4. As can be seen, serum levels of Zn and Se were significantly lower in HIV-infected patients (P = 0.01 and P = 0.02 respectively). Mean concentrations of Zn and Se were significantly lower than the recommended cutoffs (P = 0.01 and P = 0.03 respectively). In addition, patients who may have been infected through a used syringe had significantly lower serum Zn (32.4 ± 10.6 vs. 67.2 ± 14.3 mcg/dL) and Se (55.8 ± 14.6 vs. 84.1 ± 9.9 mcg/L) concentrations compared with those who were probably infected via sexual contact (P < 0.001 for both comparisons).
The prevalence of Zn and Se deficiency among the HIV-infected individuals and healthy persons is also shown in Table 4. As presented in this table, 65% and 38% of HIV-infected individuals had Zn and Se deficiency respectively. Patients with moderate malnutrition had significantly lower serum Zn and Se levels than non-depleted patients. Severe malnutrition was a risk factor for Zn deficiency (odds ratio = 2.3; 95% confidence interval = 1.2-4.5; P = 0.001).
Discussion and conclusion
In this study, 77% of the newly diagnosed HIV-infected patients evaluated, who were not in the advanced phase of the disease, were found to have some degree of malnutrition.
Malnutrition is a significant clinical problem in HIVinfected individuals. In this population, wasting has been associated with disease progression and increased mortality [15].
Although malnutrition is usually encountered at the advanced phase or end of the HIV-infection course, as seen in our study it may also occur in the first stages of HIV infection [16].
According to the last report of the Iranian Diseases Prevention and Control Center, 94.3% of Iranian HIV-infected individuals are male [17], and consequently most patients referred to our center were male, so the present study was designed for this sex group.
As anthropometric measurements provide an inexpensive and non-invasive method of evaluating nutritional status [18], these parameters were used in this study to evaluate the nutritional status of HIV-infected individuals.
The results of this study showed moderate to severe malnutrition in 53% of the HIV-infected subjects. Contrary to other studies which reported sexual contact as the main route of HIV infection [19,20], the most prevalent route of HIV infection in our patients was IDU. Moderate to severe malnutrition was detected in 68% of this main subgroup of the patients, which is comparable with the findings of Nazrul Islam et al., who reported mild to severe malnutrition in more than 60% of their patients who were drug addicts [21]. Due to educational, socioeconomic and behavioral characteristics, addicted persons, especially injection drug users, are more vulnerable to malnutrition [21]. Most of the injection drug users in the present study had less than 10 years of education and were unemployed, unmarried smokers with an insufficient monthly income. Some of them were also homeless and alcoholic.
Although it has been reported that there is no correlation between body weight loss and level of immunosuppression [16], the results of this study showed a significant inverse relationship between severity of weight loss and serum CD4 lymphocyte count, which is compatible with the findings of Rivera et al. [22].
As reported in another study [23], wasting syndrome was present in about 12% of the patients. During wasting in HIV infection, the body tries to compensate for energy deficits from available sources such as visceral proteins [24]. Although normal serum albumin concentrations have been reported in HIV-infected patients [5], serum albumin levels in our patients were significantly lower than those of the healthy subjects. Moreover, the patients with wasting syndrome had significantly lower serum albumin than the patients without wasting.
Similar to the findings of Koch et al. [25], Graham et al. [26] and Zamrzly et al. [27], serum levels of Zn and Se in our HIV-infected individuals were significantly lower than in the healthy group.
This is the first study on the nutritional status of Iranian HIV-infected individuals, and there are some limitations to our study. The first is that the cross-sectional design limits our conclusions and does not provide information about the effects of nutritional status on disease progression and outcome in HIV-infected individuals. We have started a controlled clinical trial to assess the effects of nutritional support on disease progression and outcome in HIV-infected patients. On the other hand, serum Zn and Se levels may not be the best indicators of the total body stores of these trace elements, and cellular concentrations or related enzyme activities may be more reliable.
Based on World Health Organization nutritional recommendations for HIV-infected persons, adequate nutrition is critical for health and survival for all subjects regardless of HIV infection status [28]. Following presentation of the study results, nutritional assessment became part of the clinical assessment of HIV-infected individuals at our centre, and patients and their families receive counselling on maintaining an adequate healthy diet and on nutritional care. It is also emphasized that these patients should take the daily recommended amounts of relevant micronutrients through diet or supplements.
In conclusion, malnutrition and serum Zn and Se deficiencies are common in Iranian HIV-infected patients, and early evaluation of the nutritional status of these subjects and provision of appropriate nutritional support and mineral supplementation along with specific anti-retroviral treatment are recommended.
Risk factors associated with the occurrence of Brucella canis seropositivity in dogs within selected provinces of South Africa
The growing population of free-roaming dogs in informal communities in South Africa may increasingly place humans at risk of zoonotic infections including, but not limited to, Brucella canis. Worldwide, the prevalence of B. canis infection has increased during the last two centuries, resulting in increased reports of dog and human infections. This study investigated the risk factors associated with B. canis infection in dogs in three predefined areas of South Africa: the Gauteng, Eastern Cape and Western Cape provinces. Dogs aged 7 months and older presented to welfare organisations and breeders in the study areas were selected for sampling. A comprehensive questionnaire on dog ownership, general health and vaccination status was completed prior to sampling. One blood sample of 8 mL was collected aseptically per dog, and equal amounts (4 mL) were transferred to the different vacutainer tubes. The 2-mercaptoethanol-tube agglutination test was used after validation. Fifty-two dogs out of the combined sample of 1191 dogs from the three study areas tested positive for B. canis, representing an overall occurrence of 4.4%. A binomial logistic regression model was fitted to identify risk factors associated with B. canis in dogs within the study areas. Dog age (0.371; p < 0.05) and external parasite infestation (0.311; p < 0.05) were significantly associated with B. canis infection. Ownership and sterilisation need to be further investigated as possible risk factors because they had odds ratios of 1684 and 1107, respectively, in the univariate model.
Introduction
For thousands of years, dogs have been people's closest friends and companions, depending almost entirely on humans for food, shelter and care (Wang et al. 2016). The emergence of rural and peri-urban slums and informal communities has resulted in increased urban and peri-urban dog populations and scavengers (Marzetti et al. 2013). The presence of stray and uncontrolled dogs originating from different locations in South Africa, and possibly from neighbouring countries, together with poor levels of biosecurity and hygiene and a relative lack of infrastructure in informal settlements, facilitates infection of dogs with pathogens, including Brucella spp. Katona and Katona-Apte (2008) noted that malnutrition causes immunodeficiency worldwide, with groups such as infants, children, adolescents and the elderly most affected, emphasising the close relationship between malnutrition and primary infections in immune-compromised humans. Mason, Musgrove and Habicht (2003) indicated that the world's disease burden could be reduced by at least 32% if poverty and malnutrition were addressed. The burden of poverty is complicated by HIV and AIDS infections in poorer communities, with a resultant increase in the numbers of immune-compromised humans, a situation that can be aggravated by concurrent infection with Brucella canis.
According to the 2011 statistics of the City of Johannesburg Municipality, Gauteng (CJM-Gauteng), 124 075 informal dwellings (shacks in a backyard) and 125 748 informal dwellings (shacks not in a backyard but in an informal settlement or on a farm) exist around Johannesburg, and a huge number of children under the age of 18 live in informal dwellings (Statistics South Africa 2013). Based on the Johannesburg Community Survey conducted in 2007, 48% of households in informal dwellings have one or more children. The high number of children living in shacks points to the increased risk of exposure to possible infection with zoonotic diseases from stray or pet dogs.
Other contributory factors to the increased risk of zoonotic infection from dogs might be the low level of schooling among adults in these communities, who may not be aware of the existence of zoonotic diseases. According to the Housing Development Agency (2013), the proportion of dwellings classified as informal settlements gradually declined from approximately 20% in 2001 to 9% of total households by 2011 in the Nelson Mandela Bay Municipality, Eastern Cape. Informal dwellings in the Overstrand and Theewaterskloof municipalities in the Western Cape made up between 16.4% and 17.5% of total households as of 2014 (WCGPT 2015). Lucero et al. (2005) are of the view that human cases of infection with B. canis are probably underreported and could be more common than indicated in published reports. This is attributed to the difficulty of diagnosing B. canis infection in humans. Furthermore, Lucero et al. (2010) suggest that the risk of B. canis infection for people handling or in close contact with dogs, especially dogs kept in kennels, is higher than for people who do not come into close contact with dogs.
In the majority of reported human cases, B. canis infection is the direct result of exposure to whelping females, when high concentrations of Brucella organisms occur in the birth fluids and vaginal discharge (Kazmierchak 2012). The most common symptoms reported in humans (particularly young children) infected with B. canis include fever, diarrhoea, vomiting, headache, fatigue, myalgia and nausea, as well as clinical signs of endocarditis (Nomura et al. 2010). Cohabitation with dogs significantly heightens the risk of B. canis infection among household members, with urine being the most important source of infection.
Since the first report of B. canis in dogs in the United States (US) in 1966, the bacterium has been detected globally, presenting itself in various forms. Brucella canis has now been reported in the US, Canada, Central and South America, some European countries, Tunisia, Nigeria, South Africa, Madagascar, the Philippines, Malaysia, India, Korea, Japan, Taiwan and China (CFSPH 2018).
The risk factors and prevalence of B. canis in dogs and humans in South Africa to date are unknown, with only a few positive cases in dogs reported and no relevant targeted study has been conducted. Gous et al. (2005) believed that the two cases found in dogs in the Western Cape province, 5 months apart, might have been indicative of a higher prevalence of B. canis in South Africa than that recognised at the time. They proposed that B. canis might also be endemic in South Africa, with distribution possibly limited to the stray dog population in informal settlements (Gous et al. 2005).
Although cases of B. canis in dogs in South Africa have previously been reported by Van Helden (2012) in the towns of Bedford, Somerset West, Hermanus and Knysna (Gous et al. 2005), this is the first study in South Africa that investigated the occurrence and risk factors of B. canis in dogs in multiple provinces of South Africa. The need for a study such as this one is underscored by the fact that notwithstanding the reported isolated cases of B. canis in dogs, no study to date has investigated the occurrence of B. canis in humans in South Africa.
Therefore, the objective of this study was to investigate the risk factors associated with the occurrence of B. canis among dogs originating from different communities in three selected provinces of South Africa to provide empirical evidence to guide informed decisions on the future control and prevention of B. canis by the regulatory authorities.
Materials and methods
The study was conducted in selected locations of the City of Johannesburg Municipality-Gauteng (CJM-Gauteng), the Port Elizabeth Municipality in Nelson Mandela Bay Metropolitan (NMBM), the Eastern Cape province and Theewaterskloof and Overstrand (T&O) municipalities in the Western Cape province, South Africa. The CJM-Gauteng, including all its surrounding cities and districts, is by far the biggest metropolitan area in the country, boosted significantly by the wide net of smaller cities that constitute the megacity. According to the City of Johannesburg 2011 census, the City of Johannesburg Local Municipality has a total population of 4.4 million of which 76.4% are black Africans, 12.3% are white people, 5.6% are mixed race people and 4.9% are Indians or Asians. There are 1 434 856 households in the municipality with an average household size of 2.8 persons per household. As there is a provision for a maximum of two dogs per household (City of Johannesburg 2011; Statistics South Africa 2013), the total maximum number of dogs expected in the study area is 284 106 × 2 = 568 212 dogs.
In NMBM, the study was conducted predominantly within the greater Port Elizabeth area, Eastern Cape, in collaboration with the state veterinarian and welfare organisations situated in the townships. The NMBM covers an area of 1958.91 km², with Port Elizabeth, one of the largest cities in South Africa, being located within the NMBM and the Algoa Bay region (ECSECC 2010). According to the Nelson Mandela Bay Integrated Development Plan 2011-2016, the metropole has a current population of 1.3 million, making it the fourth largest city in South Africa.
The Western Cape part of the study was conducted in two local municipalities in the Overberg region of the Western Cape, namely, the T&O municipalities. They cover an area of 4940 km 2 in the western interior of the Overberg region. Theewaterskloof has a total population of 117 167 (2016) and a total number of 33 118 households, 77.5% of which reside in permanent dwellings and approximately 7500 households staying in informal dwellings occupying the area between the Riviersonderend mountains to the north and the Kogelberg and Kleinrivier mountains to the south. The Overstrand Local Municipality (OLM) is the smallest of the four municipalities, which stretches over only 675 km² and has a total population of 93 407 and 35 718 households with an average household size of 2.6 persons per household.
During the presentation of dogs for welfare services or treatments, dogs were recruited into the study using simple random sampling, whereby each member of the subset has an equal probability of being chosen from the group. This method was adopted because the total population of dogs to be sampled was unknown in each location. Dogs older than 7 months were considered mature and thus sexually active.
Determination of the sample size was based on a population estimate of 11.1 humans per dog for rural areas (Rautenbach, Boomker & De Villiers 1991) and on the fact that the sample size for a large or theoretically infinite population is 323, estimated with a desired absolute precision of 5% at a 95% confidence level. Based on these guidelines, a sample size of 400 was used for each of the Western Cape, Eastern Cape and CJM-Gauteng.
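The infinite-population figure quoted above is consistent with Cochran's sample-size formula for a proportion; the expected prevalence of 30% used below is an assumption (the text does not state it), chosen because it reproduces the quoted 323.

```python
import math

def cochran_n(p: float, d: float = 0.05, z: float = 1.96) -> int:
    """Sample size to estimate a proportion p with absolute precision d
    at 95% confidence, for a large (theoretically infinite) population:
    n = z^2 * p * (1 - p) / d^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# An assumed expected prevalence of 30% reproduces the 323 quoted in the text.
print(cochran_n(0.30))  # -> 323
```

With the conservative p = 0.5 the same formula gives 385, so the study's 400 per province comfortably exceeds either figure.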
A pretested three-part structured questionnaire was completed for each participant. The first section comprised demographic details of the dog owners. The second section comprised information about the dog, such as breed, dog size, geographic location, parasite burden and body condition. Body condition scoring (BCS) was performed using the five-point Hill's BCS method (Anon 2010). In the third section, all clinical signs associated with B. canis were recorded, together with vaccination history and sterilisation information. Detailed information regarding the history of origin of each animal was recorded for future reference.
Serum and heparinised blood samples (one sample from each dog; n = 1191) were collected aseptically, using an 8 mL syringe and a 0.5-inch 20-gauge needle, from the dog's cephalic vein. Each blood sample was divided on site equally into a serum tube without anticoagulant and a tube containing heparin. All blood samples were marked with a pre-allocated dog number, packed in a cooler box (≈4 °C) and transported to the University of South Africa (UNISA) Bacteriology Laboratory at the Florida Science Campus, the State Veterinary Office in Port Elizabeth or the Stellenbosch Veterinary Laboratory (SVL). At the laboratory, sterile blood samples from the red-top vacutainer were left for at least 3 h to ensure that clotting had taken place before being centrifuged at 2800 rpm for 15 min. The serum was then transferred, using a pipette, to a pre-marked 2 mL cryotube for overnight storage at -20 °C. Similarly, the heparinised blood samples were transferred directly into 2 mL pre-marked cryotubes and refrigerated overnight at -4 °C. The two samples were packed in sterile containers, sealed and marked before being transported, according to the World Organisation for Animal Health (Office International des Epizooties [OIE]) protocol for transporting infectious agents (OIE 2012), to the ARC-OVR in Pretoria for analysis.
The 2-mercaptoethanol-tube agglutination test (2ME-TAT), recommended by the Department of Agriculture Forestry and Fisheries (DAFF), was first validated and then employed to test the samples for the presence of antibodies among positive reactors. For the determination of diagnostic sensitivity, the following samples were used: 3 reference sera and 28 proficiency test samples. From the results obtained, the sensitivity was found to be 100% (31/31). For the determination of diagnostic specificity, the following samples were used: 1 reference serum and 12 proficiency test samples. From the results obtained, the specificity was found to be 100% (13/13). For the determination of repeatability, two positive reference sera, one negative reference serum, one Synbiotics kit positive control sample and five proficiency samples were tested on three occasions and there was 100% agreement for all results. The 2ME-TAT was performed at the ARC-OVR bacteriological and serological laboratory with the following serial number: # 17-1202; the controls (high positive, medium positive and negative controls) had the following serial numbers: # 212-H 0601 (high), # 212-M 0402 (medium) and 212-N 0402 (negative).
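The sensitivity and specificity figures quoted above reduce to simple proportions of correctly classified reference and proficiency samples. A minimal sketch (the function names are ours, for illustration only):

```python
def sensitivity(true_pos, false_neg):
    # Proportion of known-positive samples the test correctly detects.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Proportion of known-negative samples the test correctly clears.
    return true_neg / (true_neg + false_pos)

# 2ME-TAT validation counts reported above: 31/31 positive and
# 13/13 negative samples were classified correctly.
print(sensitivity(31, 0))   # 1.0, i.e. 100%
print(specificity(13, 0))   # 1.0, i.e. 100%
```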
Culture of positive reactors was performed at the SVL. All negative cultures were sub-cultured at intervals to confirm the presence or absence of any bacteria. A positive B. canis was used as a control.
The data were analysed using IBM SPSS Statistics version 22. Descriptive statistics were summarised and presented as frequencies, percentages and means. Pearson's chi-square test was used to examine associations between categorical explanatory variables and the outcome variable (disease status of the dog). In cases where the variables involved exceeded two categories, pairwise comparisons were performed and the Bonferroni method was used to adjust the p-value and lower the chance of making Type 1 errors (Sedgwick 2012). For continuous explanatory variables such as BCS, the one-way analysis of variance (ANOVA) method was used and, where associations were found, Tukey's honestly significant difference (HSD) test was performed to identify exactly where the statistically significant differences existed. A logistic regression model was fitted to the data to identify risk factors associated with B. canis in dogs within the study areas. Variables for inclusion in the multivariable logistic regression were determined by first fitting the univariable model. A less strict p-value of 0.25 was set as the cut-off point for selection of variables from the univariable analysis to include in the multivariable model, as the more commonly used level of 0.05 can exclude variables that are known to be important.
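The analysis itself was run in IBM SPSS, but the core screening logic is straightforward to sketch. Below is a stdlib-only illustration (the function name is ours): a Pearson chi-square for a 2x2 ownership-by-disease-status table using the counts reported later in the Results, followed by the paper's p <= 0.25 inclusion rule. Without a continuity correction the statistic comes out slightly above the reported 4.010; SPSS output depends on correction settings.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) and
    p-value (1 d.f.) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # chi-square(1 d.f.) survival function
    return stat, p

# Counts from the Results: 44/858 household dogs and 8/333 stray dogs positive.
stat, p = chi2_2x2(44, 814, 8, 325)

# Screening rule: a variable enters the multivariable logistic model
# only if its univariable p-value is <= 0.25.
print(stat, p, p <= 0.25)
```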
Ethical considerations
Ethical clearance to conduct this study was obtained in 2014 from the University of South Africa, College of Agriculture and Environmental Sciences ethics committee before the study commenced (Ethical clearance number: 2014/CAES/061). In addition, Section 20 approval to conduct this study was obtained.
Results
The analysis was based on a sample of dogs (n = 1191) drawn from the three study areas (Table 1). The overall mean age of the 1191 dogs sampled was 38 months. However, there were notable differences in the average age of dogs, with CJM-Gauteng having the youngest mean age of 29.9 months compared to 43.4 months for NMBM and 40.6 months for T&O (Table 1).
Fifty-two dogs out of the combined sample of 1191 dogs from the three study areas tested positive for B. canis, representing an overall occurrence of 4.4%. The proportion of positive cases was highest in the NMBM study area (9.8%; n = 39/400), compared to T&O (2.0%; n = 8/400) and CJM-Gauteng (1.3%; n = 5/391) (Table 1). The percentage of dogs that tested positive for B. canis was 5.1% (n = 44/858) among household dogs, compared to 2.4% (n = 8/333) among stray dogs. The chi-square test results showed that B. canis occurrence was significantly higher among household dogs compared to stray dogs (chi-square = 4.010, p = 0.04). Likewise, infestation with external parasites was significantly associated (chi-square = 279.459, p = 0.00) with the occurrence of B. canis in dogs, with dogs that had severe infestation having a higher proportion of positive cases (11.1%) compared to those with slight infestation (7.0%), mild infestation (3.8%) and no infestation (2.0%).
Both male and female dogs had almost similar levels of occurrence (4.6% vs. 4.2%). Medium-sized dogs had a higher occurrence (6.2%) compared to small dogs (2.6%) and large dogs (2.2%). Household dogs had a higher occurrence (5.1%) compared to stray dogs (2.4%). Sterilisation status appeared not to influence the occurrence of B. canis (Table 1). Dogs with poor body conditions had a higher proportion of B. canis infection (5.9%) compared to others, and dogs with reproductive conditions (9.3%) had a higher positive number compared to those with joint illnesses (4.3%), no conditions (4.2%) and other conditions (1.9%).
From the three study areas, 62.6% (n = 746/1191) of the dogs were classified as having an acceptable BCS (score of 3). The remaining dogs were split almost equally between a BCS of 4 or 5 (18.8%; n = 224/1191) and a BCS considered poor (1 or 2), which was the smallest group (18.6%; n = 221/1191) (Table 1).
Altogether, most dogs (89%; n = 1059/1191) included in the study did not show any clinical signs; 5% (n = 60/1191) showed reproduction-related clinical signs, 2% (n = 24/1191) showed joint problem signs and 4% (n = 48/1191) showed other related signs. About half (51%; n = 607/1191) of the dogs from the three study areas did not have any external parasites. However, 39% (n = 469/1191) of the dogs had a slight infestation, followed by 7% (n = 79/1191) with a mild infestation. Only 3% (n = 36/1191) were severely infested with external parasites. Based on the initial univariable analysis (Table 2), variables shown to be significant predictors of B. canis on the generous Wald chi-square statistic (p ≤ 0.25) were selected for inclusion in the multivariable model: age (Wald chi-square = 15.113, p = 0.00), external parasites (Wald chi-square = 14.725, p = 0.00) and ownership (Wald chi-square = 3.834, p = 0.05) (Table 2). The other three variables (clinical signs, sex and sterilisation) fell short of the cut-off p-value of 0.25 and were therefore excluded from the next step.
The model was checked for significance and was statistically significant (chi-square = 31.11, p = 0.00). The Nagelkerke R 2 indicated that the model explained 8.6% of the variance in the occurrence of B. canis in dogs. The Hosmer-Lemeshow test was used to determine whether the model fitted the data well; a p-value greater than 0.05 indicates a good fit, which was the case here (chi-square = 1.970, p = 0.85). The classification statistics estimated the logistic regression model to give an accurate prediction on B. canis outcomes about 96% of the time.
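Nagelkerke's R 2 rescales the Cox-Snell R 2 so that its maximum attainable value is 1. A minimal sketch follows; the log-likelihood values are illustrative (not taken from the paper), chosen so the likelihood-ratio statistic 2*(ll_model - ll_null) is about 29.2.

```python
import math

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke's R^2: Cox-Snell R^2 rescaled so its maximum
    attainable value (reached by a perfectly fitting model) is 1."""
    cox_snell = 1 - math.exp(2 / n * (ll_null - ll_model))
    max_cox_snell = 1 - math.exp(2 / n * ll_null)
    return cox_snell / max_cox_snell

# Illustrative (assumed) log-likelihoods for a sample of n = 1191.
r2 = nagelkerke_r2(-214.0, -199.4, 1191)
print(round(r2, 3))
```

With these assumed inputs the function returns a value close to the 0.080 reported for the final model below.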
The contribution of each of the three predictor variables to the model was computed. Ownership (Wald chi-square = 1.727, p = 0.19) did not contribute significantly to the prediction of B. canis among the dogs, while age and external parasites were both shown to be significant risk factors (p < 0.05) (Appendix 1). Because the ownership variable (categorised as household dogs against stray dogs) was indicated as 'not a significant contributor' in the model, it was removed from the list of covariates and the logistic model was re-computed. The final re-run model remained significant (chi-square 29.198, p = 0.00) and the Hosmer-Lemeshow test showed a good fit (p = 0.99). The Nagelkerke R 2 value of 0.080 also remained essentially unchanged, indicating that the model explained about 8% of the variance in B. canis among the dogs (Table 3). The two variables retained in the final model were age and external parasites, as shown in Table 3. The Exp (B) coefficient represents the adjusted odds ratio and estimates the change in the odds of B. canis positive results in the dogs for a unit change in the respective predictor variable.
The age of the dog was a significant risk factor for the occurrence of B. canis (Wald's chi-square = 11.628, p = 0.00). The indicated Exp (B) value of 0.371 shows that dogs aged 36 months or younger had 63% less risk of testing positive for B. canis. Conversely, this means that dogs older than 36 months were 2.7 times more likely to test positive for B. canis. External parasites were a significant risk factor for B. canis (Wald's chi-square = 11.999, p = 0.00). The indicated Exp (B) value of 0.311 implies that dogs with no external parasites had 69% less risk of testing positive for B. canis. Conversely, dogs with external parasites were 3.2 times more likely to test positive for B. canis (Table 3).
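The "63% less" and "2.7 times more likely" figures are two views of the same Exp (B): 1 - OR gives the proportional reduction in odds, and 1/OR gives the odds ratio with the reference category flipped. (Strictly these are odds, which approximate risk only because the outcome is rare, at 4.4%.) The arithmetic can be checked directly:

```python
# Reported Exp(B) values from Table 3; labels are the reference groups.
for label, odds_ratio in [("age <= 36 months", 0.371),
                          ("no external parasites", 0.311)]:
    reduction = (1 - odds_ratio) * 100   # percent lower odds
    inverted = 1 / odds_ratio            # OR for the converse group
    print(f"{label}: {reduction:.0f}% lower odds, inverse OR {inverted:.1f}")
```

This reproduces the 63%/2.7 and 69%/3.2 figures quoted in the text.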
Discussion
In this study, we have described the occurrence and risk factors for the detection of B. canis in selected provinces in South Africa. Six risk factors were tested: the age of dogs, presence of external parasites, ownership of dogs recruited into the study, clinical signs at the time of presentation, sex of dogs and sterilisation status. Two variables, age and external parasites, were retained in the final model, and the other variables were dropped as insignificant.
Sexual activity and predominance are found to be associated with older dogs (Trisko & Smuts 2015) and in a situation where dogs mate with infected dogs, they carry the risk of becoming infected with B. canis. Because older dogs also cover a larger roaming area, the risk of environmental contamination from mating and coming into contact with infectious material over a wider spatial dimension may aggravate the occurrence in older dogs.
Globally, the relationship between B. canis infection and age has been established in other studies. In Iran, a serological survey for B. canis conducted among 102 companion dogs in Ahvaz during 2006-2008 revealed that the group of dogs older than 5 years had a B. canis prevalence of 9.3% compared to 1.69% in the younger group (Mosallanejad et al. 2009). Although the difference was not significant, it clearly indicated that higher age is associated with a higher probability of infection. According to Alfattli (2016), the first documented study of B. canis in Iraq revealed differences in prevalence for B. canis in dogs using three different tests: the rapid test kit, indirect enzyme-linked immunosorbent assay (ELISA) and 16S rDNA inter-spacer polymerase chain reaction (PCR) techniques. They observed 4.8% for dogs < 1 year, 5.36% for dogs between 1 and 4 years and 11.48% for dogs > 4 years of age.
A serological study by Ayoola et al. (2016) between 2011 and 2014 in the Lagos and Ogun states of Nigeria among hunting and stray dogs similarly revealed age to be a significant factor playing an important role in B. canis infection. The study found that dogs older than 3 years were more than six times more likely to be seropositive for antibodies for B. canis than dogs younger than 3 years. The reason for this as explained earlier is that older dogs are more likely to get exposed to infected material and/or other infected dogs for a longer period, and hence they have an increased risk of getting infected.
Talukder, Samad and Rahman (2012) conducted a seroprevalence study on 30 stray dogs from the Mymensingh Municipal Corporation area in Bangladesh. The study revealed that the prevalence of B. canis in younger dogs up to 7 months of age was consistently 0%, while in the older dogs the results were 14.81%, 7.40%, 7.40% and 11.11% using the Rose Bengal plate test (RBPT), serum agglutination test (SAT), TAT and ELISA tests, respectively. All dogs up to 6 months of age were found to be seronegative because they were not sexually mature and active yet. Anyaoha (2015) revealed that in an earlier study in Nigeria, a seroprevalence of 3.4% in dogs younger than 1 year was reported, while a seroprevalence of 10.1% was reported in dogs 1-3 years old and 15.7% was reported in dogs above 3 years of age.

Although ownership of dogs did not reach significance in the final analysis, in the univariable model it was observed that ownership of dogs was a positive predictor for the occurrence of B. canis (OR: 1.684; p = 0.05). In addition, previous works have found that ownership of dogs predicted the occurrence of B. canis, in that the more stray dogs present, the higher the incidence of B. canis (Chikweto et al. 2013). However, these findings must be interpreted with caution because, while in former studies 'ownership' referred to individuals who presented their dogs at the clinics, or those who regularly provided medical care and management for their dogs, it is doubtful if the same definition can be applied in the present study. It should be noted that samples were obtained during the implementation of low-cost welfare services provided to the communities. In such instances, economically impoverished individuals will more likely utilise those services as an opportunity to attend to the health of their animals. This may be a reason for the observed difference in this study compared to the others conducted previously.
Conclusion
Although this study indicated dog age and, to a lesser extent, external parasite infection as the only risk factors for B. canis infection in dogs in the study area, based on the findings of the univariable analysis, other possible risk factors such as dog ownership and sterilisation need further investigation, especially in the informal sector of South Africa. The confirmation of the prevalence of B. canis in dogs in the study area has implications for humans. The prevalence of the disease in humans in South Africa is unknown and is in need of more attention and future research. Brucella canis is a zoonotic disease that affects children mostly and as such dog owners need to become more responsible and to be aware of the risk factors that contribute to possible B. canis infection in order to prevent and/or reduce human infections. Efforts from the veterinary authorities and welfare agencies should be intensified to reduce the number of free-roaming dogs in informal settlements. This will help minimise contact between infected dogs and the susceptible dog population. There is a need for public health authorities to educate the public on hygienic practices for people owning a pet dog, in particular children. Spaying or castration of dogs entering sexual maturity should be promoted to assist in reducing the burden of canine brucellosis.
Negative Influences of Differentiated Empowering Leadership on Team Members’ Helping Behaviors: The Mediating Effects of Envy and Contempt
Purpose: Given the popularity of empowerment practices among scholars and practitioners, this research examines whether a manager’s differentiated empowering leadership negatively affects team members’ helping behaviors and, if so, how.
Methods: The authors conducted one multi-source and time-lagged survey (with 44 managers and 212 team members) and two scenario-based experiments (with 120 participants in Study 2 and 121 participants in Study 3) to test the research model.
Results: Team managers’ differentiated empowering leadership decreases team members’ helping behaviors. In particular, for team members who receive less empowerment, differentiated empowering leadership may decrease their helping behaviors by eliciting their envy. For team members who receive more empowerment, differentiated empowering leadership may decrease their helping behaviors by inducing their contempt.
Conclusion: This research introduces the concept of differentiated empowering leadership in response to calls to investigate the dark side of empowering leadership. It reveals that unequal distribution of authority among team members by managers can undermine employee relations and elicit negative emotions of envy and contempt, thereby decreasing employees’ helping behaviors.
Introduction
With increasingly fierce competition in the external business world, 1 managers cannot quickly or effectively cope with the management challenge by relying only on their own knowledge, skills, and experience. Accordingly, there is a growing awareness that managers should increase their companies' effectiveness and flexibility by empowering their subordinates. 2 In their leadership development programs, companies such as Google, Microsoft, and LinkedIn have long trained their managers on empowerment. Given this trend among practitioners, organizational behavior and organizational psychology researchers are paying greater attention to the empowering behaviors of managers (ie, empowering leadership) and have conducted dozens of studies in this area. 2 Empowering leadership is defined as managers' behaviors in fostering employee autonomy, authority, and self-responsibilities. As this research stream progresses, a group of influential scholars have found two main gaps that require further research. First, researchers should further consider the dark side of empowering leadership. 2,3 To date, the majority of studies have focused on the positive effects of empowering leadership. For instance, research has found that managers' empowering behaviors can increase employee performance, 4 as well as extra-role behaviors. 5 However, increasing evidence suggests that empowering leadership is not always beneficial in workplace contexts. 6,7 Hence, Cheong et al (2019) have called for closer examination of whether and how managers' empowering behaviors might cause outcomes that are less positive or even negative. 2 Second, research on the configural properties of empowering leadership should be carried out in the team context. 
8 Given the popularity of team-oriented work structure in the workplace, 9 the growing but still limited body of research on empowering leadership in that context has focused exclusively on the influence of shared empowering leadership on employee attitudes and behaviors. 2 In a team, shared empowering leadership involves the team manager treating team members equally and granting each of them a similar degree of autonomy, authority, and support. 10 However, a manager is more likely to empower different team members differently than to empower without distinction. 11 Research in connection with variation in managers' empowering behaviors toward different team members is beginning to emerge, but remains insufficient. In theory building and empirical studies, scholars of organizational psychology are seeking to complement research based on the perspective of shared team properties (reflecting experiences, attitudes, perceptions, or behaviors that are held in common by all team members, ie, shared empowering leadership) with an emphasis on configural team properties (reflecting the array, pattern, or variability of individual experiences, cognitions, and behaviors within a team, ie, differentiated empowering leadership). 12 Accordingly, to obtain a more thorough understanding of the impact of empowering leadership, we should examine the effects of differentiated empowering leadership.
We find that team members' helping behaviors are important outcomes that may be affected by managers' differentiated empowering leadership. Helping behaviors among team members contribute to the maintenance of good interpersonal relationships, which are beneficial for team operation and effectiveness. 13 Recently, researchers have started to explore the effects of shared empowering leadership on helping behavior. 10 However, our knowledge of this relationship is far from complete, and further studies are required. 10 One way to advance our understanding of the relationship is to investigate whether and how differentiated empowering leadership, which is different from shared empowering leadership in that it is based on the perspective of configural team properties, can affect helping behaviors. Specifically, drawing on social comparison theory and the literature regarding envy and contempt, we propose that the underlying mechanism that links differentiated empowering leadership and team members' helping behaviors is negative emotion (in particular, envy and contempt). Managers can affect employees' behaviors by influencing their emotions. 14,15 Variation in a team manager's empowerment toward different team members can induce intense feelings of emotion, affecting team members' intention to help others. In summary, we seek to address the issues mentioned above by exploring the following research questions: RQ1. Can differentiated empowering leadership by team managers negatively influence team members' helping behaviors?
RQ2. Are emotions (ie, envy and contempt) the underlying mediating mechanisms that link differentiated empowering leadership and team members' helping behaviors?
Our findings contribute to research on empowering leadership and will be of value to practitioners using empowerment. First, this study contributes to the literature on the unintended influences of empowering leadership by exploring whether and how managers' differentiated empowering leadership can decrease team members' helping behaviors. Thus, a better and more thorough understanding of the impact of empowering leadership is obtained. Second, this study enriches the research on the underlying mechanism of the effects of empowering leadership by clarifying the mediating roles of negative emotions (here, envy and contempt) in linking differentiated empowering leadership and team members' helping behaviors. Third, this study clarifies how empowering leadership works within a team by using the configural team property perspective to construct an idea of differentiated empowering leadership. Last, the findings of this study provide insights for managers on how to empower their members in the team context.
The remainder of this paper is structured as follows. The second section delineates the rationale we used to develop our hypotheses. The third section presents the empirical testing of our hypotheses. The fourth section draws conclusions from the research findings, discusses their theoretical and practical implications, and notes their limitations. Figure 1 illustrates our overall research model.
Theory and Hypotheses Differentiated Empowering Leadership and Social Comparison Theory
Before we frame our hypotheses, it is necessary to define differentiated empowering leadership. In line with previous research, 8,16,17 we define it as the extent to which a team manager exhibits varying levels of empowering behaviors toward team members. Differentiated empowering leadership is high when managers distribute power, autonomy, and authority unequally among their team members. In contrast, differentiated empowering leadership is low when managers empower their team members in an identical and equal way. According to leader-member exchange theory, a team manager can develop differentiated exchange relationships (high vs low quality) with each team member by giving varying amounts of authority and support. 18 Social comparison theory suggests that people have an innate motivation to draw comparisons with similar others in order to evaluate themselves or reduce uncertainty. 19 Depending on the target of comparison, social comparison can be classified as upward 20 or downward. 21 Upward social comparison is comparison with those considered to be superior on a given characteristic, whereas downward social comparison is comparison with those considered to be inferior on a specific characteristic. 20,21 If a team manager distributes authority, such as decision-making and work autonomy, among team members in a highly unequal way (ie, creates a situation with highly differentiated empowering leadership), each member's status within the team may be differentiated. Some team members will receive abundant empowerment, while the others will receive little empowerment. Team members with more empowerment then become the insiders of the team manager and are in a superior position within the team. 8 Team members with less empowerment are considered the outsiders of the team manager and hold inferior positions in the team. 8
Direct Effect of Differentiated Empowering Leadership on Helping Behavior
We propose that managers' differentiated empowering leadership has a negative direct effect on team members' helping behaviors for two reasons. First, according to social comparison theory, individuals tend to define themselves through comparison with other individuals. Team members who receive more empowerment can be considered (or self-classified) as insiders of their team manager. In contrast, team members with less empowerment can be considered (or self-classified) as outsiders. This faultline created by the team manager's high degree of differentiated empowerment can cause tension between team members. 22 Prior research has indicated that leader-member exchange (LMX) differentiation, which is similar to differentiated empowering leadership, can induce conflict between insiders and outsiders. 23 Second, balance theory proposes that the contrasting quality of relationships between different manager-member dyads can cause a deterioration in member-member relationships. 24 Chiniara and Bentein (2018) found that LMX differentiation can have a negative effect on team cohesion. 25 As team members' relationships become estranged, they may refuse to help each other.
Although no prior empirical research has investigated the relationship between differentiated empowering leadership and helping behaviors, studies on similar concepts provide evidence that supports our argument. For example, despotic leaders, who have a great degree of authority, may be more likely to empower each team member differently. Zhou et al (2021) found that despotic leadership negatively affects employees' job satisfaction, and employees who are not satisfied with their jobs are less likely to help their colleagues. 26 Chen and Zhang (2021) noted that LMX relational separation can reduce employees' intentions to carry out altruistic behaviors such as helping. 27 Thus, we propose the following hypothesis: Hypothesis 1: Team managers' differentiated empowering leadership has a negative effect on team members' helping behaviors.
We next seek to clarify the underlying mechanisms for this phenomenon. Integrating social comparison theory with the concepts of envy and contempt, we speculate that differentiated empowering leadership can induce nuanced and different negative emotions among team members according to the extent of the empowerment they receive from their team managers. Specifically, differentiated empowering leadership can elicit feelings of envy on the part of team members who receive less empowerment. In contrast, differentiated empowering leadership can induce contempt among team members who receive more empowerment. With the combination of these mechanisms, both emotions, envy and contempt, can reduce the intention of each team member to help the others. We articulate these two mediation mechanisms in the following sections.
The Mediating Role of Envy Among Team Members with Less Empowerment
Drawing on social comparison theory and the literature on envy, we argue that, among team members with less empowerment, envy can mediate the relationship between managers' differentiated empowering leadership and team members' helping behaviors. Envy is a typical emotional reaction elicited by social comparison, especially upward social comparison with a superior target in a domain that one values. 28 The occurrence of envy depends on the following two conditions: First, two persons are similar or close to each other; Second, one person has something that the other person values but does not have. 29 On this basis, we argue that team members who have less empowerment are likely to experience envy resulting from upward social comparison with team members who have more empowerment. As part of a team, members interact frequently and should be considered nominally equal in status; hence, they are similar and comparable. Moreover, in the workplace, the authority that is distributed by team managers is something that all of the team members care about and is often used as a basis for comparison.
Thus, team members with less empowerment may envy those who have more opportunities for participative decision-making and greater work autonomy. In teams with a high degree of differentiated empowering leadership, team members with less empowerment are likely to feel envy because of the substantial gap in authority and status between themselves and other members. Pelled (1996) observed that group members may experience unpleasant feelings because of the social comparison process. 30 More directly, Lee (2001) argued that subordinates who have poor exchange relationships with their supervisors will be jealous of colleagues who have good exchange relationships with their supervisors. 31 Hence, we propose the following hypothesis: Hypothesis 2a: For team members with less empowerment, differentiated empowering leadership is positively associated with their emotion of envy.
We further argue that team members who are envious may further suppress their intention to help others. Envy is a painful feeling of emotion. 32 Cohen-Charash and Muller (2007) suggested that people who experience envy tend to ease their emotion by narrowing the gap between themselves and the person they envy. 29 Scholars have pointed out that one way to equalize status is to hurt the envied person through counterproductive work behaviors, 32 or by undermining them. 33 However, engaging in malicious harm is highly risky and may be punished formally (eg, with a low performance rating) or informally (eg, through shunning or retaliation) by the organization and its managers. 34 As a result, an envious team member may be more inclined to choose a way to equalize status that their organization and managers will not easily detect. One such way is refusing to help those who are envied.
In a team context, there is a high level of task interdependency among team members. 35 If an envious team member chooses not to help a team member he/she envies, this may lead to poor performance from both parties. Nevertheless, the envious member may succeed in decreasing the envy-provoking advantage that the envied team member has, and this will help to ease the experience of envy. In this connection, empirical research indicates that envy can inhibit the helping behaviors of employees. For example, Kim et al (2010) found that employees with strong feeling of envy decreased their organizational citizenship behaviors. 36 Sun et al (2021) found that envious employees were less likely to help the coworkers they envied. 37 Hence, we predict that envy has a negative effect on team members' helping behaviors: the stronger the feeling of envy, the fewer helping behaviors they will conduct.
Integrating these considerations with Hypotheses 1 and 2a, we propose the following indirect effect hypothesis: Hypothesis 2b: For team members with less empowerment, envy mediates the negative effects of differentiated empowering leadership on helping behaviors.
The Mediating Role of Contempt Among Team Members with More Empowerment
Drawing on social comparison theory, we propose that team members who have more empowerment are inclined to experience emotions of contempt resulting from downward social comparison with team members who have less empowerment. Contempt is an emotional experience that implies disdain for and social exclusion of another person or group. 38 It is a typical emotional reaction after downward social comparison. 39 As team members with less empowerment have less authority, such as decision-making or work autonomy, they are considered the outsiders of the team managers. Drawing on the dynamic social model of contempt, team members who have more empowerment may experience a sense of superiority, leading them to despise team members who have less empowerment. 40 Accordingly, the large gap in power and status caused by a high degree of differentiated empowering leadership is likely to induce emotions of contempt among members with more empowerment toward those with less empowerment. Sias and Jablin (1995) argued that high-status members disrespect low-status members in groups with high levels of LMX variability, 41 and Tse et al (2013) found empirical support for this argument. 42 Hence, we propose the following hypothesis: Hypothesis 3a: For team members with more empowerment, differentiated empowering leadership is positively associated with their emotion of contempt.
We argue that team members who experience feelings of contempt may suppress their intention to help their disrespected colleagues for the following reasons. First, once an individual has formed feelings of contempt toward someone else, they will try to distance themselves from that person or even exclude them from their social network. 40 Although task interdependence makes it impossible for team members to isolate someone else within the team completely, the emotion of contempt can motivate them to reduce interpersonal interaction with the target of the contempt. 38 Consequently, for team members who have more empowerment, the emotion of contempt may reduce their interpersonal interactions with and intention to assist team members who have less empowerment.
Second, research on the motivational effect of helping behaviors has demonstrated that gaining the favor of receivers of help and building good interpersonal relationships are two essential motivators. 43 Thus, because of their emotion of contempt, team members who have more empowerment have less motivation to help team members who have less empowerment. Hence, we speculate that contempt is negatively related to team members' helping behaviors: the stronger the feeling of contempt, the less helping behavior they will engage in. Schriber et al (2017) argued that individuals with dispositional contempt tend to be cold. 44 More directly, Tse et al (2013) found that employees who experience feelings of contempt were less likely to help their colleagues. 42 Integrating these considerations with Hypotheses 1 and 3a, we propose the following hypothesis: Hypothesis 3b: For team members with more empowerment, contempt mediates the negative effects of differentiated empowering leadership on helping behaviors.
Research Approach
We employed one field survey and two scenario experiments. In Study 1, a field survey with a multi-source and time-lagged design, we examined the main effect of team managers' differentiated empowering leadership on team members' helping behaviors (testing Hypothesis 1). The field survey is a method used extensively in organizational psychology and management research, and it has high external validity. 45 However, Study 1 did not test the underlying mediating mechanisms directly, and its design does not permit causal conclusions to be drawn. To address these limitations, we conducted two scenario-based experiments, a method that has high internal validity and can be used to test causal conclusions. Specifically, in Studies 2 and 3, we manipulated differentiated empowering leadership to explore its indirect effects on team members' helping behaviors by increasing the less empowered team members' emotion of envy (testing Hypotheses 2a and 2b) or by increasing the more empowered team members' emotion of contempt (testing Hypotheses 3a and 3b). See the Appendix for all of our research instruments.
Study 1 Participants and Design
We collected data from a large beverage chain corporation with stores in many provinces in China (mainly in the eastern region). Each store had an independent team with a team manager (the store manager) and several team members, which met the sample requirements of this study. With the approval of the company's executive manager, we invited 50 stores at random to participate in our survey. We emphasized the voluntary nature of their participation and guaranteed complete confidentiality to all participants. All surveys were administered electronically using mobile phones, and labeled IDs (eg, Leader 1 for the team manager of Team 1 and member 1-1 for a member of Team 1) were used to match the data.
We invited 262 employees and 50 managers from the 50 stores to complete our survey at Time 1. After two weeks, as some team managers and members did not respond at Time 2, the final sample consisted of 212 team members and 44 team managers from 44 stores. Of the 212 team members, 59.4% were female. Their mean age was 26.65 (SD = 7.38), and their average job tenure was 1.35 years (SD = 1.42). The majority (81.6%) had received senior high school or higher education. Of the 44 team managers, 54.5% were female. Their average age was 29.39 (SD = 5.53), and their average job tenure was 3.82 years (SD = 3.72). All of them had received senior high school or higher education.
Measures
The original English questionnaires were translated into Chinese using back-translation processes. 46 Unless otherwise indicated, we used 7-point Likert scales (1 = strongly disagree, 7 = strongly agree) to collect the responses to each item.
Differentiated empowering leadership. Differentiated empowering leadership is a configural team property. In line with Li et al (2015, 2017), 8,16 we evaluated it using the coefficient of variance (ie, by dividing the within-team standard deviation of empowering leadership by the mean score of empowering leadership of all the members). This measurement was used by Wu et al (2010) in their study of differentiated transformational leadership. 17 The team members rated their team manager's empowering leadership using six items from Chen and Aryee's (2007) delegation scale. 47 A sample item is "My team manager does not require that I get his/her input or approval before making decisions" (Cronbach's α = 0.9).
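The coefficient-of-variance measure described above is straightforward to compute; the following sketch is a minimal illustration (the function name and example ratings are ours, not from the study):

```python
import statistics

def differentiated_empowering_leadership(ratings):
    """Coefficient of variance of team members' empowering-leadership
    ratings: the within-team standard deviation divided by the team
    mean (one rating per member, e.g. the mean of the six scale items)."""
    return statistics.stdev(ratings) / statistics.mean(ratings)

# Two hypothetical four-member teams with the same mean rating (4.25):
uneven = [6.5, 6.0, 2.0, 2.5]  # manager empowers members very unequally
even = [4.2, 4.3, 4.2, 4.3]    # near-uniform empowerment
print(differentiated_empowering_leadership(uneven))  # ~0.55
print(differentiated_empowering_leadership(even))    # ~0.01
```

Holding the mean constant, the score rises with within-team dispersion, which is exactly what the construct is meant to capture.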
Helping behaviors. Team managers rated team members' helping behaviors using a 7-item scale developed by Podsakoff et al (1997). 48 A sample item is "Help each other out if someone falls behind in his/her work" (Cronbach's α = 0.89).
Control variables. Previous research has found that an individual's perception of empowering leadership could affect their helping behaviors. 5 We therefore controlled for perceived empowering leadership rated by team members using Chen and Aryee's (2007) 6-item scale. 47 In addition, in line with previous research, 16 we included age, gender, job tenure, and education level in the analyses as control variables.

Table 1 presents the means, standard deviations, and correlations of all the variables. Before the hypothesis testing, we carried out a collinearity test. The results showed that the variance inflation factor (VIF) scores of all the predictors were below 5, 49 indicating that collinearity is not an issue in this study. The nested structure of the data and the significant between-team variances in helping behavior (ICC(1) = 0.42) justified the use of hierarchical linear modeling in the analyses. 50 To handle missing data in Study 1, we used the mean imputation method. To facilitate the interpretation of the results, we grand-mean centered all the variables at the team level and group-mean centered all the variables at the individual level (with the exception of gender, a dummy variable). 51 Table 2 presents the findings of our analyses. The results for Model 2 suggest that differentiated empowering leadership had a negative effect on team members' helping behaviors (β = -0.23, p < 0.05), which supports Hypothesis 1.
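The two centering steps used in the analysis can be illustrated with a short Python sketch (the function names and data values are our own hypothetical illustration):

```python
import statistics

def grand_mean_center(team_scores):
    """Team-level variables: subtract the overall (grand) mean."""
    grand = statistics.mean(team_scores)
    return [s - grand for s in team_scores]

def group_mean_center(scores_by_team):
    """Individual-level variables: subtract each member's own team mean,
    so individual-level coefficients reflect within-team differences only."""
    return {
        team: [s - statistics.mean(scores) for s in scores]
        for team, scores in scores_by_team.items()
    }

teams = {"Team1": [5.0, 3.0, 4.0], "Team2": [6.0, 6.0, 6.0]}
print(group_mean_center(teams))
# {'Team1': [1.0, -1.0, 0.0], 'Team2': [0.0, 0.0, 0.0]}
```

Group-mean centering removes each team's average from its members' scores, which is the standard way to separate within-team from between-team variance in hierarchical linear models.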
Results
Study 1, a field survey with two-wave manager-member paired data from 212 team members and their 44 team managers, supports our hypothesized main effect that team managers' differentiated empowering leadership decreases team members' helping behaviors. However, Study 1 did not directly test the underlying mechanisms for that effect, and its design does not allow causal conclusions to be drawn. To address these limitations, we conducted two scenario-based experiments (Studies 2 and 3).
Study 2 Participants, Procedures, and Measures
The scenario-based experiments of Studies 2 and 3 were implemented via an online survey platform. In Study 2, a total of 130 participants with work experience were recruited via an online advertisement and were rewarded with 10 Chinese yuan (about 1.55 US dollars) for their participation. We employed a two-scenario design, and all participants were assigned at random to one of these two conditions: high versus low levels of differentiated empowering leadership. It is worth emphasizing that all the participants in Study 2 were manipulated as less empowered team members.
Participants read one of the following scenarios, which served as the manipulation of differentiated empowering leadership: Scenario 1 (high differentiated empowering leadership): Ms. Xu is the manager of your team, who likes to empower her team members. She often invites some specific members to discuss team issues, and they have more opportunities to participate in decision-making. Ms. Xu makes most of your team's work plans and strategies after consulting those specific members, and the opinions expressed by the remaining team members are not taken seriously. In addition, those specific members have high levels of work autonomy, such as how and when to complete their work. You belong to a group of members with relatively less empowerment. Scenario 2 (low differentiated empowering leadership): Ms. Xu is the manager of your team, who likes to empower her team members. She often invites all team members to discuss team issues. Although some specific team members have more opportunities to participate in the discussion, all members have decision-making authority to a certain extent. Ms. Xu makes most of your team's work plans and strategies after consulting all members. Although the opinions of some specific members are more valued, the voices of all members can be heard more or less. In addition, although some members have
more rights to determine their own work procedures, all team members have a certain level of work autonomy, such as how and when to complete your work. You belong to a group of members with relatively less empowerment.

Next, we asked the participants to answer the following questions. First, we asked them to evaluate their emotion of envy on a 9-item scale adapted from Cohen-Charash's (2009) episodic envy scale (sample item: "I have a grudge against team members with more empowerment"; 1 = strongly disagree, 9 = strongly agree; α = 0.87). 52 Second, the participants rated their intentions to help using six items that fit more closely with our experiment and were adapted from the classic scale developed by Williams and Anderson (1991) (sample item: "I will help them if those more empowered team members are absent"; 1 = strongly disagree, 7 = strongly agree; α = 0.9). 53 Third, we conducted a manipulation check using one item, "The authority our team manager gives to different members varies greatly" (1 = strongly disagree, 7 = strongly agree). The data for 10 participants were not included in the analyses because their response time was less than 1 minute, which indicated a lack of reliability. As a result, we retained a sample of 120 participants eligible for analysis, with 60 participants in each of the two scenarios.
We transformed the independent variable, namely differentiated empowering leadership, into a dummy variable (1 = high degree of differentiated empowering leadership, 0 = low degree of differentiated empowering leadership). We then used PROCESS software (Hayes, 2018) to test the mediation effect of Hypothesis 2b. 54 The results of our analysis indicate that envy played a mediating role in linking differentiated empowering leadership and helping behaviors (indirect effect = -0.24, 95% CI = [-0.55, -0.01]), which supports Hypothesis 2b. In summary, the results of Study 2 indicate that, for team members with less empowerment, differentiated empowering leadership reduced their intention to help team members who have more empowerment, and their emotion of envy mediated this negative effect.
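A percentile-bootstrap test of an indirect effect of this kind can be sketched in plain Python, as a simplified stand-in for the PROCESS macro (ordinary least squares, with Frisch-Waugh residualization for the partial slope; all names and data here are illustrative, not the study's code):

```python
import random
import statistics

def _slope(u, v):
    """OLS slope of v regressed on u."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return num / sum((a - mu) ** 2 for a in u)

def _resid(u, v):
    """Residuals of v after regressing v on u."""
    b = _slope(u, v)
    mu, mv = statistics.mean(u), statistics.mean(v)
    return [vi - (mv + b * (ui - mu)) for ui, vi in zip(u, v)]

def indirect_effect(x, m, y):
    """a*b for the simple mediation x -> m -> y: a is the slope of m on x,
    b is the partial slope of y on m controlling for x."""
    a = _slope(x, m)
    b = _slope(_resid(x, m), _resid(x, y))
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the indirect effect: resample cases
    with replacement, recompute a*b, and take the alpha/2 quantiles."""
    rng = random.Random(seed)
    n, boots = len(x), []
    while len(boots) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        try:
            boots.append(indirect_effect([x[i] for i in idx],
                                         [m[i] for i in idx],
                                         [y[i] for i in idx]))
        except ZeroDivisionError:  # degenerate resample (constant x or m)
            continue
    boots.sort()
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]
```

If the resulting percentile interval excludes zero, as the reported interval [-0.55, -0.01] does, the indirect effect is judged significant.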
Study 3 Participants, Procedures, and Measures
In a similar vein, in Study 3, we recruited 136 participants and assigned them at random to one of the two scenario experiments. However, all the participants in Study 3 were manipulated as team members with more empowerment. The procedures of Study 3 were similar to those of Study 2. Participants read the scenario for one of the two conditions, which served as the manipulation of differentiated empowering leadership. The two scenarios were almost the same as those used in Study 2, with the exception of the last sentence, which was rewritten as You belong to a group of members with relatively more empowerment.
As in Study 2, we then asked the participants to evaluate their contempt emotion and their intention to help. Participants first rated their emotion of contempt using eight items adapted from the contempt scale developed by Schriber et al (2017) (sample item: "I often feel like the less empowered team members are wasting my time"; 1 = strongly disagree, 5 = strongly agree; α = 0.74). 44 Second, participants evaluated their intention to help using the same scale as in Study 2 (sample item: "I will help them if those less empowered team members are absent"; 1 = strongly disagree, 7 = strongly agree; α = 0.87). Third, we conducted a manipulation check using the same item as we used in Study 2. We excluded 15 participants whose response times were less than 1 minute. As a result, we retained a sample of 121 participants eligible for analysis, with 61 participants in Scenario 1 and 60 in Scenario 2.
Results
Manipulation check. The t-test results showed that participants' perceived degree of differentiated empowering leadership in Scenario 1 was significantly higher than in Scenario 2. We used the same method to test Hypothesis 3b. The results showed that the mediation effect of contempt was significant (indirect effect = -0.16, 95% CI = [-0.41, -0.01]), which supports Hypothesis 3b. In summary, Study 3 showed that, for team members with more empowerment, differentiated empowering leadership reduced their intention to help those less empowered, and their contempt emotion mediated this negative effect.
Theoretical Implications
First, this study extends the literature on empowering leadership by adopting a holistic view of leadership. 55 The results demonstrate the negative effects of differentiated empowering leadership on helping behaviors, thereby supporting Hypothesis 1 and responding to calls for more attention to the dark side of empowering leadership. Although the majority of research on empowering leadership has focused on its positive outcomes, 3 researchers have started to explore its potential non-positive and even negative results to secure a more comprehensive understanding of its influences. 7,56 Specifically, drawing on social comparison theory, we demonstrate that differentiated empowering leadership has negative influences on helping behaviors. These results advance our understanding of the overall effects of empowering leadership and contribute to the burgeoning research on the unintended influences of a popular leadership style that is broadly seen as positive (eg, transformational leadership). 57 Although there is no prior research on the relationship between differentiated empowering leadership and helping behaviors, studies of similar concepts support our findings. For example, Chen and Zhang (2021) demonstrated that LMX relational separation can be detrimental to the altruistic behaviors of subordinates. 27 Similarly, Wang et al (2017) found that LMX differentiation can decrease employees' organizational citizenship behaviors (such as helping). 58

Second, our research demonstrates that for team members with low empowerment, envy is the underlying mechanism that links differentiated empowering leadership and helping behaviors. This finding supports Hypotheses 2a and 2b. Although no prior research has examined this particular mediating relationship, support for our findings can be found in a number of studies.
For example, Thompson et al (2018) found that supervisors' differentiation of subordinates elicits emotions of jealousy, 59 while Sun et al (2021) demonstrated that envious employees reduce their helping behaviors toward their coworkers. 37 We also confirmed that for team members with high empowerment, contempt mediates the negative effects of differentiated empowering leadership on helping behaviors, which supports Hypotheses 3a and 3b. Again, prior studies provide support for this conclusion. For example, Sias and Jablin (1995) argued that in groups with high levels of LMX variability, high-status members disrespect low-status members, 41 while Tse et al (2013) found that employees who experience feelings of contempt reduce their helping behaviors. 42 By integrating social comparison theory with the literature on envy and contempt, our research demonstrates that emotional mechanisms are essential in explaining the influence of empowering leadership on employees' outcomes.
Third, by adopting a configural view, our research answers the call to explore the influences of empowering leadership in the team context. 8 With team structures being used by a growing number of companies, scholars have begun to examine team-level empowering leadership more closely. 16 However, to date, most studies have simply grafted individual-level theories and research paradigms onto team-level research, ignoring the frequent interactions among team members that can impact their perceptions of managers' empowerment. 2,8 Here, however, we capture a more nuanced picture of the interactions between empowering managers and team members by adopting a configural team properties perspective. Specifically, we offer the new insight that the harmful effects of differentiated empowering leadership on team members' helping behaviors may be due to negative emotions aroused in the interaction processes.
Practical Implications
Team managers can benefit from the insights our research provides into how to empower team members. Managers should be very cautious about the empowerment strategies they employ within the team. Because of the intensity of their interactions, team members know each other very well. The extent of the empowerment of different members is far from a private matter between the manager and the target member; it is an open secret among the team members. The extent of the decision-making authority and work autonomy distributed by team managers is something that all team members care about and is often used as a basis for comparison. Our research suggests that if team managers empower different team members unequally, tensions among team members are likely. Members with less empowerment will envy members with more empowerment, while members with more empowerment will despise members with less empowerment. The accumulation of these negative emotions will split the team and further reduce team members' helping behaviors. Rasool et al (2021) found that a toxic workplace environment is harmful to employee engagement. 60 Harassment and ostracism, which can be induced by team members' emotions of envy or contempt, are typical characteristics of the toxic workplace environment. Therefore, team managers should empower their members equally.
Limitations and Future Directions
This study has several limitations. First, although we used a time-lagged and multi-source design to minimize common method variance, the cross-sectional design in Study 1 prevents us from making causal inferences. We addressed this limitation by conducting two scenario-based experiments (Studies 2 and 3) that allowed us to draw causal conclusions; we also encourage future research to employ longitudinal designs to validate our model. Second, we used two scenario experiments to examine the mediation effects of envy and contempt. Although the internal validity of our experiment design is good, we encourage future research to increase external validity by using the experience sampling method. Third, we focused on the direct and indirect effects of differentiated empowering leadership on team members' helping behaviors. We therefore encourage future research to explore the boundary conditions of our research framework. For example, it has been argued that emotional intelligence is good for social interaction. 61 Future research in the team context can therefore explore whether the negative effects of differentiated empowering leadership on members' helping behaviors are attenuated when managers have a high level of emotional intelligence.
Conclusions
To our knowledge, this is the first study drawing on the configural view to explore the negative effects of empowering leadership on helping behaviors. In line with our predictions, the results of a survey and two scenario experiments suggest that differentiated empowering leadership decreases team members' helping behaviors. Specifically, for team members who receive less empowerment, differentiated empowering leadership can decrease their helping behaviors by increasing their emotion of envy. For team members who receive more empowerment, differentiated empowering leadership can decrease their helping behaviors by increasing their emotion of contempt.
Ethical Statement
The study was conducted in accordance with the Declaration of Helsinki. Before commencing the data collection, the study was approved by the Research Project Ethical Review Committee of Social Sciences Faculty at Wuhan University. According to our research design, the study did not violate any legal regulations or standard ethical guidelines. We introduced our research purpose, goals, and plans to each participant and got their consent. Additionally, we emphasized that all the participants could reject any questions or withdraw from the study at any time. Lastly, their anonymity and confidentiality were assured.
The Ecology of Acidobacteria: Moving beyond Genes and Genomes
The phylum Acidobacteria is one of the most widespread and abundant on the planet, yet remarkably our knowledge of the role of these diverse organisms in the functioning of terrestrial ecosystems remains surprisingly rudimentary. This blatant knowledge gap stems to a large degree from the difficulties associated with the cultivation of these bacteria by classical means. Given the phylogenetic breadth of the Acidobacteria, which is similar to that of the metabolically diverse Proteobacteria, it is clear that detailed and functional descriptions of acidobacterial assemblages are necessary. Fortunately, recent advances are providing a glimpse into the ecology of members of the phylum Acidobacteria. These include novel cultivation and enrichment strategies, genomic characterization and analyses of metagenomic DNA from environmental samples. Here, we couple the data from these complementary approaches for a better understanding of their role in the environment, thereby providing some initial insights into the ecology of this important phylum. All cultured acidobacterial type species are heterotrophic, and members of subdivisions 1, 3, and 4 appear to be more versatile in carbohydrate utilization. Genomic and metagenomic data predict a number of ecologically relevant capabilities for some acidobacteria, including the ability to use nitrite as an N source, respond to soil macro- and micronutrients and to soil acidity, express multiple active transporters, degrade gellan gum, and produce exopolysaccharide (EPS). Although these predicted properties allude to a competitive lifestyle in soil, only very few of these predictions have been confirmed via physiological studies. The increased availability of genomic and physiological information, coupled to distribution data in field surveys and experiments, should direct future progress in unraveling the ecology of this important but still enigmatic phylum.
INTRODUCTION
Although the Acidobacteria were only recognized as a phylum relatively recently, their abundance across a range of ecosystems, especially soils, has demanded research into their ecology. 16S rRNA gene-based approaches as well as environmental shotgun metagenomic analyses have revealed that the Acidobacteria represent a highly diverse phylum resident in a wide range of habitats around the globe (Chow et al., 2002; Kuske et al., 2002; Gremion et al., 2003; Quaiser et al., 2003; Fierer et al., 2005; Stafford et al., 2005; Janssen, 2006; Sanguin et al., 2006; de Carcer et al., 2007; Kim et al., 2007; Singh et al., 2007; DeAngelis et al., 2009; Jesus et al., 2009; Kielak et al., 2009; Navarrete et al., 2010, 2013b; Zhang et al., 2014). However, despite their high abundance and diversity, we still have relatively little information regarding the actual activities and ecology of members of this phylum, a shortcoming that can be attributed to a large extent to the difficulties in cultivating the majority of acidobacteria and their poor coverage in bacterial culture collections (Bryant et al., 2007; Lee et al., 2008; da Rocha et al., 2009; Eichorst et al., 2011; Navarrete et al., 2013b). However, environmental surveys have provided insight into some of the environmental factors that may drive acidobacterial dynamics, such as pH and nutrients (Fierer et al., 2007; Jones et al., 2009; Lauber et al., 2009; Navarrete et al., 2013b).
In this review, we couple the complementary data coming from physiological, genomic and metagenomic studies to seek a better understanding of the role of Acidobacteria in the environment, thereby providing some initial insights into the ecology of this important phylum. We aim not only to give a more complete picture of the current knowledge of Acidobacteria, but also to provide a solid base for future experiments geared toward gaining a better understanding of the ecological roles played by members of this phylum.
HISTORY AND GENERAL INFORMATION ON THE PHYLUM Acidobacteria
The introduction of molecular biological strategies into microbial ecology over the past decades has yielded a new perspective on the breadth and vastness of microbial diversity. The phylum of the Acidobacteria is one of the bacterial lineages that has profited most from the cultivation-independent interrogation of environmental samples. Indeed, in the past two decades, this phylum has grown from being virtually unknown to being recognized as one of the most abundant and diverse on Earth. This phylum is particularly abundant in soil habitats, where it can represent up to 52% of the total bacterial community (Sait et al., 2002), averaging approximately 20% of the microbial community across diverse soil environments (Janssen, 2006).
Although 16S rRNA gene sequences related to the Acidobacteria were obtained as early as 1993 (Stackebrandt et al., 1993), it was only in 1997 that they were associated with sequences belonging to cultured members of the current Acidobacteria phylum. Based on phylogenetic analysis of 16S rRNA gene sequences, the Acidobacteria phylum grew from the originally described four to six subdivisions (Kuske et al., 1997; Ludwig et al., 1997; Barns et al., 1999, 2007) to eight subdivisions in 1998 (Hugenholtz et al., 1998), and in 2005 this number increased to 11 deeply branching and strongly supported subdivisions (Zimmermann et al., 2005). Currently there are 26 accepted subdivisions (Barns et al., 2007) in the Ribosomal Database Project. The first recognized strain and species of the phylum Acidobacteria was Acidobacterium capsulatum, obtained from an acid mine drainage in Japan (Kishimoto and Tano, 1987; Kishimoto et al., 1991; Abed et al., 2002). The second isolate belonging to this phylum, Holophaga foetida, was first described in 1994, but it was not initially recognized as related to Acidobacterium capsulatum. Instead, it was thought to belong to the phylum Proteobacteria (Liesack et al., 1994). A few years later, a closely related bacterium named Geothrix fermentans was isolated (Coates et al., 1999) and subsequently another closely related bacterium, Acanthopleuribacter pedis, the first acidobacterium obtained from a marine sample, was described (Fukunaga et al., 2008). Since these isolates were very distantly related to A. capsulatum, it was proposed that they should be included in a new class named Holophagae. Acidobacteriia and Holophagae are the only classes currently included in the most recent edition of Bergey's Manual (Thrash and Coates, 2014).
The vast majority of isolates cultivated to date are affiliated with acidobacteria subdivision 1 (Class Acidobacteriia). They are all heterotrophic, most species are aerobic or microaerophilic, and some species (Telmatobacter bradus, Acidobacterium capsulatum) are facultatively anaerobic (Pankratov et al., 2012). Members of subdivisions 3, 4, 8 (currently Class Holophagae), 10, and 23 are heterotrophic as well. Thermotomaculum (subdivision 10) and Thermoanaerobaculum (subdivision 23) are thermophilic anaerobic bacteria (Izumi et al., 2012; Losey et al., 2013). Chloracidobacterium thermophilum is photoheterotrophic (Bryant et al., 2007; Tank and Bryant, 2015) and Pyrinomonas methylaliphatogenes can consume H2 (Greening et al., 2015), both from subdivision 4. Subdivision 8 contains one aerobic (Acanthopleuribacter) and two strictly anaerobic isolates (Holophaga and Geothrix). There are reports of acidobacteria isolates belonging to subdivisions 2 and 6, but they still do not have valid taxonomic names (George et al., 2011; Parsley et al., 2011). Members of subdivisions 1 and 3 of the phylum Acidobacteria, together with thermophilic Thermoanaerobacter species, are capable of biosynthesizing membrane-spanning lipids (Damste et al., 2011). Currently, there are a total of 40 species belonging to 22 genera: eleven genera of subdivision 1, two of subdivision 3, four of subdivision 4, three of subdivision 8, one of subdivision 10, and one of subdivision 23 (Figure 1). In addition, there are the genome sequences of Koribacter and Solibacter, but there is little information on their physiology.
THE IMPACT OF NEW ISOLATION METHODS
Changes in the traditional methods for culturing bacteria from soils have significantly improved the isolation of Acidobacteria strains in recent years. These new strategies involve the use of relatively low concentrations of nutrients and non-traditional sources of carbon or complex polysaccharides (Sait et al., 2002; Pankratov et al., 2008; Eichorst et al., 2011), longer periods of incubation, the use of gellan gum as a solidifying agent (Sait et al., 2002), non-standard CO2 atmospheric conditions during incubation, the addition of quorum-signaling molecules, catalase, or cations (George et al., 2011; Navarrete et al., 2013a), the amendment of inhibitors of undesired organisms, and the amendment of growth media with environmental extracts (Foesel et al., 2013).
It is suggested that raising the CO2 concentration may not only better mimic the CO2 concentrations typically found in soils, but may also decrease medium pH, thereby benefiting certain members of the acidobacteria, especially moderately acidophilic strains belonging to subdivision 1 (Sait et al., 2006). This combination of strategies seems to enrich not only for Acidobacteria but for many other groups of slow-growing bacteria. The association of a molecular technique such as high-throughput plate-wash PCR or colony PCR (de Castro et al., 2013), both using phylum-specific 16S rRNA gene primers, has improved the screening and identification of colonies belonging to Acidobacteria subgroup 1. Once Acidobacteria isolation under low-nutrient conditions is achieved, strains can often be transferred to richer media (e.g., TSA and R2A) for more convenient propagation (de Castro et al., 2013). Despite the importance of these recent advances in cultivation methods, further improvements are clearly needed since only eight of a total of 26 subdivisions are known to have representatives in culture. A large number of Acidobacteria isolates have been recovered from the Australian Ellinbank soil (Sait et al., 2002; Davis et al., 2005). These studies were important during the development of new strategies for culturing soil bacteria, and two of these isolates, 'Koribacter versatilis' Ellin345 (CP000360) and 'Solibacter usitatus' Ellin6076 (CP000473), were used in the first Acidobacteria genome investigations (Ward et al., 2009). However, many of these bacteria have not yet been fully characterized and still do not possess valid taxonomic names. Further, micro-cultivation strategies combined with single-cell sequencing should provide access to new acidobacterial genomes, and in turn this genomic information may help to inform future isolation efforts, as cultivation is still required for most physiological characterizations.
FIGURE 1 | Dendrogram showing the general characteristics of Acidobacteria type species only. The dendrogram was obtained from nearly full-length 16S rRNA gene sequences aligned using the Ribosomal Database Project, and was built with MEGA7 (Tamura et al., 2011) using the Neighbor-joining method (Saitou and Nei, 1987); bootstrap values, based on 1000 repetitions, are shown next to the branches (Felsenstein, 1985). There were a total of 1343 positions in the final dataset. 1. Acidobacterium capsulatum was originally described as an aerobic bacterium, but weak anaerobic growth by fermentation was later demonstrated (Pankratov et al., 2012). 2. H. foetida is a homoacetogen and G. fermentans is fermentative. ND, not determined. Quote marks (') indicate that the species is deposited in a culture collection, but its name does not retain validity and standing in nomenclature.
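The Neighbor-joining clustering used for such dendrograms can be illustrated compactly. Below is a minimal pure-Python sketch of the algorithm on a toy four-taxon distance matrix; the taxon labels and distances are illustrative only, not the actual 16S rRNA distances behind the figure.

```python
# Hedged sketch of neighbor-joining (Saitou and Nei, 1987) on toy data.
# Distances and taxa are hypothetical, not the dataset used in the figure.

def neighbor_joining(dist):
    """dist: {frozenset({a, b}): distance}; returns a nested-tuple topology."""
    nodes = sorted({n for pair in dist for n in pair})
    while len(nodes) > 2:
        n = len(nodes)
        # Net divergence r_i of each node
        r = {i: sum(dist[frozenset({i, j})] for j in nodes if j != i)
             for i in nodes}
        # Join the pair minimizing the Q-criterion (n-2)*d(i,j) - r_i - r_j
        i, j = min(
            ((a, b) for x, a in enumerate(nodes) for b in nodes[x + 1:]),
            key=lambda p: (n - 2) * dist[frozenset(p)] - r[p[0]] - r[p[1]],
        )
        new = (i, j)  # the joined pair becomes an internal node
        for k in nodes:
            if k != i and k != j:
                dist[frozenset({new, k})] = 0.5 * (
                    dist[frozenset({i, k})] + dist[frozenset({j, k})]
                    - dist[frozenset({i, j})]
                )
        nodes = [k for k in nodes if k != i and k != j] + [new]
    return (nodes[0], nodes[1])

# Toy additive distances consistent with the tree ((A,B),(C,D))
d = {frozenset(p): v for p, v in [
    (("A", "B"), 0.2), (("C", "D"), 0.2),
    (("A", "C"), 0.4), (("A", "D"), 0.4),
    (("B", "C"), 0.4), (("B", "D"), 0.4),
]}
print(neighbor_joining(d))  # recovers the (("A","B"), ("C","D")) topology
```

In practice, tools such as MEGA add branch-length estimation and bootstrap resampling of alignment columns on top of this core joining step.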
THE IMPACT OF ACIDOBACTERIA GENOMES AND LINKS TO PHYSIOLOGICAL STUDIES
The first comparative genome analysis, of A. capsulatum and two bacteria originating from the Ellin collection, 'K. versatilis' Ellin345 (subdivision 1) and 'S. usitatus' Ellin6076 (subdivision 3), provided numerous insights into the physiology of members of the Acidobacteria (Ward et al., 2009; Challacombe et al., 2011). Since then, the number of acidobacterial genomes sequenced has remained rather limited. Currently, there are 10 published Acidobacteria genomes available: five from subdivision 1 (Ward et al., 2009; Rawat et al., 2012a,b, 2013, 2014), one from subdivision 3 (Ward et al., 2009), two from subdivision 4 (Costas et al., 2012; Lee et al., 2015), one from subdivision 8, class Holophagae (Anderson et al., 2012), and one from subdivision 23 (Stamps et al., 2014). Below, we summarize some of the major findings revealed by the currently available acidobacterial genome sequences linked to physiological studies.
CARBOHYDRATE METABOLISM
Among all the physiological aspects revealed by genomics, carbohydrate metabolism has been studied most widely, which is not surprising considering that carbon usage is one of the physiological characteristics required for the description of new species in taxonomic studies. At least one species of each of the eight recognized genera of subdivision 1 is able to use D-glucose, D-xylose, and lactose as carbon sources (Figure 2). The ability to use glucose and xylose makes sense given that cellulose and xylan are often the major carbon sources in the culture media most typically used for the isolation of Acidobacteria. In addition, these bacteria were able to use most of the tested oligosaccharides, although maltose and cellobiose were not able to support growth of Edaphobacter species. Interestingly, the majority of subdivision 1 species were unable to use fucose or sorbose, carbohydrates that are only minor components of plant cell walls and rather scarce in soil (Li et al., 2013).
Although several acidobacterial genomes have been shown to contain genes encoding enzymes for the degradation of different polysaccharides (Figure 2), experimental data on the use of polysaccharides generally do not support the genomic predictions. At least 50% of the genera have members able to use starch, laminarin, and xylan. In contrast, chitin usage has not yet been demonstrated for any member of Acidobacteria subdivision 1. Similarly, cellulose was another substrate predicted by genome annotation to be degraded by Acidobacteria. However, only Telmatobacter bradus (subdivision 1) has been demonstrated to use crystalline cellulose (Pankratov et al., 2012), and Edaphobacter cerasi (Yamada et al., 2014) is able to grow on CM-cellulose. Terracidiphilus gabretensis produces extracellular enzymes implicated in the degradation of plant-derived biopolymers, consistent with genome analysis showing the presence of the enzymatic machinery required for organic matter decomposition. In contrast to Acidobacteria from subdivision 1, members of subdivision 4 are able to use chitin as a carbon source (Foesel et al., 2013; Huber et al., 2014) (Figure 2). Most members of Acidobacteria subdivisions 1, 3, and 4 examined to date are unable to use carboxymethyl cellulose, but there is evidence that Aridibacter kavangonensis (subdivision 4) is able to utilize micro-crystalline cellulose (Huber et al., 2014). Although it is still premature to draw general conclusions about the degradation of these abundant polysaccharides by Acidobacteria in nature, xylan degradation has been broadly demonstrated and may play a role in plant cell wall degradation (Pankratov et al., 2008; Eichorst et al., 2011).
The discrepancies between genome predictions and observed activities may stem from our inability to provide cultivation conditions that lead to the expression of the target activities. Alternatively, current automatic genome annotation pipelines may not successfully differentiate glycosyl hydrolase genes involved, for instance, in chitin and cellulose degradation from genes involved in the degradation of other polysaccharides, such as xylan. Systematic studies on the degradation of cellulose by Acidobacteria grown under different culture conditions may help to test the hypothesis that these genes are regulated by the sugars present in the media, for example. On the other hand, it has been reported that in bacteria many genes implicated in cellulose degradation may actually be involved in the infection of plant cells or in the synthesis of bacterial cellulose (Koeck et al., 2014).
Enzymatic activities in Acidobacteria have usually been detected using commercial kits with chromogenic substrates. Members of subdivision 1 possess a broader range of enzymes related to sugar usage than those from other subdivisions (Figure 3; Supplementary Table S1). Galactosidases are enzymes involved in the hydrolysis of galactose-containing sugars, with β-galactosidases specifically involved in the degradation of lactose. Since all genera of Acidobacteria subdivision 1 are able to use lactose, it is not surprising to find this enzyme in their enzymatic profiles. Glucosidases are involved in the degradation of polysaccharides, especially cellulose and starch. Although starch is used by most Acidobacteria (Figure 2), cellulose degradation has not yet been unequivocally demonstrated for most Acidobacteria, as explained above.
Interestingly, a β-glucosidase has been purified and characterized from A. capsulatum (Abed et al., 2002). This enzyme was induced by the presence of cellobiose in the medium, and it was active on p-nitrophenyl-β-D-glucopyranoside (100%), cellobiose (39%), and β-gentiobiose (33%). In a subsequent study, the xynA gene for an endo-β-1,4-xylanase from A. capsulatum was cloned and expressed in Escherichia coli (Koch et al., 2008). This enzyme was demonstrated to be active on xylan and cellobiose (100% relative activity), but showed low activity on CM-cellulose (5.4%) and no activity on filter paper or Avicel. Janssen et al. (2002) were the first to use gellan gum as a solidifying agent in culture media for Acidobacteria isolation. In contrast to agar, which is obtained from seaweed, gellan gum is a substrate that is produced (and degraded) by soil bacteria. At least two species of Acidobacteria have demonstrated the ability to use gellan gum: Telmatobacter bradus and Bryocella elongata. One known pathway for the degradation of gellan gum involves the action of gellan lyases, α-L-rhamnosidases, unsaturated glucuronyl hydrolases, and β-D-glucosidases (Baik et al., 2013). Members of Acidobacteria subdivision 1 were reported to be able to use the monosaccharides rhamnose and glucose. Also, at least one β-D-glucosidase has been characterized from A. capsulatum, which may be involved in gellan gum degradation. However, an in silico comparison of the 10 available genomes offered no evidence of gellan lyase (EC 4.2.2.25) genes. Moreover, Naumoff and Dedysh (2012) reported that the presence of α-rhamnosidases is not a phylogenetically determined trait, but that this function was obtained by lateral gene transfer from Bacteroidetes and, in the case of Holophaga foetida, from a fungus. The enzymatic pathway for gellan gum degradation may merit further investigation, since this is a bacterial polysaccharide. Therefore, in addition to the possible metabolism of plant-derived polysaccharides, the usage of gellan gum suggests an interaction with other soil bacteria.

FIGURE 2 | Usage of carbon sources by Acidobacteria in culture-based experiments with type strain species. A positive score was recorded if at least one species within a genus is able to use the respective sugar. (A) Acidobacteria subdivision 1. (B) Acidobacteria subdivisions 3 and 4. Usage of carbon sources was obtained from the original references describing each type species, in order of publication (Kishimoto et al., 1991; Liesack et al., 1994; Coates et al., 1999; Eichorst et al., 2007; Fukunaga et al., 2008; Koch et al., 2008; Kulichevskaya et al., 2010, 2012, 2014; Pankratov and Dedysh, 2010; Männistö et al., 2011, 2012; Okamura et al., 2011; Dedysh et al., 2012; Izumi et al., 2012; Pankratov et al., 2012; Baik et al., 2013; Foesel et al., 2013, 2016; Losey et al., 2013; Crowe et al., 2014; Huber et al., 2014; Whang et al., 2014; Yamada et al., 2014; Pascual et al., 2015; Tank and Bryant, 2015; Garcia-Fraile et al., 2016; Jiang et al., 2016; Llado et al., 2016).
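The substrate percentages reported for these enzymes are activities expressed relative to the best substrate. A hedged sketch of that normalization follows; the raw rates are hypothetical and only the resulting percentages mirror the values reported for the A. capsulatum β-glucosidase (Abed et al., 2002).

```python
# Hedged sketch: expressing enzyme activities relative to the best substrate.
# Raw specific activities below are hypothetical placeholders.
raw_rates = {  # hypothetical specific activities (U/mg)
    "pNP-glucopyranoside": 12.0,
    "cellobiose": 4.68,
    "beta-gentiobiose": 3.96,
}
best = max(raw_rates.values())
# Relative activity = 100 * rate / rate_on_best_substrate
relative = {s: round(100 * v / best) for s, v in raw_rates.items()}
print(relative)
```

The best substrate is always 100% by construction; the other entries come out as the 39% and 33% reported in the study.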
NITROGEN METABOLISM
Nitrite reduction was observed in all three genomes reported in 2009, and nitrate reduction in two of the initially analyzed genomes (Ward et al., 2009). Nitrate reduction has been investigated in almost all members of subdivision 1, with the exception of Acidobacterium and Acidicapsa. Among Granulicella species, only G. mallensis was reported to perform nitrate reduction (Männistö et al., 2012). A. rosea and B. elongata are also able to reduce nitrate to nitrite. Among the other subdivisions, only Geothrix fermentans (subdivision 8) was shown to be able to reduce nitrate. This organism is an iron reducer that can use nitrate as an alternative electron acceptor. All of these Acidobacteria were able to use yeast extract, which, in addition to ammonium, may be a preferred nitrogen source (Coates et al., 1999). The presence of the nirA gene, which encodes an assimilatory nitrite reductase, also appears to be limited to subdivision 1, suggesting that members of this subdivision may reduce nitrate to nitrite via the assimilatory pathway; the nitrite is then further reduced to ammonia and assimilated into glutamate. Nevertheless, the direct uptake of ammonium seems likely, as all genomes described to date appear to contain genes for the ammonia transporter channel (Amt) family (TC 1.A.11). Nitric oxide reductase genes (norB but not norC) were identified in the 'Koribacter versatilis,' 'Solibacter usitatus' and Geothrix fermentans genomes. Genes encoding dinitrogenase, a heterotetramer of the proteins NifD and NifK (genes nifD and nifK, respectively), and dinitrogenase reductase, a homodimer of the protein NifH (gene nifH), were found only in the genome of H. foetida. Ammonia monooxygenase (amo) and nitrous-oxide reductase (nosZ) genes were not found in any of the available genomes.
Contrary to the previous report (Ward et al., 2009), homologs of nitrate reductases narB and narG were not found in available genome sequences via in silico genome comparison, and physiological testing would therefore be required to suggest this function in Acidobacteria. In summary, there is no clear evidence for the involvement of Acidobacteria in key N-cycle processes such as nitrogen fixation, nitrification, or denitrification.
EXOPOLYSACCHARIDES
Exopolysaccharide (EPS) production has frequently been reported for cultured Acidobacteria species (Eichorst et al., 2007; Pankratov and Dedysh, 2010; Whang et al., 2014). Initial genomic analyses revealed genes involved in EPS biosynthesis, specifically in the production of bacterial cellulose; genes for cellulose synthesis are encoded in the subdivision 1 genomes, with the exception of 'K. versatilis' Ellin345. However, it is not known whether this is a general characteristic of the phylum Acidobacteria. It has been suggested that EPS-producing bacteria may be able to survive for long periods in soil due to the protection provided by their EPS. The dominance of Acidobacteria in acidic environments and their resistance to pollutants such as uranium (Ellis et al., 2003; Gremion et al., 2003; Barns et al., 2007), petroleum compounds (Abed et al., 2002), linear alkylbenzene sulfonate (Sanchez-Peinado et al., 2010), and p-nitrophenol (Paul et al., 2006) may therefore be related to the ability to produce large amounts of EPS.
The functions of EPS in soil are numerous. It may be involved in the formation of the soil matrix, may serve as a water and nutrient trap, and may mediate bacterial adhesion that facilitates soil aggregate formation (Eichorst et al., 2007; Pankratov and Dedysh, 2010). Based upon the presence of cellulose synthesis genes and a large number of novel high-molecular-weight proteins with excretion pathway motifs, it has been postulated that many Acidobacteria have the potential to form biofilms, resist desiccation, and facilitate soil aggregate formation. However, to date, there are still no physiological studies demonstrating actual acidobacterial EPS production or its ecological role.
TRANSPORTERS
Acidobacteria have a large proportion of genes encoding transporters (Challacombe et al., 2011). The comparison of three acidobacterial genomes (A. capsulatum and Ellin345 from subdivision 1, and Ellin6076 from subdivision 3) showed that about 6% of the total coding sequences are transporters (Ward et al., 2009). The carbohydrate transport and metabolism category in the cluster of orthologous groups (COG) classification ranges from 8.6% in Terriglobus saanensis type strain SP1PR4T (Rawat et al., 2012b) to 9.18% in Granulicella mallensis type strain MP5ACTX8T (Rawat et al., 2013). An overview of transporter families in 10 acidobacterial genomes is provided in Figure 4. The majority of transporters found in acidobacterial genomes belong to the drug/metabolite transporter superfamily. The high number of different transport systems facilitates the acquisition of a broad range of substrate categories, including amino acids, peptides, siderophores, cations, or anions. The presence of transporters with a broad substrate range for nutrient uptake suggests an advantage for Acidobacteria in complex environments and adaptation to oligotrophic conditions, such as nutrient-limited soils (Figure 4).
Although iron metabolism and iron transporters were discussed in genome sequence exploration studies, these characteristics have not been unequivocally demonstrated in culture-based studies. The only direct indication is the observation of iron accumulation in B. elongata. Based on genome content, we speculate that representatives of subdivisions 4 and 8 and B. aggregatus may also be able to take up ferric iron, since genes encoding transport systems involved in translocation across the outer and cytoplasmic membranes, i.e., cobalamin/Fe3+-siderophore uptake transporters and Fe3+-hydroxamate transporters, have been identified in these genomes. Genomic analyses also suggest that Acidobacteria may release siderophores to scavenge iron from soil minerals by forming Fe3+ complexes that can be taken up by these transporters, or may use siderophores from other microorganisms.
ECOLOGICAL INFERENCES DERIVED FROM METAGENOMIC APPROACHES
Large genome fragments recovered by metagenomics may contain intact metabolic pathways, making them useful for mining ecologically relevant genes from the environment. The first acidobacterial metagenomic insert was described by Liles et al. (2003). From a bacterial artificial chromosome (BAC) library, 12 (out of 24,400) clones were identified as containing acidobacterial 16S rRNA sequences (nine clones from subdivision 6, two from subdivision 4, and one from subdivision 5), and one clone affiliated with subdivision 5 was selected for full sequencing. To date, there is no representative isolate available for subdivision 5. The annotation of 20 ORFs revealed genes involved in cell cycling, cell division, folic acid biosynthesis, DNA repair, and an ABC transporter. In addition, a novel 1,4-butanediol diacrylate esterase gene was found with 40% sequence identity to an esterase from Brevibacterium linens. This enzyme is known to catalyze the conversion of insoluble butanediol diacrylate to a hydrolyzed soluble form for use as a carbon source, suggesting that the bacterium containing this fragment may possess this capability.
Another six acidobacterial genomic fragments (four of six related to subdivision 6) were recovered from a sandy ecosystem (Quaiser et al., 2003). Interestingly, two of the recovered clones affiliated with subdivision 6 contained regions of homology encoding a tyrosyl-tRNA synthetase, a metal-dependent protease, and eight or nine purine biosynthesis proteins. In a subsequent examination of fosmid libraries from deep-sea sediments, Quaiser et al. (2008) recovered this same syntenic region in eight out of 11 acidobacterial genome fragments affiliated with subdivision 6. In a metagenomic study of Acidobacteria in a former agricultural soil, an additional four out of 17 fosmids (from a library of 28,800 clones) were recovered with the same genomic region (Kielak et al., 2010). Thus, it appears that a large percentage of the subdivision 6 Acidobacteria present in both terrestrial and marine environments contain this conserved genomic region adjacent to their rRNA operons. However, the ecological and evolutionary significance of this striking pattern remains unknown.

FIGURE 5 | Phylogenetic tree of the Acidobacteria subdivisions. The tree is based on 220 sequences from 26 different subdivisions (GPs) of Acidobacteria retrieved from the Silva database (http://www.arb-silva.de/) and classified with the RDP classifier. The sequences were aligned in Clustal X and conserved blocks were selected from the multiple alignment with Gblocks (Talavera and Castresana, 2007) for phylogenetic analysis. The tree was built with the Neighbor-Joining clustering algorithm with 1000 bootstrap replicates; circles on the branches represent bootstrap support above 75%. The outgroup is Roseiflexus castenholzii.
In a study designed to recover genes encoding the synthesis of N-acyl homoserine lactones (NAHLs), Riaz et al. (2008) identified a qlcA gene with lactonase activity for the degradation of NAHLs. Sequencing of the genomic fragment containing this gene revealed that nine out of 20 ORFs were related to sequences derived from members of Acidobacteria. Similarly, genes involved in polyketide synthesis pathways were identified from a fosmid library in cloned inserts containing genes significantly similar to genes from 'S. usitatus' (Parsley et al., 2011). Primers targeting an mtaD homolog (encoding a protein involved in myxothiazol biosynthesis) identified in a metagenomic library were also used to screen acidobacterial isolates. In four out of six isolates examined (belonging to subdivisions 3, 4, and 6), mtaD-homologous sequences were identified, suggesting widespread distribution of PKS pathways among Acidobacteria (Parsley et al., 2011). Genes involved in PKS biosynthesis were also identified in sequenced acidobacterial genomes (Ward et al., 2009). The only report of antibacterial metabolites in relation to genomic potential was by Craig et al. (2009), who isolated and characterized metabolites derived from acidobacterial genome fragments hosted by Ralstonia metallidurans. These compounds showed inhibitory activity against E. coli, Bacillus subtilis, and Staphylococcus aureus. However, further analysis revealed that the novel metabolites appeared to arise from a mixed biosynthetic pathway involving both the type III polyketide synthase encoded by the acidobacterial genome fragments and endogenous R. metallidurans enzymes. The multiple reports of PKS pathways identified in Acidobacteria suggest a widespread distribution of such pathways across this phylum, pointing to a possible role in the persistence, resistance, and abundance of these bacteria in soil ecosystems.
However, without the characterization of the polyketide products, the production of antimicrobials by Acidobacteria remains speculative.
A moderately thermostable lipase (optimum temperature between 50 and 60°C) from a member of the Acidobacteria phylum was also described by a metagenomic approach applied to forest soil (Faoro et al., 2012). Interestingly, phylogenetic analysis revealed that the lipase-encoding gene was of fungal origin and was acquired via horizontal gene transfer. In total, large-insert metagenomic analyses have served to recover numerous acidobacterial genomic regions, but the actual ecological insights offered by such fragmented datasets remain limited.
ENVIRONMENTAL SURVEYS THAT CORRELATE Acidobacteria DISTRIBUTION WITH ENVIRONMENTAL FACTORS OR CONSTRAINTS
Given the high diversity within the Acidobacteria phylum, as well as within particular subdivisions (Figure 5), it is expected that they also encompass a wide range of physiological traits, as observed for other highly abundant and diverse bacterial phyla, such as the Proteobacteria. However, most studies to date have focused on Acidobacteria at the phylum level, leading to gross generalizations that may not mirror the ecological traits of lower taxonomic levels. Nevertheless, some general trends have been discerned from such broad-level analyses. In the most expansive study conducted to date, pyrosequencing of 16S rRNA gene fragments was used to examine the biotic and abiotic factors that most influence the abundance, diversity and composition of acidobacterial communities in different types of soils (88 types; Jones et al., 2009). Fierer et al. (2007) showed, based upon 16S rRNA gene sequence distributions across 71 soils differing in geochemical characteristics, that the abundance of Acidobacteria was generally higher in soils with very low resource availability (low C mineralization rate) and that the proportions of Acidobacteria were higher in C-poor bulk soils than in the rhizosphere. However, this position has been challenged by Jones et al. (2009) and Navarrete et al. (2013b), who found a positive correlation between acidobacterial abundance and organic carbon availability. At the phylum level, many studies have shown that Acidobacteria are sensitive to inorganic and organic nutrient inputs (Cederlund et al., 2014; Koyama et al., 2014; Pan et al., 2014; Navarrete et al., 2015), and Acidobacteria seem to play a beneficial role in soil nutrient cycling and plant growth during soil recovery after drastic disturbance (Huang et al., 2015).
A number of studies have compared acidobacterial distribution and diversity in relation to proximity to plant roots or plant exudates. Numerous studies based on 16S rRNA sequences have shown a higher proportion and diversity of Acidobacteria in bulk soil than in the rhizosphere (Marilley and Aragno, 1999; Sanguin et al., 2006; Fierer et al., 2007; Singh et al., 2007; Kielak et al., 2008). Other studies have found Acidobacteria to be abundant in the red pepper (Capsicum annuum L.) rhizosphere (Jung et al., 2015). Additionally, shotgun metagenomics has shown that Acidobacteria are overrepresented in the soybean rhizosphere compared to the bulk soil.
In several cases, Acidobacteria have appeared to tolerate various pollutants such as PCBs and petroleum compounds, linear alkylbenzene sulfonate, and p-nitrophenol (Abed et al., 2002; Paul et al., 2006; Sanchez-Peinado et al., 2010), as well as heavy metals (Ellis et al., 2003; Gremion et al., 2003; Barns et al., 2007), leading to speculation that Acidobacteria may be involved in the degradation of certain pollutants. However, although the relative abundance of Acidobacteria has often been correlated with specific pollutants, no data reported to date support actual pollutant-degradation activities. Such pollutants appear to have little effect on acidobacterial diversity levels. For example, uranium-contaminated soil possessed an extremely broad diversity of acidobacterial populations, and these soils were actually the basis for the expansion of acidobacterial subdivisions to 26 (Barns et al., 2007).
With the broader characterization of subdivisions and increasing depth of coverage it has now become possible to break down analyses to the subdivision level, which can be far more informative. The available studies provide distribution patterns of different acidobacterial subdivisions across different environmental gradients such as pH, nutrients, and carbon.
Especially striking is the predominance of Acidobacteria under low pH conditions, in particular members of subdivision 1 (Sait et al., 2006). On the other hand, Barns et al. (1999) suggested that some acidobacterial subdivisions have an aversion to low pH conditions in soil, and in other cases subdivision 6 has been either positively or negatively correlated with soil pH (Chan et al., 2006; Mukherjee et al., 2014). The abundance of subdivisions 1, 2, 3, 12, 13, and 15 was negatively correlated with soil pH, while subdivisions 4, 6, 7, 10, 11, 16, 17, 18, 22, and 25 showed a positive correlation. Similarly, terminal restriction fragment length polymorphism (T-RFLP) and cultivation strategies detected a significant decrease of Acidobacteria from subdivisions 1, 2, and 3 at soil pH higher than 5.5 (Männistö et al., 2007). This trend appears also to hold for cold-adapted populations of Acidobacteria (Lipson and Schmidt, 2004; Männistö et al., 2007). Similar trends have been found in other environments as well, and Acidobacteria from acid mine drainage systems seem to be even better adapted to more acidic conditions (pH 2-3) than Acidobacteria from soil environments (Kleinsteuber et al., 2008). Furthermore, it has been observed that the phylogenetic diversity of acidobacterial communities becomes increasingly constrained as soil pH deviates from neutrality (Jones and Martin, 2006). A possible explanation may lie in increased cell specialization and enzyme stability at more extreme pH, where closely related Acidobacteria may share similar cellular strategies to deal with discrepancies between intra- and extracellular pH. Despite this strong correlation with pH, it is not yet clear whether it represents a direct causal relationship or results from other environmental factors that co-vary with pH.
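At their core, such abundance-versus-pH screens compute a correlation coefficient between the relative abundance of a subdivision and soil pH across samples. The following is a hedged sketch using Pearson's r on toy relative-abundance data; the values are illustrative and not drawn from the cited surveys.

```python
# Hedged sketch of a subdivision-abundance vs. soil-pH correlation screen.
# All abundance values below are hypothetical, for illustration only.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

soil_pH = [3.8, 4.5, 5.2, 6.0, 6.8, 7.4]
subdiv1 = [0.42, 0.36, 0.27, 0.18, 0.10, 0.06]  # declines as pH rises
subdiv4 = [0.05, 0.08, 0.12, 0.17, 0.22, 0.26]  # rises with pH

print(round(pearson(soil_pH, subdiv1), 2))  # strongly negative
print(round(pearson(soil_pH, subdiv4), 2))  # strongly positive
```

Published surveys typically use rank-based statistics (e.g., Spearman) and correct for multiple testing across subdivisions; the sketch only shows the basic direction-of-association computation.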
In fact, by measuring soil factors such as Al, Ca, Mg, K, B, and micronutrients that had not been taken into account in previous studies, Navarrete et al. (2013b) demonstrated that subdivisions 4, 6, and 7 may actually respond to decreases in soil Al and in soil Ca and Mg in tropical soils. Subdivisions 6 and 7 responded to high contents of soil Ca, Mg, Mn, and B, elements that are required for the growth of all living organisms. Magnesium ions are required by a large number of enzymes for their catalytic action, including all enzymes utilizing or synthesizing ATP and those that use other nucleotides to synthesize DNA and RNA. However, ionic magnesium cannot be taken up directly because biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium and other ions both into and out of cells (Beyenbach, 1990). Future studies on the functions of Acidobacteria subgroups 4, 6, and 7 will certainly elucidate the role of these subgroups in soil. In addition, subdivision 10 has been shown to correlate with soil factors linked to soil acidity such as pH, Al, and Al saturation, while subdivision 13 was correlated with soil P, B, and Zn (Navarrete et al., 2013b). Subdivision 2 has been reported in Amazonian forest soil (Navarrete et al., 2013b) and in Mata Atlantica and Cerrado soils (Catao et al., 2014) to be correlated with Al3+ levels and CO2 concentration. Aluminum in tropical and subtropical soils is toxic to crops (Yachi and Loreau, 1999). This might suggest that subdivisions 2 and 10 have metabolic tolerance to aluminum in soil. Subdivision 4 abundance tends to increase with increasing soil pH, suggesting different physiologies for members of this subdivision (Foesel et al., 2013).
Thus, it is important to treat soil pH as a master variable that is related to changes in other soil factors, such as Al concentration and macro- and micro-nutrient availabilities (McBride, 1994), which may represent the actual drivers of the observed microbial community dynamics. Navarrete et al. (2013b) and Pessoa-Filho et al. (2015) showed that different subdivisions have disparate correlations with soil nutrient or chemical status. Although subdivision 1 correlates negatively with P, C, and N, members of subdivisions 5, 6, and 17 appeared to be highly abundant in more nutrient-rich soils. Similarly, Männistö et al. (2012) observed phenotype-dependent responses (within members of subdivisions 1 and 2) to seasonal changes in an Arctic tundra soil ecosystem, which were related to nutrient and carbon availability. Foesel et al. (2013) provided culture-independent evidence for distinct niche specialization of different Acidobacteria, even from the same subdivision, due to particular soil physicochemical (pH, temperature, nitrogen or phosphorus) and biological parameters (respiration rate, abundances of ciliates or amoebae, vascular plant diversity) in grassland and forest soils. Pessoa-Filho et al. (2015) showed that the natural dominance of acidobacterial subdivisions 4 and 6 correlated with the Mg and Ni contents of a serpentine tropical savanna soil. Additionally, da Rocha et al. (2013) showed in a qPCR-based study that the Holophagae (subdivision 8) respond to leek roots either by increased cell division and/or by an increase in genome quantity per cell.
ARE ACIDOBACTERIA OLIGOTROPHS?
The strong negative correlation between the abundance of Acidobacteria and the concentration of organic carbon in soil has led to the conclusion that members of this phylum may be oligotrophic bacteria, although it has been pointed out that not necessarily all members would be oligotrophic (Fierer et al., 2007). In addition, genome sequences revealed the presence of only one or two copies of 16S rRNA genes, suggesting lower growth rates, which have previously been correlated with oligotrophy (Klappenbach et al., 2000; Eichorst et al., 2007; Kleinsteuber et al., 2008; Ward et al., 2009). It is important to mention that although cultivation strategies for Acidobacteria isolates often rely on nutrient-limited media (Davis et al., 2005), some of the isolates were shown to be able to grow at higher carbon source concentrations (Eichorst et al., 2007; de Castro et al., 2013).
The two observations mentioned above (negative correlation of Acidobacteria with organic carbon and lower growth rates) are also consistent with the ecological role of K-strategists. It has been predicted that K-strategists prosper in environments with low nutrient abundance, which is not the same as saying that they are oligotrophs (Andrews and Harris, 1986). Compared to r-strategists, K-strategists are predicted to have lower growth rates but high efficiency in converting nutrients to biomass, as well as high tolerance to toxic compounds, among other characteristics. Microbes that adopt the ecological K strategy are better competitors in oligotrophic environments (Andrews and Harris, 1986). Although the term oligotroph has been used in different ways, it usually describes an organism that is not able to grow or thrive in environments with high nutrient concentrations (Koch et al., 2008). However, it may be premature to assume that all Acidobacteria have the same ecological strategy, since metagenomic data and the physiological descriptions of different subgroups indicate high variation within this phylum.
ACIDOBACTERIAL INTERACTION WITH OTHER MICROBES
Additional evidence for interaction with other soil bacteria is the fact that Edaphobacter aggregans and B. elongata were isolated from co-cultures with methanotrophic bacteria. It was demonstrated that B. elongata was unable to use CO2 and other C1 carbon compounds, which would be produced by the methanotrophic partner. Instead, it was proposed that the acidobacterium was using the exopolysaccharides produced by the methanotroph as a carbon source (Koch et al., 2008; Dedysh et al., 2012).
It has been suggested that there is an ecological relationship between Acidobacteria and Proteobacteria, because they are often observed to be intimately associated with each other in the environment and may influence each other's position in the community. Meisinger et al. (2007) observed, via fluorescence in situ hybridization (FISH) counts, that members of subdivisions 7 and 8 were always associated with epsilon- or gamma-proteobacteria in filamentous microbial mats in hydrogen sulfide-containing springs. It was therefore hypothesized that these Acidobacteria often live as chemo-organotrophs on autotrophically fixed carbon in the poorly oxygenated regions. Enrichment strategies have also often recovered consortia comprising Acidobacteria and Proteobacteria, as exemplified by the co-cultivation of subdivision 6 members from freshwater lake sediments with Alphaproteobacteria (Spring et al., 2000). However, it is not yet clear whether co-cultivation stems from overlapping niches between the different consortium members or from necessary metabolic interactions. It has been suggested that co-cultures containing Acidobacteria should be studied more closely to reveal potential ecological interactions and growth preferences. Also, given advances in sequencing, such enrichment cultures should be able to yield full genome sequences of a much broader range of Acidobacteria than is available in pure culture. Certain groups of Proteobacteria have been associated with copiotrophic lifestyles, and given this association, Smit et al. (2001) hypothesized that the ratio between Proteobacteria and Acidobacteria (P/A) may provide insight into the general nutrient status of soils. Low P/A ratios would be indicative of oligotrophic soils, while high ratios would be observed under copiotrophic conditions.
CONCLUSION AND FUTURE DIRECTIONS
The high abundance and ubiquity of Acidobacteria in soils raise questions about the physiological traits that have led to this marked success. Although genome sequences have provided important information, genomic analyses have often not been informed by physiological studies and remain highly skewed toward Acidobacteria subdivision 1, the group for which most cultures are available. There is therefore an urgent need to isolate and sequence genomes of representatives from other subdivisions in order to understand their basic characteristics. Given the still problematic cultivation of Acidobacteria, techniques such as microcultivation and single-cell sequencing should provide steps forward in obtaining a more representative range of acidobacterial genomes. In addition, with the increasing throughput of shotgun metagenomic studies and associated postgenomic approaches, it should be possible to start dissecting acidobacterial genomes from environmental datasets, thereby circumventing the need for cultivation. As an intermediate step, metagenomic analyses of more simplified systems, such as enrichment and non-axenic cultures, should also yield access to important genome information. It must, however, be stressed that cultivation efforts remain a top priority, as these provide the necessary material for physiological studies and for confirming genomic predictions.
The 16S rRNA data provided by next-generation sequencing, together with soil chemistry (macro- and micronutrients), can help in elaborating specific culture media for the isolation of different Acidobacteria subdivisions. Despite the limitations of current genome-based studies, the genomes obtained to date give important hints about the factors that explain the successful adaptation of this phylum to harsh soil conditions. These factors include the large number of high-affinity transporters, the potential utilization of a wide variety of carbohydrates as substrates, resistance to antibiotics and production of secondary metabolites, the production of EPS, and the potential use of bacterially produced polymers such as gellan gum.
Although direct evidence is still scarce, genomic studies point to the decomposition and utilization of natural polymers such as chitin, cellulose, EPS, and gellan gum as potentially important aspects for future studies. Better knowledge of the production of EPS, biofilm, and secondary metabolites in the Acidobacteria subdivisions is also important for understanding the survival, resistance, and persistence of members of this phylum in soil, as well as their possible interactions with other soil microorganisms. As Acidobacteria are ubiquitous, they are expected to interact, positively or negatively, with other soil inhabitants. Unraveling these interactions is therefore vital for a proper understanding of their role in terrestrial ecosystem functioning.
Additionally, the recovery of 16S rRNA genes from the environment should be taken forward. These surveys have provided new insight into the distribution of the different acidobacterial subdivisions and their relation to environmental variables, but more experimental approaches need to be coupled with the high-throughput toolbox to tease out the actual roles of these variables. Further molecular studies that look at activity, such as metatranscriptomics and stable isotope probing (SIP) approaches, might also be considered.
AUTHOR CONTRIBUTIONS
Analyzed the data: AK, CB, and EK. Contributed reagents/materials/analysis tools: JvV and EK. Wrote the paper: AK, CB, GK, JvV, and EK.
Epidemiological profile of microbial keratitis in Alexandria, Egypt: a 5-year retrospective study
Objective: To evaluate the epidemiological profile of microbial keratitis in Alexandria, Egypt, with special emphasis on risk factors, visual outcome, and microbiological results.

Methods: This retrospective study reviewed the files of patients treated for microbial keratitis over a period of 5 years at the Alexandria Ophthalmology Hospital Cornea Clinic, Alexandria, Egypt, between February 2017 and June 2022. Patients were evaluated for risk factors (e.g., trauma, eyelid disorders, co-morbidities, and contact lens use), as well as their clinical picture, the identified microorganisms, visual outcomes, and complications. Non-microbial keratitis and incomplete files were excluded from the study.

Results: A total of 284 patients were diagnosed with microbial keratitis in our study. Viral keratitis was the most common cause (n = 118, 41.55%), followed by bacterial keratitis (n = 77, 27.11%), mixed keratitis (n = 51, 17.96%), Acanthamoeba keratitis (n = 22, 7.75%), and, least commonly, fungal keratitis (n = 16, 5.63%). Trauma was the most common risk factor for microbial keratitis (29.2%). Fungal keratitis had a statistically significant association with trauma (p < 0.001), while contact lens use had a statistically significant association with Acanthamoeba keratitis (p < 0.001). The percentage of culture-positive results in our study was 76.8%. Gram-positive bacteria were the most frequently isolated bacteria (n = 25, 36.2%), while filamentous fungi were the most frequently isolated fungi (n = 13, 18.8%). After treatment, there was a significant increase in the mean visual acuity in all groups; the gain was highest in the Acanthamoeba keratitis group, with a mean difference of 0.262 ± 0.161 (p = 0.003).

Conclusion: Viral keratitis followed by bacterial keratitis were the most frequent etiologic agents of microbial keratitis found in our study.
Although trauma was the most frequent risk factor for microbial keratitis overall, contact lens wear was an important preventable risk factor in young patients. Performing cultures properly, whenever indicated, before starting antimicrobial treatment increased the culture positivity rate.
Introduction
Microbial keratitis (MK) is an infection of the cornea caused by a range of pathogens, including bacteria, viruses, parasites (e.g., Acanthamoeba), and fungi (yeasts and filamentous fungi). It is considered a potentially sight-threatening disease if improperly managed, especially in developing countries [1]. The incidence of the disease varies around the world: in the United States it is 11 cases per 100,000 inhabitants [2], while in developing countries the number is far higher, reaching 799 cases per 100,000 inhabitants per year in Nepal [3].
A history of contact lens (CL) wear, ocular trauma, changes in the ocular surface (blepharitis, penetrating keratoplasty, and dry eye), and systemic diseases (diabetes and rheumatoid arthritis) are the most significant risk factors associated with the onset of MK [4].
The diagnosis of MK is made on clinical grounds together with microbiological evaluation [5]. The microbiological profile of MK shows great differences worldwide: an American study found that bacterial keratitis is more prevalent in the northern, cooler states, while fungal keratitis is more prevalent in the southern states [6]. Because of the continuous shifts in microbiological and antibiotic-resistance profiles reported in several studies, microbiological investigation and antibiotic susceptibility testing are mandatory for providing effective treatment [7].
Although MK is one of the main causes of corneal blindness and visual disability, especially in developing countries [8], there has been little previous reporting of MK epidemiology in our region. This study aimed to characterize the epidemiological profile of, and the most important risk factors for, MK at Alexandria Ophthalmology Hospital, Alexandria, Egypt.
Methods
This is a retrospective study of patients diagnosed with microbial keratitis between February 2017 and June 2022 at the cornea clinic of Alexandria Ophthalmology Hospital in Alexandria, a Mediterranean city in Egypt at the western edge of the Nile River. As a large specialized hospital, it is an important referral centre for Alexandria and the surrounding cities. The study was conducted after approval from the Medical Research Ethics Committee, Ministry of Health and Population of Egypt, and included patients of both sexes and all ages. Non-microbial keratitis, including Mooren's ulcers, chemical burns, and shield ulcers, was excluded. Files with incomplete data and patients lost to follow-up before complete healing were excluded from further analysis.
The relevant data were collected from the hospital's medical records of patients diagnosed with MK at the cornea clinic and then analysed using appropriate statistical methods. The collected data included the patients' age, sex, general history of systemic diseases, ocular history of MK (onset, duration of symptoms, and history of recurrence), and risk factors (trauma, CL use, and previous ocular surgery). Ophthalmologic examination data included lid examination, visual acuity at presentation and after complete cure, and ulcer features at initial presentation (site, size, and depth). Ulcer sites were classified as central (involving the central 4-mm diameter of the cornea) or peripheral. Ulcer size was classified as small (< 2 mm), moderate (2-5 mm), or large (> 5 mm). The density of infiltration, the severity of corneal oedema, and the presence of hypopyon and keratic precipitates (KPs) were documented, as were the corneal scraping results, the treatment given, and the clinical outcome. Visual acuity was measured with Snellen's chart and recorded in decimal notation; participants with counting fingers, hand motions, light perception, and no light perception were assigned decimal values of 0.02, 0.004, 0.002, and 0, respectively.
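The decimal coding just described can be sketched in a few lines. This is only an illustrative helper: the four qualitative mappings come from the study, while the Snellen-fraction parsing and the function name are our own hypothetical conveniences.

```python
# Illustrative sketch of the visual-acuity coding used in this study.
# The four qualitative mappings are taken from the paper; the Snellen
# fraction parsing and the function name are hypothetical conveniences.
QUALITATIVE_VA = {
    "counting fingers": 0.02,
    "hand motions": 0.004,
    "light perception": 0.002,
    "no light perception": 0.0,
}

def va_to_decimal(record: str) -> float:
    """Convert a recorded acuity ('6/12' or a qualitative category) to decimal."""
    key = record.strip().lower()
    if key in QUALITATIVE_VA:
        return QUALITATIVE_VA[key]
    numerator, denominator = key.split("/")  # e.g. "6/12" -> 0.5
    return float(numerator) / float(denominator)

print(va_to_decimal("6/12"))              # 0.5
print(va_to_decimal("counting fingers"))  # 0.02
```

A single decimal scale of this kind is what makes the before/after mean-difference comparisons reported later in the paper possible.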
Microbiological investigation protocol
Corneal scraping was ordered according to the American Academy of Ophthalmology recommendations [9]. Under aseptic conditions and after instillation of topical anaesthetic eye drops, corneal scrapes were obtained with a sterile No. 15 blade, aiming at the ulcer edge and floor, for in vitro culture. Culture media included chocolate agar, blood agar plates (BAP), MacConkey agar, Sabouraud's dextrose agar plates (SDA), and brain-heart infusion enrichment broth (BHI). All media were sent to the Alexandria Ophthalmology Hospital Microbiology Laboratory, where they were incubated at 37 °C for 24 to 48 h. Incubated BHI broth was inspected for turbidity; turbid broth was sub-cultured on BAP, MacConkey agar, and SDA. When growth was present, the colony morphology was inspected, and an antimicrobial susceptibility test was conducted on each identified bacterium by the disc diffusion method on Mueller-Hinton agar. SDA plates were incubated aerobically at 37 °C and checked for fungal growth every other day for 14 days.
In cases with a positive history of contact lens wear, the lens case was sent to the Medical Research Institute parasitology laboratory. Swabs from the contact lens, lens case, and lens solution [10] were spread on clean glass slides, fixed with methanol, and allowed to air-dry for 5 min. The slides were then stained with Giemsa stain and examined for Acanthamoeba trophozoites and cysts under an oil-immersion lens.
Management protocols
Viral keratitis was diagnosed and treated on the basis of its typical clinical appearance and/or previous ocular history and did not require microbiological investigation. Epithelial keratitis cases received a topical antiviral for 10 days, extended to 14 days in geographic ulcers (Fig. 1). Stromal keratitis and endotheliitis cases received systemic acyclovir 400 mg 5 times daily and topical steroids 5 times daily for one week, with gradual tapering until the oedema subsided. In stromal keratitis combined with an epithelial defect, steroids were withheld until complete epithelial healing. In neurotrophic ulcers, systemic doxycycline 100 mg was given twice daily for 1 month together with preservative-free artificial tears every 2 h, while autologous serum was used in resistant cases. In herpes zoster ophthalmicus (HZO), the systemic antiviral dose was increased to 800 mg 5 times daily for 2 weeks. Bacterial keratitis cases were divided into non-sight-threatening and sight-threatening keratitis. Non-sight-threatening keratitis, with small, superficial, off-axis lesions and an infiltrate of 2 mm or less, received empirical treatment according to a standard protocol with moxifloxacin monotherapy, a fourth-generation broad-spectrum fluoroquinolone [11]. Sight-threatening bacterial keratitis, characterized by medium or large ulcers, deep infiltration, rapid progression within 3 days, presence of hypopyon, or involvement of the visual axis, was treated with topical fortified vancomycin and fortified gentamycin to cover both Gram-positive and Gram-negative pathogens (Fig. 2). Drops were given every hour for the first few days to achieve therapeutic tissue concentrations and rapid control of the infection; the frequency was then reduced based on the clinical response [9]. Topical steroids were contraindicated until complete cure.
Oral or parenteral antibiotics were used only in ulcers with perforation, scleral involvement, or endophthalmitis. Treatment was modified primarily according to the clinical response, taking into consideration the results of culture and sensitivity testing, especially when the patient was not responding to initial therapy. Cases with blepharitis were treated with topical azithromycin twice daily along with lid hygiene; in severe cases, systemic doxycycline was added twice daily.
Fungal keratitis was diagnosed on the basis of a history of trauma or exposure to vegetable matter; the clinical presentation of raised or grey ulcers, satellite or multiple lesions, feathery edges, and thick hypopyon; and laboratory results. The standard treatment in mild cases was topical natamycin 5% every hour in combination with a prophylactic fourth-generation fluoroquinolone 5 times daily. Treatment was modified in cases not responding to natamycin: guided by the microbiological results, amphotericin B 0.15% was added for Candida spp., and voriconazole was administered in resistant cases. In severe cases with dense stromal infiltrate and thick hypopyon, systemic itraconazole 100 mg twice daily for 10 days was added to the topical treatment (Fig. 3). Treatment was continued, with a gradual decrease in frequency according to the activity of the keratitis, until resolution occurred: one month in mild cases and 3 months in severe cases [12].
Acanthamoeba keratitis cases were divided into mild and severe keratitis. Mild cases, with epitheliopathy and radial keratoneuritis, were treated with polyhexamethylene biguanide drops every hour around the clock for the first few days, with gradual tapering depending on the clinical response; in severe cases with ring infiltration, combined therapy with polyhexamethylene biguanide and propamidine 0.1% was given [13] (Fig. 4). Medications were continued for 3 months in mild cases and 6 months in severe cases to prevent relapses. Mixed keratitis was diagnosed when two or more types of microorganisms were simultaneously present during the same infective episode (Fig. 5). Treatment was adjusted according to the clinical picture and laboratory results.
General lines of therapy in impending perforation or perforated cases included systemic doxycycline 100 mg twice daily, systemic vitamin C 1 g twice daily, antiglaucoma eye drops (beta blockers or carbonic anhydrase inhibitors, CAIs), and cycloplegics.
Data were fed into the computer and analysed using IBM SPSS software, version 20.0 (Armonk, NY: IBM Corp). Data were tested for normality with the Shapiro-Wilk test. Categorical data were represented as numbers and percentages, and the chi-square test was applied to compare groups; the Monte Carlo or Fisher's exact correction was applied when more than 20% of the cells had an expected count of less than 5. ANOVA was used to compare the studied groups for normally distributed quantitative variables, followed by a post hoc test (Tukey) for pairwise comparison. The Kruskal-Wallis test was used to compare groups for non-normally distributed quantitative variables, followed by a post hoc test (Dunn's multiple comparisons test) for pairwise comparison, and the Wilcoxon signed-rank test was used to compare two periods for non-normally distributed variables. The significance of the results was judged at the 5% level. Superscript letters were added to the values of the different studied groups in the tables: values with different superscript letters differ significantly, while values with the same superscript letter do not.
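As a rough sketch, the test-selection workflow described above can be reproduced with SciPy rather than SPSS. The data below are entirely synthetic; only the decision logic (normality check, parametric vs non-parametric comparison, chi-square with a Fisher fallback, and a paired Wilcoxon test) mirrors the paper's description.

```python
# Synthetic sketch of the statistical workflow (SciPy instead of SPSS).
# Data are made up; only the test-selection logic follows the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.40, 0.19, 30)   # e.g. final visual acuities, group A
group_b = rng.normal(0.30, 0.20, 30)   # group B

# 1) Shapiro-Wilk normality check decides parametric vs non-parametric.
normal = all(stats.shapiro(g)[1] > 0.05 for g in (group_a, group_b))

# 2) Quantitative comparison: ANOVA if normal, Kruskal-Wallis otherwise.
stat, p = (stats.f_oneway if normal else stats.kruskal)(group_a, group_b)

# 3) Categorical comparison (e.g. a risk factor by keratitis type): chi-square,
#    falling back to Fisher's exact test when >20% of expected counts are < 5.
table = np.array([[24, 53], [11, 196]])   # illustrative 2x2 counts
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
if (expected < 5).mean() > 0.20:
    _, p_cat = stats.fisher_exact(table)

# 4) Paired before/after values (non-normal): Wilcoxon signed-rank test.
post = group_b + np.abs(rng.normal(0.10, 0.05, 30))
_, p_paired = stats.wilcoxon(group_b, post)

print(p, p_cat, p_paired)
```

Post hoc pairwise tests (Tukey, Dunn's) and the Monte Carlo correction are omitted here for brevity; the point is only that each comparison route depends on the variable type and the normality check.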
Results
A total of 585 patients were diagnosed with keratitis during the study period. Three hundred and one patients were excluded, and 284 patients with microbial keratitis were included. The collected data were divided according to the causative organism into 5 groups: viral, bacterial, fungal, Acanthamoeba, and mixed keratitis. Viral keratitis was the most common cause of microbial keratitis (118 cases, 41.55%), followed by bacterial keratitis (77 cases, 27.11%), mixed keratitis (51 cases, 17.96%), and Acanthamoeba keratitis (22 cases, 7.75%); the least common cause was fungal keratitis (16 cases, 5.63%) (Fig. 6).
Demographic data
The age of the studied population ranged from 2.5 to 88 years, with a mean of 40.4 years. The mean age of the Acanthamoeba group was significantly younger than that of the other groups (23.2 years, p < 0.001) (Table 1).
Of the 284 patients, 163 (57.4%) were male and 121 (42.6%) were female. In the Acanthamoeba group, all cases were female, which was statistically significant in comparison with the other groups (p < 0.001) (Fig. 7).
Risk factors
[Table 1. Comparison between MK groups according to age. p: p-value for comparison between the studied groups; SD: standard deviation; * statistically significant at p ≤ 0.05; numbers with different superscript letters differ significantly.]

Ocular trauma was the most common predisposing factor for microbial keratitis, occurring in 83 cases (29.2%). Thirty-three cases (11.6%) had a history of contact lens wear, and Acanthamoeba keratitis had a statistically significant association with contact lens wear (100%, p < 0.001). Of the 284 studied cases, 35 had blepharitis (12.3%), which was significantly more frequent in the bacterial group (24 cases, 31.2% of all bacterial keratitis patients, p < 0.001). Ocular surgery and diabetes mellitus were non-significant risk factors (Table 2).
• Onset duration and Cure duration
The time between the onset of complaints and examination differed among the groups. Most cases of bacterial keratitis (43, 55.8%) presented within the first week of complaints, which was statistically significant (p < 0.001). Eighty cases (67.8%) in the herpetic group and 28 cases (54.9%) in the mixed group presented between one week and one month after onset. Most Acanthamoeba keratitis cases (10, 45.5%) and fungal keratitis cases (7, 43.8%) had a statistically significant delayed presentation of more than one month (p < 0.001).
After excluding 69 cases that failed to show up for follow-up from further analysis, we found that the cure duration in most cases of bacterial keratitis (36, 62.1%) and viral keratitis (56, 56.6%) was 2 weeks or less, which was statistically significant (p = 0.011). The cure duration was longer, exceeding 2 weeks, in the fungal group (7, 87.5%) and in the mixed group (25, 65.8%) (Table 3).
• Visual acuity before and after treatment
There was a significant increase in the mean visual acuity in all groups. The Acanthamoeba group showed the largest gain in visual acuity (mean difference 0.262 ± 0.161), while the mixed group showed the smallest gain (mean difference 0.098 ± 0.155). The Acanthamoeba group also had the best mean final visual acuity (0.400 ± 0.191) (Table 4). We excluded 11 paediatric patients, whose vision could not be documented, and 69 cases that failed to follow up, hence the difference in the number of eyes before and after treatment.
• Corneal features
The absence of an ulcer was significantly associated with the viral and Acanthamoeba groups (p < 0.001). Central ulceration was present in 125 cases (44%). Medium-sized ulcers showed a statistically significant association with the fungal and mixed MK groups (p < 0.001). Superficial ulceration was present in 179 cases (63%), while corneal perforation was significantly absent in the fungal and Acanthamoeba groups (Table 5).
Regarding infiltration, the viral group was significantly associated with absent or minimal infiltration, whereas the bacterial, fungal, and mixed MK groups were significantly associated with dense infiltration (p < 0.001). KPs were significantly associated with the viral and mixed groups (72 cases, 25.4%, p < 0.001). Hypopyon was present in 45 cases (15.8%) and was significantly absent in the viral and Acanthamoeba groups (p < 0.001) (Table 6).
Viral keratitis
One hundred and fifteen cases (97.5%) were caused by herpes simplex virus, while only 3 cases (2.5%) were caused by herpes zoster virus. Stromal keratitis was the most common presentation of HSV (71 cases, 60.2%). Bilateral herpes simplex keratitis occurred in only 3 cases (2.5%) (Table 7).
Microbiological profile
Corneal scraping for culture and sensitivity was indicated in 69 of the 144 cases of bacterial, fungal, and mixed keratitis. Fifty-three cultures (76.8%) were positive (Table 8). Thirty-four cases already receiving antimicrobial therapy and 41 non-indicated cases were excluded from corneal scraping and culture.
In suspected Acanthamoeba cases with a positive history of CL wear, cytological detection of Acanthamoeba trophozoites and cysts from the CL, lens case, and lens-cleaning solution was performed. Among the 22 Acanthamoeba cases, fifteen CL cases were investigated for Acanthamoeba; eight of them were positive (53.3%).
Fate and complications
Complications were encountered in 9 cases (4.2%): five (2.3%) in the bacterial group, and two each (0.9%) in the viral and mixed groups. Five patients had progressive corneal thinning and corneal perforation, one case ended in endophthalmitis, and two cases ended in corneal melting. Two cases were referred for penetrating keratoplasty and one case required tarsorrhaphy (Fig. 8).
Discussion
Microbial keratitis (MK) is considered a major cause of visual loss worldwide. Understanding its epidemiology, risk factors, etiological agents, and clinical characteristics helps in reaching an accurate diagnosis and, in turn, proper management. MK varies demographically, and hence regular regional updates are important. Our study was conducted to describe the latest update of the epidemiological profile of MK in Alexandria, Egypt. In our study, viral keratitis was the most common cause of microbial keratitis (n = 118, 41.55%). Similarly, the Asia Cornea Society Infectious Keratitis Study (ACSIKS) demonstrated that viral keratitis was the most common cause (n = 434, 46%) of MK in China (HSK 24% and HZO 17%) [14]. In our study, 115 cases (97.5%) were caused by herpes simplex virus and only 3 cases (2.5%) by herpes zoster virus. The higher incidence of HZO in China, as published by the ACSIKS study, may reflect the referral pattern to the ophthalmology centers included in that study. Two other studies, conducted in Menoufia, Egypt and in China, observed that 15% and 21% of MK cases, respectively, were caused by herpetic keratitis [15,16]. The reason for the variation may be the climatic difference between Alexandria and Menoufia: Alexandria has a cooler climate than Menoufia, which is located in the southern Nile Delta of Egypt. The reported incidence of bilateral herpetic keratitis in the literature varies from 1.3% to 12%, depending on the diagnostic criteria [17]. In our study, the incidence of bilateral cases was low, occurring in only 3 cases (2.5%) of HSV.
An important issue associated with herpetic keratitis is neurotrophic keratopathy (NK). NK can result in poor corneal healing, an increased risk of further MK, and other corneal complications such as melting and perforation [18]. NK occurred in 7 cases of herpetic keratitis and was responsible for the only 2 complicated cases in the viral group.
MK affects individuals across all age groups, but especially people aged between 30 and 55 years [19-22]. This is attributed to underlying risk factors, such as ocular trauma, associated with the working-age group. In our study, the mean age was in the fifth decade in all groups except the Acanthamoeba group, where it was in the third decade. Similarly, the studies of Tong et al. and Stapleton et al. reported that patients affected by CL-related MK were usually between 25 and 40 years old [23,24].
Interestingly, many studies have reported that CL-related MK exhibits a female predominance of 57-69% [25], similar to our results, in which all CL wearers (100%) were female. Apart from the Acanthamoeba group, there was a high male prevalence in all MK groups; likewise, studies of MK in South America [26], Asia [14], and Africa [27] reported a male prevalence ranging from 58 to 75%.
Ocular trauma was the most common predisposing factor for microbial keratitis in our study, occurring in 83 of the total cases (29.2%). Likewise, Srinivasan et al. [1] and Keay et al. [28] found that corneal trauma was the most common predisposing factor for microbial keratitis, in 65.4% and 36.4% of cases, respectively. Blepharitis was significantly more frequent in the bacterial group (n = 24, 31.2%). Schaefer et al. [29] also reported blepharitis as a predisposing factor for bacterial keratitis in 21% of cases. Other risk factors, e.g., ocular surgery and diabetes, showed no significant relationship; similar findings were reported by Keay et al. [28].
In our study, thirty-three cases (11.6%) were contact lens wearers, indicating that CL wear is becoming an important risk factor, mainly due to increasing urbanization, as was the case in Taiwan [30]. Acanthamoeba keratitis (AK) is strongly related to CL wear and poor lens hygiene, especially when lenses are washed with tap water. Al-Herrawy et al. isolated Acanthamoeba spp. from finished water samples in Egypt [31], and it is not surprising that Acanthamoeba organisms have been cultured from lens cases and saline cleaning solutions [32]. Early detection and diagnosis based on the characteristic clinical picture of AK are critical to the outcome of its clinical course [33,34]. Ulceration in AK does not occur until very late in the disease process, and only 29 to 49% of cultured AK cases give a positive result [35,36]. Hence, in our study, we depended on the cytological detection of Acanthamoeba trophozoites and cysts from CL cases, which has the advantage of being fast, easily performed, and readily available in most facilities [37]. Although a positive detection of Acanthamoeba in the lens case does not confirm the diagnosis, it is highly suggestive of it [38]. The differences in time to presentation between studies could be due to cultural issues, financial status, awareness, or access to eye-care facilities. About 80 patients (67.8%) in the herpetic group and 28 patients (54.9%) in the mixed group came for ocular examination between one week and one month after onset. Interestingly, most of the delayed presentations (more than one month) were in the Acanthamoeba keratitis group (10 cases, 45.5%), followed by the fungal keratitis group (7 cases, 43.8%). A similarly long duration before admission was also reported by Otri et al. [42].
Few studies have prospectively followed patients with microbial keratitis to monitor changes in visual acuity. In our study, there was a statistically significant increase in the mean visual acuity in all treated groups. Srinivasan et al. showed that patients with treated bacterial keratitis experience an approximately 2-line improvement in visual acuity from enrolment to 3 weeks [43]. In a prospective study of 273 individuals with presumed microbial keratitis in Nepal, 52.7% experienced ≥ 2 lines of improvement in pinhole visual acuity [44]. Additionally, a study of 30 patients with culture-proven bacterial keratitis found an average visual acuity improvement of 2.5 lines by 10 weeks [45].
A higher proportion of central keratitis was found in this study (61.6%), significantly so in the fungal, mixed, and bacterial groups (p < 0.001). Similarly, a study in Malaysia reported central ulceration in 69% of cases [46]. We found that moderate-to-large ulcers were more likely to occur in fungal keratitis, as also shown by other investigators [47,48]. The presence of hypopyon was significantly related to the fungal, bacterial, and mixed groups (p < 0.001). This agrees with the findings of Chidambaram et al., who reported that Aspergillus species and bacterial keratitis were more often associated with hypopyon [49].
The percentage of culture-positive results in our study was 76.8%, which was higher than the studies by Otri et al. in the United Kingdom (41%) [42], Omar et al. in Malaysian urban areas (47.5%) [39], and Tananuvat et al. in Thailand (25.6%) [50], and similar to the high rates of culture positivity in studies in the United States (82%) [51] and New Zealand (71%) [40]. Corneal scraping technique, methods of culturing, types of the causative organisms, different types of culturing media, and antibiotic treatment prior to corneal scraping could be the reasons contributing to this variation [39]. The high positivity in our study is attributed to the use of enrichment media (brain heart infusion broth) [52] and the proper scraping technique by well-trained ophthalmologists. An important issue to be mentioned is that the use of antimicrobial eye drops prior to culture was usually associated with negative results. Therefore, culture should be done whenever indicated prior to starting antimicrobial treatment.
Similar to other studies, most of the bacterial keratitis cases were due to Gram-positive organisms [53][54][55]. Toth et al. and Puig et al. stated that coagulase-negative Staphylococci (CoNS) were the most frequently isolated bacteria [41]. In contrast to our results, another Malaysian study [56] found Pseudomonas aeruginosa to be the main causative organism, along with other Gram-negative bacteria. In our study, Pseudomonas aeruginosa was the most common Gram-negative bacterium (13%), similar to a paper published by Toth et al. [41], in which Pseudomonas spp. was the etiological agent in 10% of cultured cases. This percentage is lower than that reported by Norina et al. (40%) [46].
The higher prevalence of bacterial keratitis (27.11%) over fungal keratitis (5.63%) in our study contrasts with a Japanese study, in which a higher prevalence of fungi (50.7%), mainly Fusarium, was reported [57]. An American study showed that the aetiology depends on the geographic location of the study population: bacterial keratitis was more prevalent in the cooler northern states, while in the warmer southern states and rural areas, fungal infections predominated [6]. This finding corresponds with our results, since our city, Alexandria, is a coastal city.
Since contact lens wear was found to be a serious preventable risk factor for microbial keratitis, public health services should be directed towards raising public awareness of this problem. The role of fever in predisposing to attacks of recurrent herpetic keratitis should be studied further, among other factors.
The limitations of this study include that it was performed retrospectively. A large number of incomplete medical records were excluded from the study, and this was detrimental in limiting the study's sample size. A larger prospective multi-centre study would gather more data.
Comparison of Extraction Techniques for the Determination of Volatile Organic Compounds in Liverwort Samples
This article compares four popular techniques for the extraction of volatile organic compounds (VOCs) from liverworts of the species Calypogeia azurea. Since extraction is the most important step in the analysis of the ingredients present in botanical preparations, the strengths and weaknesses of each technique are discussed. Selecting the appropriate extraction technique is a key step in determining the VOCs present in plants. Extraction should isolate all components present in the oil bodies of Calypogeia azurea without forming artifacts during treatment, and the best extraction method should yield the determined compounds in detectable amounts. Hydrodistillation (HD) using a Deryng apparatus, solid-liquid extraction (SLE), microwave-assisted extraction (MAE), and headspace solid-phase microextraction (HS-SPME) were used for volatile extraction. The extracts obtained were analysed by gas chromatography coupled to mass spectrometry (GC-MS) to determine the compounds.
Introduction
Plants synthesize and secrete many different volatile organic compounds (VOCs). Biologically active substances produced by plants are phytochemicals; however, they are not ubiquitous, being products of specialized metabolism restricted to specific plant families and species. A feature that distinguishes plant material is its wealth of compounds, which are very diverse in their physicochemical properties and occur over a very wide range of concentrations. The sample preparation step is important for the reliability of the results for the determined compounds. The extraction of biologically active compounds depends on many factors, including the extraction method, the type of raw material, and the extraction solvent. The selectivity of the extraction, the number of analytes in the sample, and ecological aspects are also of key importance [1]. Specialized metabolites may vary widely in polarity, solubility, volatility, and thermal stability, and carry different functional groups. They are often found in low concentrations in the raw material; therefore, there is no universal and simple method to isolate them. Obtaining VOCs is a difficult process, as they make up a small fraction of the plant raw material, and various extraction methods are used. The most common method of extracting essential oil from a plant is hydrodistillation (HD), most often carried out with a glass apparatus of the Clevenger or Deryng type. Both are recommended pharmacopoeial devices for determining the content of essential oils: the former is described by the European Pharmacopoeia (VI) and the latter by the Polish Pharmacopoeia (VII) [2]. The efficiency of HD is a fairly complex issue. The general disadvantage of distillation methods is that it is difficult to quantitatively determine the essential oil of small amounts of plant material, as the yields are typically low [3].
In the publications that have appeared thus far, various methods of extracting VOCs present in liverworts have been used [4][5][6].
Most liverworts contain oil bodies, which are intracellular organelles surrounded by a single membrane, in which a wide variety of specialized metabolites are synthesized and accumulated, such as terpenes, terpenoids, sesquiterpenes, sesquiterpenoids, and aromatics [15][16][17][18][19][20]. The presence of nitrogen, sulfur, or both nitrogen and sulfur compounds in liverworts is very rare. The most characteristic chemical phenomenon of liverworts is that most sesqui- and diterpenoids are enantiomers of those found in higher plants [6]. Furthermore, the determination of the composition of secondary metabolites, which falls under the scope of chemotaxonomy, can be one of the methods to help identify taxonomically difficult species [27].
In this study, various techniques were used to extract volatile components from Calypogeia azurea. The purpose of the research is to compare the most frequently used methods of extracting volatile organic compounds from liverwort cells. The results allow the provision of information on which extraction technique is most appropriate for this species of liverwort.
The collected information may be useful in further research into other liverworts from the Calypogeia genus.
Results and Discussion
Twenty-two samples of Calypogeia azurea from Poland (Table 1) were analysed for volatile specialized metabolites in the study. Supplementary Tables S1-S7 show the percentages of the compounds detected in liverwort cells. A total of 73 compounds were detected, 42 of which were identified. Depending on the extraction method used, the content of the identified VOCs differed. The study compares four extraction methods: three preparative methods, which used solvents of different polarities (HD, SLE, and MAE), and the non-preparative SPME method. The most common isolation method is SLE, and its extraction efficiency is highly dependent on the type of solvent used. The polarity of the extraction solvent strongly influences which compounds are recovered from the test sample, so the choice of solvent is critical for a complex sample matrix. The extraction solvent system is generally selected according to the purpose of extraction, the polarity of the components of interest, the polarity of the undesirable components, the total cost, safety, and environmental concerns [28]. Extracts prepared with the different solvent methods took on different colours. The oil produced by HD was slightly blue to purple; the compound 1,4-dimethylazulene (53) (the bold numbers in brackets refer to the compounds in Tables S1-S7) was responsible for this colour. Maceration in n-hexane gave colourless solutions, while extraction with methanol gave dark green ones. Methanol is a very good extractant for chlorophylls, especially for resistant vascular plants and algae [29]. In the case of MAE, the extract obtained took the form of a dark brown liquid, regardless of the solvent used.
The main component of the Calypogeia azurea liverwort is 1,4-dimethylazulene (53). This VOC belongs to the sesquiterpene hydrocarbons. The relative content of this compound during HD ranged from 16.64% (for n-hexane) to 19.49% (for m-xylene), depending on the solvent used to collect the extract. For this sesquiterpene, the best extraction method is SLE for 24 h with n-hexane as the solvent, giving a relative content of 59.62%. MAE yielded only 0.21-1.37% of this sesquiterpene (Figure 1).
The sum of sesquiterpene hydrocarbons produced is the highest for this research material and ranges from 13.73% to 78.07% of the total essential oil content, depending on the extraction method used. However, for the SPME method, the relative content of 1,4-dimethylazulene (53) is 42.67% of all detected VOCs present in liverwort cells. Another sesquiterpene compound, which is also a characteristic chemical compound for this species of liverwort, is anastreptene (18). The relative content of this compound ranges from 5.98% for MAE using methanol as a solvent to 15.74% for HD using m-xylene to collect the extract.
When this sesquiterpene is detected using the SPME method, its content in liverwort cells is 6.92%.
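The relative content values quoted throughout this section are percentages of the total integrated GC-MS peak area. As a minimal sketch of that calculation (the peak areas below are invented for illustration and are not measured values from this study):

```python
# Sketch of the relative-content calculation used for GC-MS data:
# each compound's integrated peak area as a percentage of the total area.
# All peak areas below are fabricated example numbers.

def relative_content(peak_areas):
    """Return each compound's percentage share of the total peak area."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

areas = {  # arbitrary instrument counts, chosen only for the example
    "1,4-dimethylazulene (53)": 5962.0,
    "anastreptene (18)": 1574.0,
    "other VOCs": 2464.0,
}

percentages = relative_content(areas)
print({name: round(p, 2) for name, p in percentages.items()})
```

Normalizing to the total peak area is why these values are relative contents rather than absolute concentrations; quantification in mg/kg (as in Table 2) additionally requires calibration against standards.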
Comparison of Solvent Extractions
The highest percentage of specialized metabolites could be extracted using SLE with n-hexane as the solvent for 24 h (99.51%); MAE using ethyl acetate as the solvent gave the lowest (55.62%). The solvent method would therefore seem to be the best; however, the amount of solvent used and the extraction time are unfavourable. Furthermore, the amounts of extracted VOCs were calculated for the HD and SLE methods (Table 2). The table shows that the best solvent for the SLE method is methanol, which can yield 0.33 mg/kg of VOCs from 1 g of the sample. In the SLE method, 1 g of liverwort was extracted with 10 cm³ of solvent. Unfortunately, concentrating the extracts to the same volume is a difficult task due to the possible loss of VOCs during the evaporation process. Table 2 shows that the optimal extraction time for SLE is 48 h, during which the greatest number of specialized metabolites can be isolated, with the exception of n-hexane and methanol; for these two solvents, however, the VOC content does not differ much between extraction times of 24 h and 48 h. In the course of the research, the percentage concentrations of the isolated VOCs were calculated for the individual extraction methods. During HD, m-xylene and n-hexane were used to collect the extract, and no major differences were found between the two solvents: the percentage of sesquiterpenes ranged from 50.95% for n-hexane to 55.25% for m-xylene, and that of aromatic compounds from 22.47% to 24.99%. Given the lack of significant differences in the percentages of the individual groups of VOCs, it seems better to use n-hexane during HD, because the oil obtained from the plant material can then be used for further research, e.g., for the isolation of individual compounds by preparative gas chromatography.
Comparison of HS-SPME with Solvent Methods
HS-SPME is a fairly fast technique for determining VOCs in complex matrices, such as the specialized metabolites produced by the oil bodies of Calypogeia azurea. Using this technique, it was possible to detect 98.17% of the compounds, including 85.84% identified. As shown in Table 3, a richer composition of VOCs can be obtained using SPME: compared with the solvent methods, SPME detects individual compounds at lower percentages, but the total number of compounds detected is greater than with SLE or HD.
For comparison, 66 of the 73 compounds could be detected using HS-SPME, whereas only 11 of the 73 compounds were extracted with the SLE method, which may suggest that SLE is not a good method for isolating VOCs from Calypogeia azurea.
In the case of HD, 53 compounds were detected with n-hexane (HD1) as the solvent and 52 with m-xylene (HD2), out of the 73 compounds that could be isolated with HS-SPME. When analyzing the extracts obtained by HD and SLE, VOCs appeared that could not be determined by SPME; examples are ledene (40), 4,5,9,10-dehydro-isolongifolene (51), and (+)-spathulenol (54). Conversely, HS-SPME can sometimes detect compounds that cannot be extracted by the solvent methods, such as α-pinene (5), β-pinene (6), and limonene (7). MAE (MAE1-MAE4) appears to be the least effective of all the methods: it extracts 56.71% of the total VOCs with methanol as the solvent and up to 70.09% with diethyl ether.
Excessively drastic extraction conditions resulted in the decomposition of some organic compounds, while others, e.g., RI = 1710 (64), were extracted in the greatest amount compared to HD, SLE, and MAE. In the case of this method, the conditions (exposure to microwave radiation, elevated temperature, and solvent) were too drastic for a plant with such delicate cell walls. Table 3 presents the results of comparisons using Student's t-tests for paired samples and Wilcoxon tests, the objective of which was to capture the significance of the differences between the solventless method and the individual extraction methods using various solvents.
Comparison of the Extraction Methods with the Solventless Method
The results obtained from the difference tests, presented in Table 3, show that there are slight differences between the solvent extraction methods compared to the solvent-free method (SPME).
However, Cohen's d indices, which quantify the magnitude of the observed difference between the means of the tested samples, showed that the SLE method produced effects representing a small (d > 0.20) or medium (d > 0.50) difference. In each of the samples in the SLE method, a slightly higher level of volatile compounds was observed compared to the solventless method. On the other hand, no appreciable differences were observed for the HD and MAE methods (d < 0.20). The difference effects obtained, despite the lack of statistical significance, indicated a potentially higher level of volatile compounds when using solvents with the SLE method compared to the control method without the use of a solvent.
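The paired comparisons described above can be sketched with scipy: a Student's t-test for dependent samples, a Wilcoxon signed-rank test, and Cohen's d for paired data (here taken as the mean difference divided by the standard deviation of the differences, one common convention). Both arrays of percentages are fabricated for illustration, not data from this study:

```python
# Paired comparison of two extraction methods on the same set of compounds.
# The percentages below are fabricated for illustration only.
import numpy as np
from scipy import stats

spme = np.array([42.7, 6.9, 12.3, 3.1, 8.8, 5.2])   # solventless method
sle = np.array([59.6, 9.8, 14.1, 4.0, 10.2, 6.7])   # solvent method

t_stat, p_value = stats.ttest_rel(sle, spme)         # paired Student's t-test
w_stat, p_wilcoxon = stats.wilcoxon(sle, spme)       # non-parametric check

diff = sle - spme
cohens_d = diff.mean() / diff.std(ddof=1)            # effect size for paired data

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

As in Table 3, a medium effect size can coexist with a non-significant t-test p-value when the sample is small, which is why the effect sizes are reported alongside the tests.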
In Tables 4-7, analogues of the Student's t-tests and Wilcoxon tests were performed with the division into individual groups of volatile compounds. Because only single measurements were available for the aliphatic, monoterpene, and monoterpenoid groups, these were excluded from the analyses. In turn, for the sesquiterpenoid (Table 4) and aromatic compounds (Table 5), only some comparisons were made due to the lack of data under the remaining conditions. When analyzing the effects obtained for the sesquiterpene compounds (Table 4), trend-level effects were found for SLE under conditions SLE5-2 and SLE5-3 compared to SPME (p = 0.080), indicating a higher level of sesquiterpene compounds with SLE extraction under these conditions. No further trend-level or statistically significant differences were found; however, possible differences were observed using Cohen's d index. The mean level of sesquiterpene compounds did not differ at all between the SPME method and HD or SLE under conditions SLE4-1, SLE4-2, and SLE4-3 (d < 0.20). The other SLE conditions possibly showed higher levels of sesquiterpene compounds compared to the solvent-free condition (d > 0.20). Furthermore, the fourth method (MAE) possibly showed a higher level of sesquiterpene compounds under the MAE2 condition compared to SPME, while under the other conditions (MAE1, MAE3, MAE4) the level of sesquiterpene compounds was slightly lower compared to SPME (d > 0.20).
In the case of the aromatic compounds (Table 5), no statistically significant differences were found under the comparable conditions. However, only in the MAE method were Cohen's d values below 0.20 obtained, indicating no appreciable difference between the means. In the case of HD and SLE, on the other hand, medium-sized difference effects (d > 0.50) were found under the tested conditions, indicating a possibly higher level of aromatic compounds compared to the solvent-free method.
As with the aromatics, the samples tested for sesquiterpenoid compounds did not show statistically significant differences compared to the SPME condition (Table 6). However, the differences between the measurements were found to be at least weak in each case (d > 0.20). Observing the averages, a higher level of volatiles is possible in the HD and MAE methods for conditions HD1, HD2, and MAE2 compared to the solventless method. For the SLE method, conditions SLE1-2 and SLE1-3 showed lower levels of sesquiterpenoid compounds compared to the SPME condition, while conditions SLE3-1, SLE3-2, and SLE3-3 showed potentially higher levels of sesquiterpenoids compared to the solvent-free condition.
As in the case of the general results, no statistically significant differences were found between the individual extraction methods and the solvent-free method for the unidentified compounds (Table 7). However, a trend-level difference between SPME and SLE4-1 (p = 0.068) was found, indicating a higher level of unidentified compounds in the SPME trial. Furthermore, only in the case of the HD method did Cohen's d index indicate no difference from the solvent-free method (d < 0.20). For the SLE and MAE methods, at least a weak difference effect was observed between the SLE1-1 and MAE4 measurements compared to the SPME method (d > 0.20). This may mean that unidentified compounds show a lower intensity level with the SLE and MAE extraction methods.
Differences between Solvents in Hydrodistillation
Figure 2 presents the mean levels of volatile compounds obtained with HD. Due to the lack of data, no comparisons were performed for the aliphatic, monoterpene, and monoterpenoid compounds using the Student's t-test for dependent samples and the Wilcoxon test, which was aimed at confirming the effect obtained.
There were no statistically significant differences between the solvents in HD, t(51) = 0.07; p = 0.944; d = 0.01. Furthermore, the lack of differences was confirmed by the rank test (p = 0.722). This means that, regardless of the solvent used, HD produced the same effect. Based on the analysis with the Wilcoxon test, no significant differences were found in terms of the unidentified compounds (p = 0.398) and the sesquiterpenes (p = 0.163). However, trend-level difference effects were confirmed for the aromatic (p = 0.066) and sesquiterpenoid (p = 0.068) compounds: the level of aromatic compounds was slightly higher with the HD2 method, while the level of sesquiterpenoids was higher with the HD1 method. In both cases, the effect size index indicated a medium difference between the means (d > 0.50). Table 8 presents the results of the analysis of differences within the SLE method. The analysis had a two-stage character, using the Friedman ANOVA test due to the small group sizes in the measurements. The analysis of the differences between the solvent groups showed no differences in any of the time intervals.
This means that, regardless of the solvent used, the intensity of the volatile compounds obtained was similar. In the case of differences within the solvent groups, a borderline difference effect was found for methanol (p = 0.050), indicating a higher intensity of volatile compounds at the 24 h measurement than at 48 h and 72 h.
Differences between Solvents in Microwave-Assisted Extraction
To verify the differences in the fourth method (MAE), an analysis of variance was performed in conjunction with an auxiliary Friedman ANOVA test, the results of which are shown in Figure 3. The analyses performed with the parametric test did not show statistically significant differences in the mean intensity of VOCs, F(3, 33) = 1.53; p = 0.224; η² = 0.12. However, the nonparametric analysis, which ignores errors in the measurement of the means, showed a statistically significant effect for the differences between the individual solvents in the MAE method, F = 20.01; df = 3; p < 0.001. The intensity of VOCs was significantly higher under the MAE1 condition than under the MAE3 (p = 0.007) and MAE4 (p = 0.001) conditions. Furthermore, a higher level of VOCs was observed under condition MAE2 compared to condition MAE4 (p = 0.034). However, no differences were found between conditions MAE2 and MAE3 (p = 0.197), MAE1 and MAE2 (p = 1.000), or MAE3 and MAE4 (p = 1.000).
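The Friedman ANOVA used above to compare the four MAE solvent conditions measured on the same samples can be illustrated with scipy. All values below are fabricated for the example (one list per solvent condition, with positions corresponding to the same samples); the solvent labels in the comments are likewise only assumptions:

```python
# Friedman test across four related conditions (same samples, four solvents).
# All VOC percentages below are fabricated example values.
from scipy import stats

mae1 = [18.2, 15.1, 19.7, 16.4, 17.9]  # e.g., diethyl ether (assumed label)
mae2 = [17.5, 14.8, 18.9, 16.0, 17.2]
mae3 = [12.1, 10.4, 13.6, 11.2, 12.5]
mae4 = [9.8, 8.7, 11.0, 9.1, 10.3]     # e.g., methanol (assumed label)

stat, p = stats.friedmanchisquare(mae1, mae2, mae3, mae4)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

A significant Friedman result only indicates that at least one condition differs; the pairwise comparisons reported above (e.g., MAE1 vs. MAE3) require post hoc tests with corrected p-values.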
Plant Material
The plant material of Calypogeia azurea was collected in 2021 at Szklarska Poręba (latitude 50°47'52.9"N, longitude 15°31'41.8"E) at altitudes ranging from 700 to 1200 m ASL. The plant material was stored and transported in airtight plastic containers. The collection temperature was 10-12 °C (ambient temperature) and the transfer temperature was 15-16 °C; the pressure was approximately 1013 hPa (ambient pressure). Only green plants that did not show signs of drying were eligible for collection and further research. In natural habitats, liverwort samples were initially identified on the basis of their morphological structure. The research was carried out on fresh material.
Fused silica fibers coated with a divinylbenzene/Carboxen/polydimethylsiloxane (DVB/CAR/PDMS) stationary phase (Supelco, Bellefonte, PA, USA) were used for the SPME analysis.
The TriPlus RSH (Thermo Scientific, Waltham, MA, USA) automatic sample injector was used to ensure that the samples were dispensed with sufficient reproducibility.
HD was carried out using a Deryng apparatus consisting of a 500 mL round-bottom flask, a condenser, and a heating bowl (Lab-szkło, Kraków, Poland), recommended by the VI edition of the Polish Pharmacopoeia of 2002. The Ethos one (Milestone, Sorisole, Italy) was used for MAE.
Methods
The experimental conditions for the various extraction methods are shown in Table 9. The specific extraction conditions and methods used in this study are outlined below.
Extraction by Using Headspace Solid-Phase Microextraction
The conditions of sorption and desorption were optimized by selecting the type of stationary-phase-coated fibers, the amount of biological material, the time, and the temperature. A fresh amount of 5 mg of Calypogeia azurea was placed in a 1.7 cm³ screw-capped vial with a silicone/Teflon membrane. The vial was then heated at 50 °C and solid-phase microextraction of the headspace was carried out for 60 min. Desorption was performed at 250 °C for 10 min.
Extraction by Using Solvents
An amount of 1 g of plant material was weighed and crushed with an agate mortar and pestle. The crushed material was placed in glass bottles, and 10 cm³ of solvent was added according to increasing polarity: n-hexane, diethyl ether, methylene chloride, ethyl acetate, and methanol. The samples were allowed to macerate for 24 h, 48 h, and 72 h. After this time, the solvent was filtered and injected into the GC-MS.
Extraction by Using Microwave-Assisted Extraction
An amount of 5 g of fresh plant material was weighed and placed in Teflon bombs, and 50 cm³ of diethyl ether was added. The entire process was carried out with the Ethos One microwave-assisted extraction system in 3 steps: a ramp time of 10 min to reach 70 °C, a hold time of 20 min at 70 °C, and cooling for 10 min. The final step in the preparation of the sample for analysis was the quantitative transfer of the samples to a 1.7 cm³ screw-cap vial.
Hydrodistillation Extraction in the Deryng Apparatus
An amount of 5 g of fresh plant material was weighed and placed into a 500 cm³ round-bottom flask; we then added 250 cm³ of distilled water and 1 cm³ of solvent. For HD, two solvents, n-hexane and m-xylene, were used to collect the extract. The sample flask was heated for 3 h after reaching the boiling point. The vapors were condensed by means of a cold refrigerant. After 180 min of extraction into n-hexane, the essential oil was transferred to vials and kept at 5 °C until gas chromatography-mass spectrometry analyses were performed. HD in the Deryng apparatus was carried out according to the Polish Pharmacopoeia VI [30].
GC-MS Analysis
The analysis of the composition of the compounds present in the extracts was performed by GC-MS. For liquid samples, the injection volume was 1 µL. The sample was injected in split mode (1:25). Samples analyzed with the SPME technique were injected in splitless mode. The injector temperature in both cases was 250 °C. Helium was used as the carrier gas at a flow rate of 1.0 mL/min. The oven temperature was programmed from 60 to 230 °C at 4 °C/min and then kept isothermal at 230 °C for 40 min. The ISQ QD mass detector was operated at 70 eV in EI mode over the m/z range 30-550; the transfer line was held at 250 °C.
The constituents were identified by comparing their MS spectra with those of the literature, reference compounds, computer matching with the NIST 11, and data obtained from the NIST Chemistry WebBook databases, the Mass Finder 4 library, the Adams library databases, and the Pherobase databases [31,32]. The identification of the compounds was verified by Kovats' retention indices. The Kovats retention indices were determined relative to a homologous series of n-alkanes (C7-C40) under the same operating conditions. The quantitative data of the components were obtained by integrating the TIC chromatogram and calculating the relative percentage of the peak areas. Each sample was analysed three times.
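The retention-index calculation described above can be sketched in a few lines. This is a minimal illustration of one common variant, the linear (van den Dool–Kratz style) index used for temperature-programmed GC; the paper does not state which exact formula was applied, and the alkane retention times below are invented placeholders, not values from this study.

```python
def linear_retention_index(t_x, alkane_rts):
    """Linear retention index for temperature-programmed GC.

    t_x        -- retention time of the unknown compound
    alkane_rts -- dict mapping carbon number n -> retention time of the n-alkane
    """
    carbons = sorted(alkane_rts)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rts[n], alkane_rts[n_next]
        if t_n <= t_x <= t_next:
            # Interpolate linearly between the bracketing n-alkanes.
            return 100 * (n + (t_x - t_n) / (t_next - t_n))
    raise ValueError("retention time outside the alkane ladder")

# Illustrative alkane ladder (retention times in minutes), not measured values.
ladder = {9: 10.0, 10: 12.5, 11: 15.0}
print(linear_retention_index(11.25, ladder))  # halfway between C9 and C10 -> 950.0
```

A compound eluting exactly at an n-alkane's retention time receives an index of 100·n, and intermediate elution times interpolate between neighboring alkanes.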
Statistical Analysis
The results obtained from three separate tests were averaged and expressed as a mean ± standard deviation. In order to verify the differences between the extraction methods with respect to the specificity of volatile compounds, statistical analyses were performed using IBM SPSS Statistics 27 software. The statistical methods used included the Student's t-test for dependent samples and its nonparametric equivalent Wilcoxon test, as well as an analysis of variance for multiple measurements with its nonparametric equivalent Friedman's ANOVA. Parametric and nonparametric tests were used in parallel as a result of the varying sample sizes of the extraction methods. A threshold of α = 0.05 was used as the significance level.
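As a minimal sketch of the paired comparison underlying the Student's t-test for dependent samples used above, the helper below computes the paired t statistic from scratch (in practice, library routines such as scipy.stats.ttest_rel and scipy.stats.wilcoxon would be used). The solvent intensity values are synthetic, not data from this study.

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Student's t statistic and degrees of freedom for dependent (paired) samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    # t = mean of differences / standard error of the mean difference
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1

# Synthetic paired VOC intensities for two hypothetical solvents.
hexane = [5.1, 4.8, 5.3, 5.0, 4.9]
xylene = [5.0, 4.9, 5.2, 5.1, 4.8]
t, df = paired_t(hexane, xylene)
print(round(t, 3), df)
```

The resulting t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the p-value.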
Conclusions
The presented studies are the first to concern the volatile organic compounds (VOCs) formed in the oily bodies of Calypogeia azurea. These studies demonstrated the advantages and optimal use of commonly used extraction techniques.
Based on the research carried out, it is clear that, despite many studies on the effectiveness of individual techniques for extracting metabolites from plant material, there is no certain and universal answer as to which of the currently available techniques is the most effective in practice. The literature likewise offers no clear guidance on sample preparation procedures for plant material that maximize the attainable concentrations of metabolites. It is therefore advisable to study the effectiveness of the most commonly used extraction techniques for isolating metabolites from plant material.
The article compares four methods to extract volatile organic compounds present in the oily bodies of Calypogeia azurea liverwort.
On the basis of the conducted experiments, it has been established that one of the best methods for determining the VOCs found in the species Calypogeia azurea is SPME. Unfortunately, this method can only be used to determine the qualitative composition. SPME extraction allows the identification of low-boiling compounds that co-elute with the solvents used in other methods. Its main advantages, in addition to simplicity, speed, and low cost, include its "green" character: the technique requires no toxic solvents, which is especially important nowadays. SPME also requires only a small amount of research material, which is especially important when analyzing samples that are difficult to obtain.
On the other hand, if further research using the obtained essential oils is needed, HD is the best extraction method: it combines a relatively short extraction time with small amounts of solvents. The small sample amounts required for the extraction are beneficial for material that is difficult to obtain. HD uses the Deryng apparatus, which is inexpensive, and the extraction costs are low. The disadvantage of this process is undoubtedly the possibility of artifacts forming during heating.
The SLE method gives the greatest relative amount of one group of compounds, the sesquiterpenes, but it uses quite large amounts of solvents. The extracts obtained in this way are too diluted to give a reliable result when analyzed by GC-MS.
GC-MS analysis allowed for the identification of 43 components, which, depending on the extraction method used, constituted 31.64% to 97.02% of the obtained product. The MAE method resulted in the creation of large amounts of artifacts. Although quick and simple, this extraction technique is too harsh for delicate plants, such as liverworts.
It seems justified to further develop the HD process to obtain essential oils from liverworts. Furthermore, HD is indeed the primary technique and SPME is a complementary method for this type of sample.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27092911/s1, Table S1: Volatile compounds detected in the samples analysed by SPME and HD; Table S2: Volatile compounds detected in the samples extracted by n-hexane; Table S3: Volatile compounds detected in the samples extracted by diethyl ether; Table S4: Volatile compounds detected in the samples extracted by methylene chloride; Table S5: Volatile compounds detected in the samples extracted by ethyl acetate; Table S6: Volatile compounds detected in the samples extracted by methanol; Table S7: Volatile compounds detected in the samples analysed by MAE.
Conflicts of Interest:
The authors declare no conflict of interest.
Learning Contextualized Knowledge Structures for Commonsense Reasoning
Recently, knowledge graph (KG) augmented models have achieved noteworthy success on various commonsense reasoning tasks. However, KG edge (fact) sparsity and noisy edge extraction/generation often hinder models from obtaining useful knowledge to reason over. To address these issues, we propose a new KG-augmented model: Hybrid Graph Network (HGN). Unlike prior methods, HGN learns to jointly contextualize extracted and generated knowledge by reasoning over both within a unified graph structure. Given the task input context and an extracted KG subgraph, HGN is trained to generate embeddings for the subgraph's missing edges to form a "hybrid" graph, then reason over the hybrid graph while filtering out context-irrelevant edges. We demonstrate HGN's effectiveness through considerable performance gains across four commonsense reasoning benchmarks, plus a user study on edge validness and helpfulness.
Introduction
Commonsense reasoning (CSR) is essential for natural language understanding (NLU) systems to function effectively in the real world (Apperly, 2010). For example, to answer the question in Figure 1, one must already know that printing requires using paper. Yet, since commonsense knowledge is self-evident to humans, it is rarely stated in natural language (Gunning, 2018). This makes it hard for neural pre-trained language models (PLMs) (Devlin et al., 2019) to learn commonsense knowledge from corpora alone (Marcus, 2018).
Many works compensate by augmenting PLMs with commonsense KGs, allowing such KG-augmented models to make predictions via multi-hop reasoning over the KG (Lin et al., 2019).
Despite the growing success of KG-augmented models, obtaining helpful KG facts for a given task instance remains challenging. Existing models assume that using either KG-extracted edges (Ma et al., 2019; Feng et al., 2020; Yasunaga et al., 2021), PLM-generated edges (to address KG edge sparsity), or a late fusion of both is sufficient. Both extraction and generation can produce unhelpful edges, so the model must decide which edges to focus on during reasoning. Since extracted and generated edges are derived from the same set of concepts (nodes), modeling the interactions between extracted and generated edges jointly within a shared KG structure could provide stronger signal for identifying contextually relevant edges. However, current models do not leverage this information.
In response, we propose a new KG-augmented model: Hybrid Graph Network (HGN). Unlike prior models, HGN learns to jointly contextualize extracted and generated knowledge by reasoning over both within a unified graph structure. Given the task input (i.e., context) and an extracted KG subgraph, HGN is trained to generate embeddings for the subgraph's missing edges to form a "hybrid" graph, then reason over the graph (to update model parameters) while filtering out context-irrelevant edges. HGN achieves this primarily through edge reweighting, which downweights irrelevant edges, and edge-weighted message passing, which attenuates irrelevant edges' impact on reasoning.
Our extensive experiments demonstrate that HGN improves performance over all baselines across four CSR benchmarks. In particular, among comparable methods, HGN ranks first on the Com-monsenseQA (Talmor et al., 2019) and Open-bookQA (Mihaylov et al., 2018) leaderboards. Plus, our user studies show that humans find HGNfiltered edges to be more valid and helpful than the heuristically extracted edges used in prior work.
Problem Statement
We consider CSR tasks, like question answering (QA), which can benefit from commonsense KGs. To solve CSR tasks, we focus on KG-augmented models, where a PLM is augmented with a commonsense KG. Given a CSR task, let x be the task's text input, f be the model, and f (x) be the model output. We denote a KG as G = (V, R, E). V, R, and E are the sets of nodes (concepts), relations, and edges (facts), respectively, in the KG. An edge is a directed triple of the form e = (h, r, t) ∈ E, where h ∈ V is the head node, t ∈ V is the tail node, and r ∈ R is the relation between h and t. Let [·, ·] denote concatenation of text or vectors.
As illustrated in Figure 2, a KG-augmented model f has three main components: text encoder f_text, graph encoder f_graph, and scoring function f_score. First, s = f_text(x; θ_text) is the encoding of x, where f_text is usually a Transformer PLM. Second, as supporting evidence, an x-specific graph G' = (V', R', E') is constructed from G (Figure 1). Typically, this is done via heuristic extraction by selecting V' ⊆ V as the concepts mentioned in x, R' ⊆ R as the relations between concepts in V', and E' ⊆ E as the edges involving V' and R'. If G does not provide enough knowledge to build a good G', then new edges are sometimes added to G' using a PLM-based generator. We call G' the contextualized KG. g = f_graph(G', s; θ_graph) is then the joint encoding of G' and s. Third, the model output is computed as f(x) = f_score([s, g]; θ_score), where f_score is usually a multilayer perceptron (MLP). Existing KG-augmented models mainly differ in their design of f_graph, reasoning over the KG through message passing (Schlichtkrull et al., 2018a; Feng et al., 2020; Yasunaga et al., 2021) or edge/path aggregation (Ma et al., 2019).

Figure 2: High-level schematic of a typical KG-augmented model for CSR. In KG-augmented models, text encoder f_text tends to be a Transformer PLM, and scoring function f_score is usually an MLP. Meanwhile, KG-augmented models generally vary more in their graph encoder f_graph and graph construction.
While KG-augmented models can be applied to any CSR task involving KGs (e.g., natural language inference), we consider multi-choice QA in this work. Given a question q and set of candidate answers {a i }, the QA model's goal is to predict a plausibility score ρ(q, a) for each a ∈ {a i }, so that the highest score is predicted for the correct answer. To use KG-augmented models for commonsense QA, we set x = [q, a] and ρ(q, a) = f (x).
Overview
As illustrated in §2 and Figure 2, given question-answer pair (q, a) for an instance of the multi-choice QA task, the KG-augmented QA model first obtains a (q, a)-contextualized KG G' from the full KG G. Edges in G' can be extracted directly from G or generated using a PLM-based generator. Then, the model transforms (q, a) and G' into text encoding s and graph encoding g, respectively. Finally, s and g are used to predict (q, a)'s plausibility.
However, a contextualized KG may have low knowledge recall or precision, hindering the QA model's access to relevant knowledge. Low recall can stem from missing edges in G, low precision can be the result of bad annotations in G, and both can be caused by noisy edge extraction or generation when building G'. HGN addresses these issues by reasoning over both extracted and generated edges within a unified graph structure. To improve recall, HGN generates new edges via a PLM-based generator, then initializes a hybrid contextualized KG containing both extracted and generated edges. Note that edge generation is generally (q, a)-agnostic and may produce irrelevant edges that hurt knowledge precision. To improve precision, HGN learns to reweight edges in the hybrid graph and reason over the hybrid graph via edge-weighted message passing. This is akin to learning the hybrid graph's structure and reduces the impact of irrelevant edges on reasoning. Additionally, to further encourage downweighting of noisy edges during reasoning, HGN is trained with entropy regularization on the learned edge weights.

Figure 3: Overview of HGN. After building a hybrid graph of extracted and generated edges (§3.2), HGN reasons over the hybrid graph by updating the node embeddings V, hybrid edge embeddings E, and adjacency matrix A at each layer (§3.3). Darker edges indicate higher weights. Red variables are updated in the previous step.
The overall learning objective of HGN is defined as L = L_task + β·L_edge, where L_task is the loss for the downstream task (in our work, QA), L_edge is the entropy regularization term for edge weights, and β ≥ 0 is a loss weight hyperparameter. In the following subsections, we first explain how the contextualized KG G' is constructed as a hybrid graph, including its node embeddings V, hybrid edge embeddings E, and adjacency matrix A^0 (§3.2). Next, we show how HGN uses edge-weighted message passing to update V, E, and A^0 for L layers (Figure 3), yielding a refined adjacency matrix A^L of learned edge weights (§3.3). Finally, we describe how L_task is computed using s and g, while L_edge is calculated using A^L (§3.4).
Hybrid Graph Construction
Node Embeddings. The first step of retrieving knowledge from G is concept grounding, which involves identifying text spans in (q, a) that match nodes in V. We define V' as the set of all concepts mentioned in (q, a), comprising the question concepts and the answer concepts, respectively. Each node v_i ∈ V' is represented by an embedding v_i ∈ V, which can be initialized using BERT (Devlin et al., 2019) or TransE (Bordes et al., 2013).
Hybrid Edge Embeddings. In G', we loosen the definition of an edge to be any node pair e = (v_i, v_j). We build fully-connected edges between question and answer nodes in G'; the set of edges in G' is thus defined as all pairs connecting a question concept and an answer concept. After concept grounding, we need an edge embedding e_(i,j) ∈ E for each edge e_(i,j). Let R be the relation embeddings for all relations in R, obtained using TransE. Each extracted edge can be initialized with the embedding of its labeled relation. However, due to edge sparsity, many edges do not have labeled relations and cannot be initialized this way. Meanwhile, despite PLMs' limitations in commonsense, they have shown some ability to encode commonsense knowledge (Davison et al., 2019; Petroni et al., 2019) and aid KG completion. Hence, we generate edge embeddings for all unlabeled edges by feeding each unlabeled edge into a GPT-2 (Radford et al., 2019) based generator f_gen(·, ·). This is further explained in the "Edge Embedding Generation" paragraph.
In summary, edge embeddings are computed in a hybrid way: extracted edges take the embedding of their labeled relation, while unlabeled edges take a generated embedding.

Edge Embedding Generation. Inspired by recent work in PLM-based commonsense KG completion, we frame edge generation as text generation. First, for each extracted edge (h, r, t) ∈ E, we tokenize its node pair (h, t) and relation label r. Let h̄, r̄, and t̄ be the respective token sequences of h, r, and t. Also, let $ be the special separator token. Next, for each tokenized extracted edge, we train a GPT-2 model (Radford et al., 2019) to autoregressively generate the concatenated sequence [h̄, $, t̄, $, h̄, r̄, t̄].
During inference, we only have unlabeled edges, so the trained model is prompted with the node pair and completes the missing relation, from which the edge embedding is obtained. Alternatively, we consider another edge generation approach, in which f_gen(·, ·) is trained to generate a relational path connecting v_i to v_j, then pool the path into an edge embedding. The rationale for this approach is that such paths have been shown to contain useful semantic information about the relation between v_i and v_j (Neelakantan et al., 2015; Das et al., 2017).
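The training-sequence format [h̄, $, t̄, $, h̄, r̄, t̄] described above can be illustrated with plain string assembly. This is a sketch only: the actual separator token and GPT-2 subword tokenization are abstracted away here, and the example triple is hypothetical.

```python
SEP = "$"  # stands in for the special separator token

def format_training_sequence(h, r, t):
    """Build the target sequence [h, $, t, $, h, r, t] for an extracted edge (h, r, t)."""
    return " ".join([h, SEP, t, SEP, h, r, t])

def format_inference_prompt(h, t):
    """At inference only (h, t) is known; the model must complete the 'h r t' suffix."""
    return " ".join([h, SEP, t, SEP])

seq = format_training_sequence("printer", "requires", "paper")
print(seq)  # printer $ paper $ printer requires paper
```

The repeated head entity after the second separator conditions the generator so that, at inference, prompting with only "h $ t $" lets it complete the relation between the two concepts.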
Hybrid Graph Reasoning
The procedure described in §3.2 yields a hybrid graph, containing unweighted edges between all question-answer node pairs. Constructing this hybrid graph may improve edge recall, but does not address precision. Some edges in the initial hybrid graph may be irrelevant to the question-answer pair, either due to noisy edge extraction or generation. HGN is thus designed to downweight irrelevant edges by converting the unweighted graph into a weighted one, then learning to reweight all hybrid edges during reasoning ( Figure 3).
Learnable Adjacency Matrix. Although A^0 is a binary adjacency matrix, HGN populates it with learned edge attention weights and iteratively updates them over L layers of reasoning. We denote the adjacency matrix at layer ℓ as A^ℓ, where 0 ≤ A^ℓ_(i,j) ≤ 1. Updating A^ℓ can be viewed as softly contextualizing the hybrid graph's structure with respect to (q, a).
Edge-Weighted Message Passing. Following the general Graph Network (GN) formulation proposed by Battaglia et al. (2018), HGN's graph reasoning module consists of layer-wise node-to-edge (v → e) and edge-to-node (e → v) message passing functions. However, we equip HGN with a modified version of GN's edge-to-node message passing function, in which each edge's weight is used to rescale information flow on that edge. Intuitively, an edge's weight signifies the edge's relevance for reasoning about the given task instance. We also use text encoding s as global context throughout message passing. Formally, HGN's update rule at layer ℓ is given in Equation (1). In node-to-edge message passing, the embedding of each edge (v_i, v_j) ∈ E' is updated as h_(i,j), a function of (v_i, v_j)'s constituent nodes and the given context s. Through s, the hybrid graph is strongly contextualized with respect to (q, a). Then, h_(i,j) is used to compute edge score w_(i,j), which measures the edge's relevance to s. Each edge score is globally normalized across all edges in the graph to produce edge attention weight A^ℓ_(i,j), so that low-scoring edges are softly pruned by receiving close-to-zero weight.
We use global edge attention (i.e., normalizing across E') instead of local edge attention (i.e., normalizing across the neighborhood N_j) because local edge attention assumes at least one edge in N_j is relevant, which may not be true. For example, given an irrelevant or incorrectly grounded concept, none of its edges will be helpful, and so all nodes in its neighborhood should be excluded from influencing the reasoning process. To demonstrate the advantage of global edge attention, we empirically compare our default HGN architecture to an HGN variant based on Graph Attention Network (GAT) (Velickovic et al., 2018), which uses local edge attention, in our experiments.
In edge-to-node message passing, the embedding of each node v_j ∈ V' is updated as h_j, a function of v_j's neighboring edges. For each edge neighbor, edge weight A^ℓ_(i,j) is used to rescale the edge's influence on v_j's embedding update.
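A minimal pure-Python sketch of one such layer, showing the two ideas above: global (graph-wide) softmax normalization of edge scores, and edge-weighted aggregation into node updates. The learned MLPs for scoring/messaging and the global context s are replaced here by caller-supplied placeholder functions; this is not the paper's exact parameterization.

```python
import math

def global_softmax(scores):
    """Normalize edge scores across ALL edges in the graph (global edge attention)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def message_passing_layer(V, edges, score_fn, msg_fn):
    """One HGN-style layer: node-to-edge scoring, global reweighting,
    then edge-weighted edge-to-node aggregation.

    V        -- dict node -> feature vector (list of floats)
    edges    -- list of (i, j) directed node pairs
    score_fn -- maps (v_i, v_j) to a scalar relevance score
    msg_fn   -- maps (v_i, v_j) to a message vector for node j
    """
    scores = [score_fn(V[i], V[j]) for i, j in edges]
    A = global_softmax(scores)                       # learned adjacency weights
    new_V = {n: list(v) for n, v in V.items()}
    for (i, j), w in zip(edges, A):
        msg = msg_fn(V[i], V[j])
        # Each edge's weight rescales its influence on the target node's update.
        new_V[j] = [x + w * m for x, m in zip(new_V[j], msg)]
    return new_V, A

# Toy graph: two nodes, two directed edges; placeholder score/message functions.
V = {"a": [1.0], "b": [0.0]}
edges = [("a", "b"), ("b", "a")]
new_V, A = message_passing_layer(V, edges, lambda vi, vj: sum(vi), lambda vi, vj: vi)
```

Because the softmax is taken over every edge in the graph rather than per neighborhood, an edge attached to an irrelevant concept can receive near-zero weight even if it is that node's only edge, which is the motivation for global edge attention given above.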
Learning Objective
Task Loss. After L layers of message passing, we obtain the final node and edge embeddings. Node embeddings are aggregated into v_agg via attentive pooling with s as the query vector. Edge embeddings are aggregated into e_agg via edge-weighted sum pooling. The final graph encoding is then given as g = [v_agg, e_agg]. The probability of a being the answer to q is calculated as ρ̃(q, a) ∝ exp(ρ(q, a)), where ρ(q, a) = f_score([s, g]; θ_score). We use cross-entropy loss for the QA classification task, so the loss for each (q, a) with label y is: L_task(ρ̃(q, a; θ), y) = −y log ρ̃(q, a; θ). (2)

Entropy Regularization. To encourage the model to be decisive during edge reweighting, we use a regularization term to penalize nondiscriminative edge weights. In an extreme case, a blind model will assign the same weight to all edges, degenerating G' into an unweighted graph. This is a failure mode, since G' is likely to contain mostly irrelevant edges, and we want the model to focus on the helpful edges. Therefore, via L_edge, we train the model to minimize the entropy of the edge weight distribution (i.e., make the distribution more skewed), in order to maximize the informativeness of the predicted edge weights. Lower entropy means the model has higher certainty about edges' relevance to the given task instance, such that the model will discriminatively judge some edges as being much more relevant than others. L_edge is computed as the entropy of the learned edge weight distribution A^L.

Joint Learning. We jointly optimize L_task and L_edge, so graph reasoning and structure can be jointly learned. The full learning objective is:
L(θ) = E_{(q, a, y) ∼ X_train} [ L_task(ρ̃(q, a), y) + β · L_edge(A^L(q, a)) ],

where θ = {θ_text, θ_graph, θ_score} is the set of all learnable parameters, and X_train is the training set. We train our model end-to-end by minimizing L(θ) with the RAdam optimizer.
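The two loss terms can be sketched directly from their definitions: cross-entropy on the predicted answer probability, and the entropy of the normalized edge weight distribution as the regularizer. The β value and the toy weight distributions below are illustrative, not taken from the paper.

```python
import math

def entropy_regularizer(edge_weights):
    """L_edge: entropy of the (globally normalized) edge weight distribution.
    Lower entropy means more decisive, discriminative edge reweighting."""
    return -sum(w * math.log(w) for w in edge_weights if w > 0)

def task_loss(prob_correct):
    """Cross-entropy loss for the correct answer: L_task = -log p."""
    return -math.log(prob_correct)

def joint_loss(prob_correct, edge_weights, beta=0.1):
    # beta is a hypothetical loss-weight value for illustration.
    return task_loss(prob_correct) + beta * entropy_regularizer(edge_weights)

uniform = [0.25, 0.25, 0.25, 0.25]   # indecisive: maximal entropy
skewed  = [0.97, 0.01, 0.01, 0.01]   # decisive: low entropy
print(entropy_regularizer(uniform) > entropy_regularizer(skewed))  # True
```

Minimizing the regularizer pushes the model away from the uniform-weight failure mode described above and toward skewed distributions that single out a few relevant edges.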
Experimental Setup
We evaluate our proposed model on four multiple-choice commonsense QA datasets: CommonsenseQA, CODAH, OpenBookQA, and QASC. We build our graph reasoning model on top of retrieval-augmented methods on the leaderboard: "AristoRoBERTa" for OpenBookQA and "RoBERTa (2-step IR)" for QASC. In this way, we can study if strong retrieval-augmented methods can still benefit from KG knowledge and our HGN framework.
Compared Methods
We compare our model with a series of KG-augmented methods and different graph encoders.

Models Using Extracted Facts. We consider seven models that only use extracted facts. RN (Santoro et al., 2017) builds the graph with the same node set as our method but extracted edges only, and calculates the graph vector by pooling over the node-pair representations. The model of Wang et al. (2019b) softly aligns the nodes in the question and answer and pools over all matching nodes to get g. KagNet (Lin et al., 2019) uses an LSTM to encode relational paths between question and answer concepts and pools over the path embeddings for the graph encoding.
Models Using Extracted and Generated Facts.
We consider two models that use both extracted facts and generated facts. RN + Link Prediction differs from RN by only considering the generated relation (predicted using TransE (Bordes et al., 2013)) between question and answer concepts. PathGenerator learns a path generator from paths collected through random walks on the KG. The learned generator is used to generate paths connecting question and answer concepts. g is calculated as the concatenation of the pooled vector over the generated paths and the pooled vector over the extracted paths.
Our Model's Variants. As described in §3.2, the edge embedding can be computed either as a relation embedding or a path embedding. We name these two variants as HGN (w/ RelGen edges) and HGN (w/ PathGen edges) respectively.
Results
Performance Comparisons. Tables 1, 3, 4 show performance comparisons between our models and baseline models on CommonsenseQA, CODAH, OpenBookQA and QASC. We clearly find that models with stronger text encoders perform better (i.e., RoBERTa > BERT-Large > BERT-Base). For all text encoders, our HGN shows consistent improvement over baseline models on all datasets. The improvements over all baselines are statistically significant under most settings, demonstrating the effectiveness of HGN both with and without retrieved evidence. We also submit our best model to the leaderboards for CommonsenseQA and OpenBookQA. For CommonsenseQA (Table 2), our HGN ranks first among comparable approaches and shows remarkable improvement over PathGenerator and the LM finetuning approach (ALBERT (Lan et al., 2020)).

[Table fragment — graph encoder accuracies: (Schlichtkrull et al., 2018b) 65.56 / 82.42; GAT (Velickovic et al., 2018) 65.88 / 82.78; GN (Battaglia et al., 2018) 65.52 / 82.06; GconAttn (Wang et al., 2019a) 65.17 / 82.35; MHGRN (Feng et al., 2020) 65.92 / 83.07; PathGenerator 64…]

Higher-ranking models either use stronger text encoders or leverage additional data resources. Specifically, UnifiedQA (Khashabi et al., 2020) and T5-3B (Raffel et al., 2020) are based on T5. They have 11B and 3B parameters respectively, making them impractical to finetune in an academic setting. ALBERT+DESC-KCR and ALBERT+KD additionally use concept definitions from dictionaries. ALBERT+DESC-KCR and ALBERT+KCR leverage "question concept" annotations, which are used during the construction of the CommonsenseQA dataset and allow the model to learn shortcuts that don't generalize to other datasets. ALBERT+KRD retrieves sentences from the OMCS corpus (Liu and Singh, 2004) as input. These methods are therefore not comparable with our model. For OpenBookQA (Table 5), our model ranks first among all models using AristoRoBERTa as the text encoder.
User Study on Learned Structures
To assess HGN's ability to refine graph structure, we compare the graph structure before and after being processed by HGN. Specifically, we sample 30 questions with their answers from CommonsenseQA's development set and ask 5 human annotators to evaluate the graph output by GN (with adjacency matrix A_extract and extracted facts only) and by HGN (with adjacency matrix A^L). We manually binarize A^L by removing edges with weight lower than 0.01. Given a graph, for each edge (fact), annotators are asked to rate its validness and helpfulness. The validness score is rated as a binary value in a context-agnostic way: 0 (the fact does not make sense), 1 (the fact is generally true). The helpfulness score measures whether the fact is helpful for solving the question and is rated on a 0 to 2 scale: 0 (the fact is unrelated to the question and answer), 1 (the fact is related but doesn't directly lead to the answer), 2 (the fact directly leads to the answer). Note that the percentage of valid edges can be understood as the precision of graph edges. For a given instance, the number of valid edges is proportional to the recall of the edges. We also include another metric named "prune rate", calculated as 1 − (# edges in binarized A^L) / (# edges in A^0), which measures the portion of edges assigned very low weights (softly pruned) during training and is only applicable to HGN.
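The prune-rate metric is a one-line computation once the learned edge weights are available. The 0.01 threshold below matches the binarization described above, while the example edge weights themselves are invented.

```python
def prune_rate(edge_weights, threshold=0.01):
    """Fraction of edges softly pruned: learned weight below the binarization threshold."""
    kept = sum(1 for w in edge_weights if w >= threshold)
    return 1 - kept / len(edge_weights)

weights = [0.30, 0.20, 0.005, 0.002, 0.15]  # illustrative learned edge weights
print(prune_rate(weights))  # 2 of 5 edges fall below 0.01 -> 0.4
```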
The mean ratings for 30 pairs of (GN, HGN) graphs by 5 annotators are reported in Table 6. The Fleiss' Kappa (Fleiss, 1971) is 0.51 (moderate agreement) for validness and 0.36 (fair agreement) for helpfulness. The graph refined by HGN has both more edges and denser valid edges compared to the extracted one. The refined graph also achieves a higher average helpfulness score. These all indicate that our HGN learns a superior graph structure with more helpful edges and fewer noisy edges, which improves over previous works that rely on extracted and static graphs. Detailed cases can be found in Appendix §C.
Related Work
Commonsense QA. Commonsense QA is challenging because the required commonsense knowledge is seldom given in the question-answer context or encoded in the PLM's parameters. Thus, many works obtain this knowledge from external sources (e.g., KGs, corpora). While Lv et al. (2020) show that KGs and corpora can provide complementary knowledge, our paper focuses on improving the use of KG knowledge. KG knowledge can be acquired in different ways, either from KG-extracted edges (Ma et al., 2019; Feng et al., 2020; Yasunaga et al., 2021), PLM-generated edges, or both. KG-augmented models mainly differ in how they encode KG knowledge, using message passing (Schlichtkrull et al., 2018a; Feng et al., 2020) or edge/path aggregation (Ma et al., 2019). The most relevant prior work coarsely combines extracted and generated knowledge via late fusion, while HGN encodes both types of knowledge within a unified graph. Besides, that work uses RN to pool over a set of paths for graph encoding, while HGN reasons over the graph via message passing and edge reweighting.
Graph Structure Learning. Instead of assuming a fixed graph structure, a number of graph models learn the graph structure with respect to the downstream task. Some models learn to discretely select edges for the graph (i.e., hard pruning). Franceschi et al. (2019), among others, sample the graph structure from a predicted probabilistic distribution with differentiable approximations. Norcliffe-Brown et al. (2018) calculate the relatedness between any pair of nodes and only keep the top-k strongest connections for each node to construct the edge set. Sun et al. (2019) start with a small graph and iteratively expand it with retrieving operations. Others learn to reweight edges in a fully connected graph (i.e., soft pruning). Yu et al. (2019), among others, propose heuristics for regularizing edge weights. Hu et al. (2019) use the question embedding to help predict edge weights. Unlike other edge reweighting models, HGN operates over a hybrid graph of both extracted and generated edges, while updating edge weights with respect to node, edge, and text features.
Conclusion
In this paper, we propose HGN, a KG-augmented model for CSR. To address KG edge sparsity and noisy edge extraction/generation, HGN learns to jointly contextualize extracted and generated knowledge by reasoning over both within a unified graph structure. We justify HGN's design by showing that HGN improves performance on various CSR benchmarks and user studies. In future work, we plan to increase the graph's relation expressiveness by incorporating open relations, plus make the edge extraction/generation process more dependent on the reasoning context.
Comprehensive Analysis of Connectivity and Permeability of a Pore-Fracture Structure in Low Permeability Seam of Huainan–Huaibei Coalfield
The connectivity and permeability of coal seam pore structures control the occurrence and migration of coalbed methane. Coal samples from the Huainan–Huaibei coalfield were used to reconstruct three-dimensional pore models and an equivalent pore network model, and the characteristic parameters of the pore structure were analyzed statistically. The pore structure of the coal reservoir was analyzed from multiple dimensions and angles. Quantitative analysis shows that a representative elementary volume of 500 × 500 × 500 voxels was the most suitable experimental volume. The Renlou sample had poorer pore connectivity in the Y-axis direction than the other samples. Large-volume connected pores dominated the pore systems. In terms of pore connectivity, the coal samples from the Liuzhuang and Qidong regions were better connected than those from the other regions, and the pore connectivity of the Liuzhuang coal samples was the best. In terms of permeability, the Liuzhuang sample was more permeable than the other three samples, with the best permeability in the Y-axis direction. For all combinations of the different types of throats, the shorter the throat and the greater its equivalent radius, the better the permeability; conversely, the permeability worsens. During gas injection production, the closer the gas injection area was to the gas injection well, the poorer the connectivity and the lower the permeability became over time. Near the production area, before CO2 reached it, the fracture porosity and effective connected porosity of the coal reservoir increased over time. Once CO2 reached the production area, the change in its connected pore structure was consistent with that in the gas injection area. With this study, the coal seam pore structure was characterized on the microscale, and a comprehensive analysis of the coal reservoir pore connectivity and permeability was completed.
The study results are significant for the exploration and development of coalbed methane in the Huainan–Huaibei coalfield.
INTRODUCTION
Coal is a complex porous medium that contains not only solid components but also a large number of pores and fractures. [1,2] The distribution of the pore and fracture structure of coal stores coalbed methane and allows for its migration. It also directly affects the accumulation and migration of fluids and plays a very important role in the exploration and development of coalbed methane. [3,4] The pores and fractures not only restrict the gas content of the coal seam but also affect its economic viability. [5] Therefore, whether coalbed methane is extracted directly or via gas injection (CO2-ECBM), the connectivity of the pore and fracture system has always been the main factor affecting the efficiency of CO2 injection and CH4 production. [6] The analysis of the core parameters of pores and fractures is the key to studying the connectivity and structure of the coal reservoir during gas injection production (CO2-ECBM).
The innovation of this research is mainly reflected in the following aspects: (1) the pore structure, connected pore structure, and isolated pore structure of the coal reservoir are visually reconstructed to build the equivalent pore network model; (2) the pore structure analysis from a multidimensional perspective was completed by using a variety of algorithms combined with the representative elementary volume (REV); (3) the pore connectivity analysis of the coal samples was completed by using coordination parameters with the constructed equivalent pore network model; (4) the permeability of the pore structure in different directions of the coal samples was analyzed experimentally with the equivalent pore network model. Lastly, a permeability analysis of the pore structure of the coal samples was proposed using combinations and collocations of the different types of throats. This study can provide a theoretical basis for the exploration and development of CBM in the Huainan–Huaibei coalfield and can enrich the development of digital core technology.
2.1. Coal Sample Preparation.
Coal samples were obtained from the Liuzhuang, Renlou, Panyi, and Qidong coal mines in the Huainan–Huaibei area, with sample numbers LZ, RL, PY, and QD, respectively (Figure 1). The coal of the LZ mine is a high-quality gas coal with medium ash, low to extra-low sulfur, low to extra-low phosphorus, and high calorific value. The coal of the RL mine is a high-quality gas coal. The PY mine produces coal of excellent quality, with "three lows and one high," that is, low ash, low sulfur, low phosphorus, and high calorific content; it is an excellent coking and thermal coal. The QD coal contains gas coal, fat coal, and 1/3 coking coal, with a small amount of anthracite. The coal samples were collected from fresh underground working faces and packaged and transported under the relevant national and international standards (GB/T 6948-2008 and GB/T 8899-2013). The collected samples were wrapped with tissue paper and plastic wrap to prevent water exposure and oxidation.
2.2. CT Experiment. X-ray micron CT uses conical X-rays to penetrate the object, magnifies the image through the objective lens, and reconstructs a 3D model from a large number of X-ray attenuation images obtained via a 360° rotation of the precision sample stage. The pore structure and relative density of the core can be obtained without damaging the sample from the attenuation of the X-ray energy as it penetrates the object. In this experiment, the X-ray CT used the Xradia 520 Versa CT scanning system produced by Carl Zeiss of Germany (Figure 2). This system consists of an X-ray source, a precision sample stage, a high-resolution detector, a data processing system, and a controller system. It meets the requirements of high-precision nondestructive testing of small-diameter samples.
To allow for further sample preparation and scanning work, the collected fresh coal samples were prepared into cylinders of certain specifications according to the requirements of the scanning experiment for the testing unit.After an X-ray CT scan, some parameters can be obtained from the four tested samples (Table 1).
2.3. 3D Visualization Reconstruction of CT Images.
The scanned CT slices can be used to develop a 3D visualization model via segmentation and combination of the coal matrix, pores, and minerals. The main steps include noise reduction, threshold selection, image segmentation, representative volume unit analysis, pore analysis, and equivalent pore network model reconstruction (Figure 3). Median-filter noise reduction preserves the integrity of the pores and the transition between the pores and the matrix so that the pores can be distinguished from the other components. [33] The watershed algorithm can be used for threshold selection of the pore, matrix, and mineral phases; [34] the threshold selection is done via the Interactive Thresholding operation of the AVIZO software. The representative elementary volume (REV) is a basic method used to quantify the scale effect. When the coal size is larger than the REV scale, its physical and mechanical characteristic parameters are usually stable. [35] At this size, the characteristics of the coal body within the REV represent the characteristics of the entire coal body. This reduces computation time and improves the accuracy of the experimental data.
One thousand consecutive CT sections in the middle of the coal sample were used for this study. Different pixel sizes were cropped according to the conditions of the coal sample's own CT sections. The watershed algorithm was used to select the threshold value. After thresholding, the processed slices were compared to the original two-dimensional slices to check whether some pores should be deleted; at this stage, the image segmentation threshold was locally fine-tuned. For the REV, a 500 × 500 × 500 voxel cube was selected and extracted from the middle of the coal sample. Finally, via segmentation and combination of the coal matrix, the pores and minerals were visualized through the segmentation operation of the AVIZO software (Figure 4) to achieve a three-dimensional reconstruction of the CT images. The AVIZO pore network model (PNM) module was used to construct the equivalent pore network model and complete the corresponding analysis.
2.4. Pore Structure Characteristic Parameters. For this study, the coal rock was divided into pore and solid parts. Since two-dimensional slices were used to develop a three-dimensional reconstruction of the pores through the AVIZO software, the connectivity of the coal and rock pores can be studied from both two- and three-dimensional perspectives. The two-dimensional parameters mainly concern the surface porosity, while the three-dimensional parameters mainly concern the pore volume, throat parameters, and coordination number.
2.4.1. Surface Porosity. The two-dimensional thin-section images obtained by the CT scan are displayed in the form of pixels and include the pore area, the coal matrix area, and the mineralized area. In this study, the pore and solid areas were considered (Figure 5). The surface porosity is the ratio of the pore area to the overall area in the image (eq 1). Studying the surface porosity of the sample helps us understand the change in the local pore structure of the coal. [36]

θ = S_pore / (S_pore + S_solid)    (1)

where θ is the surface porosity, S_pore is the pore area, and S_solid is the solid area.
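As a minimal sketch of eq 1, the surface porosity of a segmented slice can be computed directly from a binary pixel array. The array convention here (1 = pore, 0 = solid) is an assumption for illustration, not the AVIZO export format.

```python
import numpy as np

def surface_porosity(binary_slice: np.ndarray) -> float:
    """Surface porosity of a segmented 2D slice: pore area / total area,
    i.e. eq 1 with areas measured in pixels (1 = pore, 0 = solid)."""
    s_pore = np.count_nonzero(binary_slice)  # S_pore in pixels
    s_total = binary_slice.size              # S_pore + S_solid in pixels
    return s_pore / s_total

# Example: a 4 x 4 slice with 2 pore pixels -> porosity 2/16 = 0.125
slice_ = np.zeros((4, 4), dtype=np.uint8)
slice_[1, 2] = slice_[2, 3] = 1
print(surface_porosity(slice_))  # 0.125
```

Applied slice by slice along an axis, this yields the direction-dependent porosity profiles discussed in Section 3.1.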
2.4.2. Pore Volume. The pores in the coal seam serve as the storage, migration, and production pathways of coalbed gas. Therefore, the volume and distribution of the coal seam pores affect the exploration and development of the coalbed gas. Using the maximum-sphere algorithm, each pore is assumed to be a sphere of volume equal to its own, so the equivalent pore size can be obtained from eq 2: [37]

D_eq = (6 V_pore / π)^(1/3)    (2)

where D_eq is the equivalent pore size (μm) and V_pore is the volume of a single pore fracture (μm³).
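A sketch of this equal-volume-sphere conversion, assuming pore volumes are given in cubic microns:

```python
import math

def equivalent_diameter(v_pore: float) -> float:
    """Equivalent pore diameter D_eq from a pore volume V_pore,
    assuming the pore is a sphere of equal volume: V = (pi/6) * D**3."""
    return (6.0 * v_pore / math.pi) ** (1.0 / 3.0)

# A pore with the volume of a 10 um sphere maps back to D_eq = 10 um
v = math.pi / 6.0 * 10.0**3   # volume in cubic microns
print(round(equivalent_diameter(v), 9))  # 10.0
```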
2.4.3. Throat Parameters. The throat is the channel that connects the pores and serves as the main conduit through which fluid flows from one pore to another. The throat parameters of interest are the length and radius of the throat. The throat length is the path length of fluid migration: the longer the throat, the worse the fluid migration. The throat radius is the width of the channel: the larger the throat radius, the better the connectivity.
2.4.4. Coordination Number.
The coordination number refers to the number of throats connected to each pore. [38,39] The coordination number controls the flow and production of fluid in the pores: the larger the coordination number, the better the connectivity of the pore. When the coordination number is 1, the pore has no through connectivity and is called a dead-end pore (Figure 6).

Absolute permeability: flow is assumed to be laminar in all directions (Poiseuille flow).
To calculate the absolute permeability of the model, the network is assumed to be filled with only one phase. During steady-state flow of an incompressible fluid, mass conservation for each pore body is described as (eq 2)

∑_j q_ij = 0

where the summation is performed over all pores j connected to pore i, and q_ij represents the flow rate between pore i and pore j.
Under laminar flow conditions, the relation between the pressure drop and the flow rate is linear (eq 3):

q_ij = g_ij (P_i − P_j)

where g_ij represents the conductance of the throat between pore i and pore j. Since the conducting throats are represented by cylindrical pipes of radius r_ij and length l_ij, the hydraulic conductance is given by Poiseuille's law (eq 4), where μ is the fluid viscosity:

g_ij = π r_ij⁴ / (8 μ l_ij)

The pressure difference imposed across the network results in a linear system of equations (eqs 3 and 4) that is solved numerically. This leads to the matrix equation G·P = S, where G is the symmetric N × N matrix of conductances and N is the number of pores in the network, P is a vector of size N containing the pressure in each pore, and S is a vector of size N set by the pressure boundary conditions at the inlet and outlet of the system. The total flow rate can then be computed as Q = ∑ (P_i − P_j) g_ij over each pair of pores i, j intersecting an arbitrary cross-section of area A. The permeability k of the network is finally deduced from Darcy's law (eq 5):

k = Q μ L / (A ΔP)

where ΔP is the pressure difference applied across the boundary (input pressure − output pressure) and L is the length of the network in the flow direction.
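The steps above (Poiseuille conductance per throat, a per-pore mass balance assembled into G·P = S, and Darcy's law on the total flow) can be sketched on a toy network. The chain geometry, throat dimensions, and cross-sectional area below are illustrative assumptions, not values from this study; only the boundary pressures and viscosity follow the text.

```python
import numpy as np

mu = 1.0e-3  # fluid viscosity, Pa*s (0.001 Pa*s, as in the simulations)

def conductance(r, l, mu=mu):
    """Hydraulic conductance of a cylindrical throat (Poiseuille, eq 4)."""
    return np.pi * r**4 / (8.0 * mu * l)

# Toy network: inlet -- pore0 -- pore1 -- outlet, three identical throats
r_t, l_t = 20e-6, 200e-6            # throat radius and length, m (assumed)
g = conductance(r_t, l_t)
p_in, p_out = 0.13e6, 0.10e6        # boundary pressures, Pa (as in the text)

# Mass balance sum_j g_ij (P_i - P_j) = 0 for the two internal pores
G = np.array([[2 * g, -g],
              [-g, 2 * g]])
S = np.array([g * p_in, g * p_out])
P = np.linalg.solve(G, S)           # internal pore pressures

Q = g * (p_in - P[0])               # total flow through the inlet throat
A = (500e-6) ** 2                   # cross-sectional area, m^2 (assumed)
L = 3 * l_t                         # network length in the flow direction
k = Q * mu * L / (A * (p_in - p_out))  # Darcy's law (eq 5), permeability m^2
print(k)
```

For this single chain the solve reduces to the closed form k = π r⁴ L / (24 l A); the matrix formulation is what scales to the thousands of pores and throats reported in Table 4.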
3.1. 2D Slice Pore Analysis.
The information contained in the two-dimensional thin-section image includes the coal matrix area, pore area, and mineralized area. The change in the local pore structure of the coal can be understood through a study of its surface porosity. If the surface porosity is 0 or close to 0, the pores in this part are not connected or have poor connectivity.
Figure 7 shows how the surface porosity changes in different directions. The porosity of coal samples in the Huainan–Huaibei area varies in a complex way along the Y-axis: the porosity of the LZ and RL coal samples in the Y-axis direction is close to 0, which means that the connectivity of these samples in the Y-axis direction is poor. The porosity of the PY and QD coal samples varies over a large amplitude along the Y-axis, indicating that the pore structures of these two samples in the Y-axis direction are relatively complex. It follows that the pore development of coal samples in the Huainan–Huaibei area along the Y-axis direction is relatively complex. At the same time, Figure 7 shows that the surface porosity in the different directions fluctuates around 0.1%, reflecting the low-permeability character of coal samples in the Huainan–Huaibei area.
3.2. Characterization and Analysis of the Pore Structure. 3.2.1. 3D Pore Structure Reconstruction. The pore structure of the coal samples includes two types: connected and isolated pores. These two types of pores were therefore extracted and reconstructed separately in this study. After image segmentation and visualization, the coal matrix, pores, and minerals were extracted separately to visualize the separation of the pore and solid parts. The axis-connectivity algorithm of the AVIZO software can be used to extract the structure of the connected pores once the pore structure is obtained. The isolated pores can then be extracted by excluding the connected pores from the total using an image algorithm (Figure 8).
3.2.2. Quantitative Analysis of the Pore Structure. Isolated pores generally store coalbed methane, while connected pores generally serve as its migration and production channels. It is therefore very important to analyze both kinds of pores quantitatively for the exploration and development of coalbed methane. Statistically, the isolated pores can be divided into three types: A, B, and C. Type A pores have a volume of less than 10⁴ μm³; type B pores have a volume of 10⁴–10⁵ μm³; and type C pores are larger than 10⁵ μm³ in volume. Table 2 shows the percentage of isolated pore volume for each type. The connected pores can likewise be divided into three types: D, E, and F. Type D pores have a volume of less than 10⁶ μm³; type E pores have a volume of 10⁶–10⁷ μm³; and type F pores have a volume greater than 10⁷ μm³. Table 3 shows the percentage of connected pore volume for each type. Whether the pores are isolated or connected, the large-volume pores occupy the dominant position in the pore system. These coal samples therefore not only store coalbed methane well but also contain good migration channels for releasing it. The connected pores are the main factor affecting the connectivity of the coal samples, so the volume proportion of connected pores should be assessed to evaluate sample connectivity. Of the four samples, the PY sample has a small proportion of connected pore volume and thus shows poor connectivity.
3.3. Characterization and Analysis of the Equivalent Pore Network Model. 3.3.1. Equivalent Pore Network Model Extraction. The equivalent pore network is a simplified model of the complex connected pore structure in coal rock. The connected pore structure reconstructed by CT was converted into a pore network model (PNM) using the AVIZO software, after which an equivalent pore network model, the ball-and-stick model, was developed. This model uses the maximum-sphere algorithm, [40] in which spheres occupy the pore-fracture space in the porous medium so that the pore fractures and throats can be identified. A sphere is called a maximum sphere if it fills the pore-fracture space and is not fully contained by any other sphere. A local maximum sphere is defined as a pore body, and the link between the largest spheres is called the throat; these are used to build the equivalent pore network model (Figure 9).
3.3.2. Parameter Analysis of the Equivalent Pore Network Model. The structure of the connected pores can be used to extract the equivalent pore network model, and the AVIZO software can quantitatively count some characteristic parameters. The LZ sample has 395 connected pores and 1166 throats; the RL sample has 836 connected pores and 3363 throats; PY has 582 connected pores and 1262 throats; and QD has 1491 connected pores and 4827 throats. Table 4 lists additional parameters.
The coordination number, throat radius, and throat length have a significant effect on the connectivity of the coal samples and must be analyzed statistically. Statistical analyses show that the coordination numbers of the LZ, RL, PY, and QD coal samples are distributed within 1–20, indicating that many throats connect the pores and that the pores have good connectivity. The dominant coordination numbers are LZ = 4–6, RL = 4–9, PY = 3–6, and QD = 2–8 (Figure 10), indicating that all four samples have good connectivity from the perspective of the pores. Since pores with a coordination number of 1 are dead-end pores, samples with many such pores have reduced connectivity. Sample PY has a large number of dead-end pores, so its connectivity is poor compared with the other samples. The equivalent pore network model (Figure 9) shows that the connected pores of samples LZ and QD are better distributed in space, with sample LZ the best. The average coordination numbers of samples LZ and QD are higher than those of the other samples, indicating good connectivity, while the changes in the coordination numbers in Figure 10 reflect the quantitative equilibrium between the pores and the throats. Samples with lower variation in the coordination number show better balance. The variation in the coordination numbers of samples LZ and QD is low, and the pores in these samples are better connected; the variation for sample PY is large, and this sample has poor balance and pore connectivity. Based on the above coordination numbers and the overall perspective, the connectivity of samples LZ and QD was good, while that of samples RL and PY was poor. Accordingly, samples LZ and QD were grouped together, as were samples RL and PY, and the connectivity of the samples from the different locations was analyzed further. Half of the proportion of throat parameters was selected based on statistical analyses. Based on Figure 11, throats in the LZ sample with a radius of 10–60 μm accounted for 74.5% of the total, with the proportion in each interval relatively balanced. Throats in the QD sample with a radius of 10–40 μm accounted for 71.4%, with the number of throats in the 10–30 μm range the most prominent. Throats in the LZ sample with a length of 200–500 μm accounted for 58.5% of the total, while throats in the QD sample with a length of 200–400 μm accounted for 59.8%. An average analysis of the throat parameters of the LZ and QD samples indicates that the connectivity of the LZ sample is better than that of the QD sample. Similarly, the RL sample shows better connectivity than the PY sample.
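As a hypothetical sketch, the coordination numbers and dead-end pores (coordination number 1) analyzed above can be tallied from a pore network's throat list; the `(pore_i, pore_j)` pair format is an assumed representation of the network output, not the AVIZO data format.

```python
from collections import Counter

def coordination_numbers(throats):
    """Number of throats attached to each pore id, given the network's
    throat list as (pore_i, pore_j) pairs."""
    counts = Counter()
    for i, j in throats:
        counts[i] += 1
        counts[j] += 1
    return counts

# Tiny example network: pore 1 is a junction, pores 0, 2, 4 are dead ends
throats = [(0, 1), (1, 2), (1, 3), (3, 4)]
cn = coordination_numbers(throats)
dead_ends = [p for p, n in cn.items() if n == 1]  # coordination number 1
print(dict(cn))           # {0: 1, 1: 3, 2: 1, 3: 2, 4: 1}
print(sorted(dead_ends))  # [0, 2, 4]
```

The same tally, run over the thousands of throats in Table 4, gives the coordination-number distributions of Figure 10 and the dead-end pore counts used to compare the samples.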
3.4. Permeability Analysis Based on the Pore Network Model. Differences in the connectivity of the coal samples affect their permeability, so understanding the permeability of the coal samples enhances the connectivity analysis. The PNM module of the AVIZO software can simulate the permeability of the coal samples in different directions and obtain the relevant parameters using its built-in algorithm (Table 5). Throats, as the connection channels between the pores, not only affect the permeability of the coal reservoir but also directly affect fluid migration within it. The AVIZO software allows changes in the fluid flow to be observed intuitively (Figure 12). In Figure 12, blue represents small flows and red represents large flows in the X, Y, and Z directions of the respective coal samples. The input pressure was set to 0.13 MPa, the output pressure to 0.1 MPa, and the fluid viscosity to 0.001 Pa·s.
According to the results of the permeability simulation experiments, the fluid seepage of the LZ and QD samples is better than that of the other two samples in terms of spatial distribution; this reflects their better connectivity and is consistent with the results of the earlier analysis. Figure 12 shows that, compared with the other samples, the structure between the throats of the LZ sample is relatively simple, with a low degree of bending, so its fluid permeability is relatively good. The permeability of the LZ sample is better than that of the other samples, and its permeability in the Y-axis direction is the best (Table 5). The permeability of the RL sample in the X-axis direction is also excellent.
4.1. Selection and Application of the REV before Pore Structure Analysis. The REV in this study was selected based on the central part of the slice and the central part of the coal sample (Figure 13). After analysis, cubes with a REV size of 500 × 500 × 500 voxels were selected for further analyses of the LZ, RL, PY, and QD coal samples (Figure 14).
First, the cube size was selected based on the central part of the 2D slice. Cube extraction was then carried out based on the distribution of the continuous slices. The final size of the REV was determined from the changes in the porosities of extracted coal samples of different sizes.
When the cube size of sample LZ was smaller than the voxels of part III (500 × 500 × 500), the changes in the porosity in parts I and II were rather disordered. When the cube size of the LZ sample was within part III, the porosity was relatively stable, and it only began to fluctuate slightly when it reached part IV. The stable voxel cube size of 500 × 500 × 500 was therefore selected for the study, and the same procedure was applied to the other samples. When the voxel cube size of the other samples was larger than 500 × 500 × 500, the pore-fracture structure also tended to be stable, so the 500 × 500 × 500 cube size was selected for those samples as well.
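The REV selection procedure above (porosity of centred cubes of increasing size, accepted once the value stabilizes) can be sketched as follows; the synthetic random volume and the 5% stability tolerance are illustrative assumptions, not the study's criteria.

```python
import numpy as np

def rev_size(volume, sizes, tol=0.05):
    """Smallest cube size at which the porosity of a cube centred in
    `volume` stops changing (relative change below `tol`), i.e. the REV."""
    def porosity(s):
        c = [d // 2 for d in volume.shape]
        h = s // 2
        cube = volume[c[0]-h:c[0]+h, c[1]-h:c[1]+h, c[2]-h:c[2]+h]
        return np.count_nonzero(cube) / cube.size

    phis = [porosity(s) for s in sizes]
    for prev, cur, s in zip(phis, phis[1:], sizes[1:]):
        if prev > 0 and abs(cur - prev) / prev < tol:
            return s
    return sizes[-1]  # porosity never stabilized; fall back to largest cube

# Synthetic example: random field with ~1% porosity (low-permeability coal)
rng = np.random.default_rng(0)
vol = (rng.random((200, 200, 200)) < 0.01).astype(np.uint8)
print(rev_size(vol, [50, 100, 150]))
```

On the real CT volumes, the same stabilization criterion is what singles out the 500 × 500 × 500 voxel cube of part III.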
4.2. Connectivity of the Coal Samples Based on Connected Pore Classification and Quantitative Analysis. The pores in coal can be divided into connected and isolated pores. The connected pores provide the storage and migration pathways for coalbed methane, so a better understanding of them helps us understand coal reservoir connectivity. Whether methane is mined directly or extracted via gas injection (CO2-ECBM), the connected pores in the coal reservoirs directly affect the economic viability of the coalbed methane. A good understanding of the connected pores of coal samples in reservoirs is therefore required.
Connected pores can be further divided into fractures and effectively connected pores, but at present there is no definite boundary between fractures and pores. These two features were therefore studied together in this work, and the distribution relationship between them was also analyzed (Figure 15).
Figure 15a generally shows a large volume of connected pores, which represent the main migration channels of fluid in coal and affect coal seam gas exploitation. Figure 15b indicates storage sites as well as migration channels for the coal seam gas in the coal reservoirs, but mainly storage. The distribution of connected pores in Figure 15c is ideal for the exploration and development of coalbed gas: these pores have good methane storage potential and also provide good migration channels for extracting coalbed methane. Such channels not only make the coal seam structure more stable but also allow for communication throughout the coal seam. Proper pores and migration channels are very important for the development of coalbed methane and for evaluating coalbed methane resource potential.
The connected pores of the coal samples were classified by their pore equivalent radius. Pores with a radius of 10–30 μm occupy a dominant position in all four coal samples, indicating that pores in this range occur frequently in the low-permeability coal seams of Huainan–Huaibei (Figure 16). The main pore type of the LZ sample is the connected pore with a radius of 10–60 μm. The connected pore diameters of the RL sample are relatively evenly distributed across the radius ranges, while the connected pore radii of the PY and QD coal samples range from 10 to 40 μm. Based on the earlier analysis results, connected pores with radii of 10–40 μm form a better-connected pore system, and coal samples with a pore distribution in this range show better pore connectivity.
4.3. Role of the Coordination Number in the Connectivity of the Coal Reservoir Pore Structure.
Based on the previous analyses, the basic units of the equivalent pore network model are the reservoir pores and the throats that act as migration channels between the pores. In Figure 17, the black circles represent the pores, the red columns represent the throats, and the dashed blue lines represent the fluid movement within the sample. P_in is the inlet pressure, P_out is the outlet pressure, and the blue arrows show the direction of the fluid movement.
Figure 17a,b shows isolated pores and pores connected only inside the coal sample; the fluid activity area cannot penetrate through the coal rock, so there is no through connectivity. Figure 17c shows a connected pore system that communicates with the outside: fluid can pass through the coal sample via the internal channel, giving good connectivity.
Figure 17d–f shows the influence of the quantity balance between throats and pores on connectivity. In Figure 17d, the numbers of pores and throats are poorly balanced: fluid enters from the left, but its final active area is limited, indicating poor connectivity. In Figure 17e, the numbers of pores and throats are generally balanced, and fluid can easily flow along a channel in a certain direction but cannot move through the entire region of the coal sample. Figure 17f shows a good quantity balance between the pores and the throats: fluid entering the sample from the left side can eventually move through the whole sample, indicating good connectivity.
The above method was used to evaluate the connectivity of the LZ, RL, PY, and QD samples. The three-dimensional model constructed from the statistical data, combined with the equivalent pore-fracture network model, shows that the LZ sample performs better than the other three samples from both the local and overall perspectives, meaning that the LZ sample has ideal connectivity. In the throat schematic (Figure 18), light blue denotes level 0; each step up from light blue adds 1, and each step down subtracts 1.
The larger the equivalent radius of the throats, the better the permeability of the coal sample, and the better the overall result among the possible arrangements and combinations of the various types of throats (Figure 18d–f). Figure 18a–c shows that the smaller the equivalent radius of the throats, the worse the permeability of the coal sample. The overall permeability of coal samples in which throats with a large equivalent radius connect to throats with a small equivalent radius is better than that of coal samples with the opposite arrangement (Figure 18g–i,j–l).
When considering only the throat thickness, the permeability of the coal sample with the throat distribution in Figure 18e is the best, while that in Figure 18c is the worst. When considering the collocation of throat thicknesses, the distribution in Figure 18k is the best and that in Figure 18i is the worst.
Even though permeability analyses help in the study of pore connectivity, this study has certain limitations. Real-world permeability is complicated by changes in the pore structure, which is influenced by changes in reservoir stress during coal seam gas exploitation.
4.5. Evolution of the Connected Pore Structure over Time in CO2-ECBM. The connected pores can be further divided into fractures and effectively connected pores. This allows analysis of the structural evolution, connectivity, and permeability advantages and shortcomings of the connected pores in CO2-ECBM (Figure 19). In Figure 19, gray represents the coal matrix, yellow the effectively connected pores, white the fractures, A the gas injection well, and B the production well.
Near the gas injection area, the pore structure changes of the coal reservoirs are mainly influenced by the effective reservoir stress and the competitive adsorption of CH4 and CO2 (Figure 19). CO2 injection can reduce the effective stress of the coal reservoirs and improve the fracture porosity in the vicinity of the gas injection wells, which improves the connectivity and permeability. At the same time, however, the injected CO2 competes with CH4 for adsorption. Since the coal matrix preferentially adsorbs CO2, large volumes of CO2 are adsorbed, which causes a great expansion of the coal matrix. This expansion reduces the effective porosity of the coal matrix, which in turn reduces the connectivity and permeability of the coal sample. The expansion of the coal matrix affects the connected pore structure of the reservoir more strongly than the effective reservoir stress does. Over time, the coal matrix develops poorer connectivity, and the permeability in areas close to the gas injection well decreases.
Near the production area, which the CO2 does not reach, the coal reservoir permeability is mainly influenced by the effective reservoir stress and the matrix contraction caused by CH4 desorption. During the initial stages of coalbed methane extraction, the boundary pressure relief of the production wells increases the effective stress on fractures in the reservoir. This reduces the fracture porosity, which further reduces the connectivity and permeability. The boundary pressure relief of the production wells also causes the desorption of CH4, which contracts the coal matrix and improves the effective porosity. However, the migration of free CH4 in the fractures increases the pressure in the fractures and delays their closure. Over time, the influence of CH4 desorption on the porosity of the coal reservoir becomes dominant. The combined effect of CH4 desorption and the effective stress improves the fracture porosity and the effectively connected porosity of the coal reservoir. This means that once the CO2 reaches the production zone, the changes in its connected pore structure are consistent with those of the gas injection zone.
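The competing effects described above — pore-pressure changes opening or closing fractures versus Langmuir-type sorption strain swelling or shrinking the matrix — are often sketched with a Palmer–Mansoori-style fracture porosity model. A minimal illustration (the compressibility, strain, and pressure values below are hypothetical, not fitted to these coal samples):

```python
def fracture_porosity(phi0, p, p0, c_f, eps_max, p_L):
    """Simplified Palmer-Mansoori-style fracture porosity:
    a pressure (effective-stress) term that opens fractures as p rises,
    minus a Langmuir-type sorption-strain term that closes them."""
    stress_term = c_f * (p - p0)
    swelling_term = (eps_max / phi0) * (p / (p + p_L) - p0 / (p0 + p_L))
    return phi0 * (1.0 + stress_term - swelling_term)

# Hypothetical CO2 injection raising pore pressure from 5 to 8 MPa:
phi0 = 0.02
phi = fracture_porosity(phi0, p=8.0, p0=5.0, c_f=0.01, eps_max=0.01, p_L=3.0)
print(phi < phi0)  # swelling dominates: porosity falls near the injection well
```

With these toy parameters the sorption-strain term outweighs the stress term, reproducing the qualitative behavior described above near the injection well.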
CONCLUSIONS
This study constructed a 3D model and an equivalent pore network model of coal samples from the Huainan−Huaibei coalfield. The pore structure of the coal reservoirs was analyzed from a multidimensional perspective and from a global to a local perspective.
1. The three-dimensional visualization of the pore structure indicated poor connectivity in some samples compared to others. Statistical analyses of the volume parameters of the connected and isolated pores show that large-volume connected pores dominate their pore systems, while connected pores with a radius of 10−30 μm dominate in the coal samples.
2. The coordination numbers of the samples ranged from 1−20. Based on the equivalent network model and the three-dimensional visualization of the pore structure, the connectivity of the coal samples can be analyzed. The connectivity of the LZ samples was better than that of the other samples, both locally and for the coal samples as a whole. The connectivity of the RL and PY samples was relatively poor, with the PY sample being the worst.
3. Based on the equivalent network model, it is possible to simulate fluid permeability experiments in different directions of the same sample. The simulations showed that the Liuzhuang sample is more permeable than the other samples, and the permeability was best in the Y-axis direction. For any combination of different types of throats, the shorter the throat and the greater its equivalent radius, the better the permeability.
4. After further dividing the connected pores, the evolution of the connected pore structure during CO2-ECBM could be determined. Near the gas injection area, the connectivity and permeability decreased over time close to the gas injection well. Near the production area, which the CO2 did not reach, the fracture porosity and the effectively connected porosity of the coal reservoir increased over time. Where the CO2 reached the production area, the changes in its connected pore structure were consistent with those in the gas injection area.
Figure 1. Test sample sampling point distribution. The map of China was downloaded from Amap, and the other photos were taken by Zhangfei Wang. (a) LZ, (b) RL, (c) PY, and (d) QD.
Figure 2. Schematic diagram of a CT scan.
Figure 3. Schematic diagram of the extraction process of the pore model, with labels. (a) Original CT images; (b) denoise processing; (c) threshold selection; (d) selected REV; (e) labeling analysis; (f) equivalent pore network model. The photographs were taken by Zhangfei Wang.
2.4.5. Principle of the Reservoir Permeability Model. The coal reservoir permeability simulation in this study is based on the equivalent pore network model. This model is combined with the built-in AVIZO software algorithm to simulate and analyze the absolute permeability in three different directions: X, Y, and Z. The absolute permeability simulation principle of the software is as follows:
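The absolute-permeability simulation in pore-scale tools of this kind is, in essence, a Darcy-law calculation: a single-phase flow field is solved across the sample, and the permeability follows from the computed flux. A minimal sketch of that final step, assuming Darcy flow (the AVIZO algorithm itself may differ in detail, and all numbers below are hypothetical):

```python
def absolute_permeability(Q, mu, L, A, dP):
    """Darcy's law rearranged for permeability: k = Q * mu * L / (A * dP),
    with Q in m^3/s, mu in Pa*s, L in m, A in m^2, dP in Pa; k is in m^2."""
    return Q * mu * L / (A * dP)

MILLIDARCY = 9.869233e-16  # m^2 per millidarcy

# Hypothetical water flow through a 1 mm cube under a 10 kPa pressure drop:
k = absolute_permeability(Q=1.0e-12, mu=1.0e-3, L=1.0e-3, A=1.0e-6, dP=1.0e4)
print(k, k / MILLIDARCY)  # permeability in m^2 and in millidarcy
```

Running the same calculation with flow imposed along X, Y, and Z gives the directional permeabilities compared later in the paper.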
4.4. Permeability Analysis Based on the Combination of Different Types of Throats. The permeability of the coal reservoir directly reflects the quality of the coal sample connectivity and helps in the analysis of coal seam connectivity. The AVIZO software can extract the connected pore structure and build an equivalent pore network model based on this structure. The AVIZO model allows for visual observation of the distribution, migration, and flow changes of the fluids in the connected pores based on color changes. Red indicates a high flow rate, and blue indicates a low flow rate. In the equivalent pore network model, the black circles represent the pores, and columns of different colors and sizes represent the different flow rates of the throats (Figure
Figure 16. Percentage of the equivalent pore radius.
Figure 17. Effect of coordination number on the connectivity of the pores. (a−c) Difference in coordination number; (d−f) difference in the balance of the number of pores and throats.
Figure 18. Analysis of the throat distribution. (a−c) Collocation between the long and short types of fine throats; (d−f) collocation between the long and short types of coarse throats; (g−l) collocation between long and short types of fine and coarse throats.
Table 1. Dimensions of Two Coal Samples Used in This Study
Table 2. Volume Percentage of Isolated Pores
Table 3. Volume Percentage of Connected Pores
Table 4. Quantitative Parameter Statistics of the Connected Coal Sample Pore Fissure Network Model
Table 5. Absolute Permeability of the Connected Pore Structures in Different Directions
On the role of CFRP reinforcement for wood beams stiffness
In recent years, carbon fiber composites have been increasingly used in different ways to reinforce structural elements. Specifically, the use of composite materials as reinforcement for wood beams under bending loads requires attention to several aspects of the problem, such as the number of composite layers applied to the wood beams. Consolidation studies of composites show that they are made by bonding resin-impregnated fibrous material onto the surface of various elements, to restore or increase the load-carrying capacity (bending, shear, compression or torsion) without significant loss of rigidity. Fibers used in building applications can be glass, aramid or carbon. Elements that can be strengthened include concrete, brick, wood, steel and stone and, in structural terms, beams, walls, columns and floors. This paper describes an experimental study designed to evaluate the effect of composite material on the stiffness of wood beams. It offers a summary of the fundamental principles of the analysis, design and use of composite materials. The type of reinforcement used on the beams is carbon fiber reinforced polymer (CFRP) sheet and plate, together with an epoxy resin for bonding all the elements. Structural epoxy resins remain the primary choice of adhesive to form the bond to fiber-reinforced plastics and are the generally accepted adhesives in bonded CFRP-wood connections. The advantages of using epoxy resin in comparison with common wood-laminating adhesives are their gap-filling qualities and the low clamping pressures that are required to form the bond between the carbon fiber plates or sheets and the wood beams. Mechanical tests performed on the reinforced wood beams showed that CFRP materials can increase the flexural displacement and load-carrying capacity of the beams.
Observations of the experimental load-displacement relationships showed that the bending strength increased for wood beams reinforced with CFRP composite plates and sheets compared to those without CFRP reinforcement. The main conclusion of the tests is that the tensioning forces allow the beam to carry its maximum load for a while, which is particularly useful in a real construction: in case of beam overload there is time to take strengthening measures, and under a catastrophic event (an earthquake) the construction remains partially functional. The experiments have shown that increasing the resistance of wood constructions with composite materials works well. The solution is easy to implement and has low costs.
Introduction
Carbon fiber is used in many areas where a combination of high strength and low weight is required. The strongest carbon fibers are approximately five times stronger than steel and considerably lighter. Other useful properties are their ability to withstand high temperatures and their inertness. Carbon fiber is a textile consisting mainly of carbon. It is produced by spinning various carbon-based polymers into fibers, treating them to remove most of the other substances, and weaving the resulting material into a fabric. This is usually embedded in plastic - typically epoxy - to form carbon fiber reinforced plastic. The physical properties of composite materials (CFRP) are generally not isotropic but typically anisotropic (they differ depending on the direction of the applied force or load). The majority of composite materials use two constituents: a binder or matrix and a reinforcement. The reinforcement is stronger and stiffer, forming a sort of backbone, while the matrix keeps the reinforcement in a set place [1]. For instance, the stiffness of a composite panel will often depend upon the orientation of the applied forces and/or moments. Panel stiffness is also dependent on the design of the panel. In contrast, isotropic materials (for example, aluminum or steel), in standard wrought forms, typically have the same stiffness regardless of the directional orientation of the applied forces and/or moments.
Experimental results
The main purpose of this paper is to analyze the bending behavior up to fracture of the wood-composite samples. By using composite materials (CFRP) as reinforcement, a growth in flexural and shear strength and the confinement of the elements (increased wood resistance) are expected.
The type of reinforcement used on the beams is the carbon fiber reinforced polymer sheet SikaWrap 230C, with an elastic modulus E = 230 000 N/mm² and a tensile strength of 4100 N/mm², and carbon fiber reinforced polymer plates Sika CarboDur S 512, with an elastic modulus E = 165 000 N/mm², together with an epoxy resin, Sikadur 30, for bonding all the elements. The wood part of all 11 beams was made of dry beech with dimensions of 25 by 50 by 500 mm [2]. The beams were reinforced using one carbon fiber plate of thickness 1,5 mm, width 25 mm and length 500 mm. The finished dimension of one of the beams is 25 by 51,5 by 500 mm, because the beam is bonded together with one carbon fiber plate Sika CarboDur S 512 and epoxy resin Sikadur 30 [3,4]. Structural epoxy resins remain the primary choice of adhesive to form the bond to fiber-reinforced plastics and are the generally accepted adhesives in bonded CFRP-wood connections [5]. The advantages of using epoxy resin in comparison with common wood-laminating adhesives are their gap-filling qualities and the low clamping pressures that are required.
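The stiffening effect of a bonded plate can be estimated with the transformed-section method, scaling the CFRP area by the modular ratio n = E_c/E_w. A minimal sketch using the plate modulus and beam dimensions quoted above; the beech modulus of 12 000 N/mm² is an assumed illustrative value (not given in the paper), and full composite action is assumed:

```python
def ei_transformed(b, h_w, t_c, E_w, E_c):
    """Flexural stiffness EI of a wood beam (b x h_w) with a CFRP plate
    (b x t_c) bonded to its tension face, by the transformed-section method.
    Inputs in mm and N/mm^2 give EI in N*mm^2."""
    n = E_c / E_w                       # modular ratio
    A_w, A_c = b * h_w, n * b * t_c     # areas (CFRP scaled by n)
    # centroids measured from the bottom of the CFRP plate
    y_w = t_c + h_w / 2.0
    y_c = t_c / 2.0
    y_bar = (A_w * y_w + A_c * y_c) / (A_w + A_c)   # neutral axis position
    I = ((b * h_w**3) / 12.0 + A_w * (y_w - y_bar)**2
         + (n * b * t_c**3) / 12.0 + A_c * (y_c - y_bar)**2)
    return E_w * I

EI_plain = ei_transformed(25.0, 50.0, 0.0, 12000.0, 165000.0)
EI_cfrp = ei_transformed(25.0, 50.0, 1.5, 12000.0, 165000.0)
print(EI_cfrp / EI_plain)  # stiffness gain predicted under full composite action
```

With these assumptions the model predicts a substantially stiffer section than the bare beam; the measured increases (tens of percent) are smaller, as expected when composite action is imperfect.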
The composite material used to strengthen the wood samples consists of carbon fiber sheet, carbon fiber plates and an epoxy matrix, supplied by Building Velmix Ltd., from Sika S.A. Romania.
To achieve the objective proposed in this paper, a dedicated computer program, Presa.txt, was used for the experimental determination of the bending strength of the samples. This was done with Spider 8 data acquisition equipment connected to the fixtures and fittings of the samples tested on the universal testing machine. The equipment and instrumentation used in this case are: a universal machine for mechanical tests [6], the Spider 8 data acquisition system, a 12-bit resolution WA300 linear inductive displacement transducer, an S9 50 kN force transducer, NEXUS 2692-A-014 signal conditioning, a type 4391 piezoelectric accelerometer, and an IBM ThinkPad R51 notebook. The parameters recorded during the bending tests are: F (kN) - compressive force of the hydraulic press, Ft (kN) - transverse compressive force, CRS (mm) - piston travel, Acc (m/s²) - acceleration of beam vibration (break sensor). The beam rests against the head and is loaded transversely to fracture; the maximum transverse force and displacement are recorded (the axial tensioning force is no longer measured). The results for the un-reinforced beams are reported solely for the purpose of quantitatively evaluating the effectiveness of the interventions through a comparison with the results for the strengthened beams. The main purpose is to analyze the bending behavior up to fracture of the wood-composite samples, as shown in figure 1 below. The samples tested were not subject to lateral instability during loading. The total load was applied at a single point equidistant from the reactions (at mid-span). We used the bending device of the universal testing machine, which has a distance between the rollers of l = 460 mm.
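From the recorded breaking force, the bending strength follows from the standard three-point-bending formula σ = 3FL/(2bh²). A minimal sketch using the 460 mm roller span quoted above; the 10 kN breaking load is a hypothetical value, and the formula assumes the beam is loaded about its 50 mm (strong) axis:

```python
def flexural_strength(F, L, b, h):
    """Modulus of rupture in three-point bending: sigma = 3*F*L / (2*b*h^2).
    F in N, L (span), b (width) and h (depth) in mm give sigma in N/mm^2."""
    return 3.0 * F * L / (2.0 * b * h**2)

# Hypothetical breaking load of 10 kN over the 460 mm span used in the tests:
sigma = flexural_strength(F=10_000.0, L=460.0, b=25.0, h=50.0)
print(sigma)  # N/mm^2
```

Applying this to the recorded forces for the un-reinforced and reinforced beams gives directly comparable strength values.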
Figure 2 shows the fracture mode of the beams reinforced with one and two CFRP plates. The graphs for these reinforcements are presented in figure 3. The graphs show that for the un-reinforced beam cracks appear at a low displacement, 6-7 mm (a), instead of 10 mm (b) for the beam reinforced with one carbon fiber plate and one lower wood slide, and 13-14 mm for the beam reinforced with two carbon fiber plates and two upper and lower wood slides (c). This means that the CFRP improves the resistance of the beams and permits a growth in flexural properties. Studying the graphs, we can see that the reinforcement is most efficient for the beam with one carbon fiber plate and one lower wood slide and for the beam with two carbon fiber plates and two upper and lower wood slides. The force and displacement for these types of beams reach considerably higher values than for the un-reinforced beam [7,8]. The un-reinforced beam breaks very quickly at a small load. The other wood beams break at a higher load and show a larger displacement. The tested wood beams reinforced with carbon fiber sheets are shown in figure 4. The functional dependency graphs for the reinforced wood beams are represented in figure 5 below.
In the case of reinforcement with carbon fiber sheets, the graphs show that the behavior of the reinforced beams is totally different from that of the un-reinforced one. The reinforcement has also changed the mode of failure of the tested beams. The experimental load-displacement relationships show that this kind of reinforcement is not as resistant as the carbon fiber plate reinforcement. Observations of the experimental load-displacement relationships show that the flexural strength increased and the mid-span vertical displacement decreased.
Figure 5. The functional dependency graphs [2]: a) for the un-reinforced wood beam, b) for the reinforced beam with two carbon fiber sheets, c) for the reinforced beam with three carbon fiber sheets.
The experimental results allow the following conclusions to be drawn: the wood beams must be secured to the composite plates in the mechanical device to prove the effectiveness of the solution, so the ends of the wood beams were mechanically linked; the initial tension force decreases as the beam is loaded, due to local subsidence of the wood (in the tensioning device); when the mechanical system worked correctly, the lift of the beams increased up to 33 kN, meaning 220% higher than the un-reinforced beam; the first cracks in the wood beams appeared at loads at least two times higher than in the un-reinforced beam, thanks to the quality of the wood (dry beech, carefully processed and without stress concentrators in its mass); the elastic lift of the reinforced beams is significantly influenced by pre-tensioning, most samples having a maximum flexural displacement of 8-13 mm, an improvement over the displacement of the un-reinforced beam. The maximum load force and the maximum displacement may serve, in this case, as experimental parameters to quantify beam strength quality. It should be mentioned that the results are easily interpreted in the context of a reference (un-reinforced) beam.
By using composite materials in constructions, a growth in flexural and shear strength and the confinement of the elements (increased concrete and wood resistance) are expected. The reinforcement technology with composite materials offers many advantages over conventional methods, because composite materials show very high bending resistance, greater than steel, and increased resistance and ductility of the construction without changing its geometry or stiffness. The experiments have shown the sustainability of the method for wood constructions reinforced with composite materials. The use of composites can be applied as a strengthening technique without necessitating the removal of the overhanging portion of the structure. The technique proved easy and fast to execute, even on in situ parts. In particular, it demonstrated to be very promising in many cases of reinforcement of old, historical structural wood parts. The CFRP pre-tensioning is easy to implement and has low costs. The effectiveness of composite reinforcements is still modest, requiring further and deeper studies and new methods and design alternatives for the tested samples.
Conclusions
The following conclusions can be highlighted: wood is a material with a certain degree of heterogeneity, which makes its mechanical properties vary over a wide range, so it is especially necessary to improve its resistance with composite reinforcement.
As a rigid material with good strength and relatively low cost, we use a composite. Several un-reinforced and reinforced wood beams were tested in order to find their flexural capacity. The CFRP materials were conditioned in an environment of 65±5% relative humidity and a temperature of 20±2°C, as this is the service environment in which CFRP-reinforced beams are expected to be used. The results indicate that the behavior of the reinforced beams is totally different from that of the un-reinforced one. The reinforcement has changed the mode of failure from brittle to ductile and has increased the load-carrying capacity of the beams. Observations of the experimental load-displacement relationships show that the flexural strength increased and the mid-span vertical displacement decreased for wood beams reinforced with CFRP composite plates, compared to those without CFRP plates. If a composite with greater strength than wood is added in the highly stressed area of the beam, the wood beam can withstand a bigger force because its maximum strength is bigger too. The presence of the carbon plates and sheets causes an interesting increase in stiffness, varying from 20,2% to 29,6% compared to that of the same wood beams before reinforcement.
A CO J=3-2 Survey of the Galactic Center
We have surveyed the central molecular zone (CMZ) of our Galaxy in the CO J=3-2 line with the Atacama Submillimeter-wave Telescope Experiment (ASTE). Molecular gas in the Galactic center shows a high J=3-2/J=1-0 intensity ratio (~ 0.9), while gas in the spiral arms of the Galactic disk shows a lower ratio (~ 0.5). The high-velocity compact cloud CO 0.02-0.02 and the hyperenergetic shell CO 1.27+0.01, as well as gas in the Sgr A region, exhibit J=3-2/J=1-0 intensity ratios exceeding 1.5. We also found a number of small spots of high-ratio gas. Some of these high-ratio spots have large velocity widths, and some seem to be associated with nonthermal threads or filaments. These could be spots of hot molecular gas shocked by unidentified supernovae, which may be abundant in the CMZ.
INTRODUCTION
The Central Molecular Zone (CMZ) of the Galaxy, a region of radius ∼ 200 pc, contains a large amount of dense molecular gas (n ≥ 10^4 cm^-3; e.g., Paglione et al. 1998). It has also been observed that the gas temperature is uniformly high in the CMZ (T_k = 30-60 K; Morris et al. 1983). Molecular gas in the CMZ shows highly complex distribution and kinematics as well as a remarkable variety of peculiar features (e.g., Bally et al. 1987; Burton & Liszt 1992). Its small-scale structure is characterized by arcs/shells and filaments (Oka et al. 1998). A population of high-velocity compact clouds (HVCCs) is also unique to the CMZ (Oka et al. 1998, 2001b). The CMZ also contains a large amount of hot (∼ 10^8 K) plasma (Koyama et al. 1989), whose origin has been suggested to be about 10^3 supernova explosions in the last 10^4 years (Yamauchi et al. 1990). The detection of the ^26Al γ-ray line supports the occurrence of such energetic explosions (von Ballmoos, Diehl, & Schönfelder 1987). The widespread SiO emission there has been understood as a result of large-scale shocks such as cloud-cloud collisions and/or fossil superbubbles (Martín-Pintado et al. 1997; Hüttemeister et al. 1998). These suggest that the boisterous molecular gas kinematics in the CMZ may be a result of the violent release of kinetic energy by a number of supernova explosions.
To assess the effect of supernova explosions on the kinematics and physical conditions of the CMZ, it is crucial to pick out the supernova/molecular-cloud interaction zones completely. The rotational transition lines of carbon monoxide (CO) at submillimeter wavelengths are well-established, excellent tracers of supernova/molecular-cloud interaction (White 1994; Arikawa et al. 1999; Moriguchi et al. 2005). This paper presents a brief report of the ongoing large-scale, high-resolution CO J=3-2 survey of the CMZ. The full presentation of the data with detailed analyses will be published in a forthcoming paper. Detailed analyses of physical conditions and the results of follow-up observations for several features of great interest are presented in separate papers (Nagai et al. 2006; Tanaka et al. 2006).
OBSERVATIONS
The CO J=3-2 (345.79599 GHz) mapping observations were carried out with the Japanese 10 m submillimeter-wave telescope at Pampa la Bola, Chile (ASTE; Kohno et al. 2004). The observations were done on 2005 July 19-25 in good, stable weather conditions. The telescope has a beam efficiency of 0.6 and a FWHM beam size of 22′′ at 345 GHz. The pointing of the telescope was checked and corrected every two hours by observing Jupiter and Uranus, and its accuracy was maintained within 2′′ (rms).
The telescope is equipped with a 345 GHz SIS receiver, SC345. The receiver had an IF frequency of 6.0 GHz, and the local oscillator was centered at 339 GHz. Calibration of the antenna temperature was accomplished by chopping between an ambient-temperature load and the sky. Since the mixer in SC345 works in a double-sideband (DSB) mode, we scaled the DSB antenna temperature (T_A*) by multiplying by (2/0.6) = 3.33 to get the SSB main-beam temperature (T_MB). Typical system noise temperatures during the observations were 150-300 K (DSB) including atmospheric loss. The absolute scale and reproducibility of intensities were checked by monitoring NGC 6334I several times a day. We had T_MB(ASTE) = 51.2 ± 3.0 K at the peak, which is consistent with the value in the previous literature [T_MB(CSO) = 49 K; Kraemer & Jackson 1999]. To minimize intensity fluctuations caused by pointing errors, we employed the absorption dip at V_LSR = −6.1 km s^-1 for the reproducibility check. The intensity scale was found to be stable within 3.8% (1σ) during the observations. All spectra were obtained with an XF-type autocorrelation spectrometer. The spectrometer was operated in the 512 MHz bandwidth (1024 channel) mode, which corresponds to a 445 km s^-1 velocity coverage and a 0.45 km s^-1 velocity resolution at 345 GHz.
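The calibration numbers quoted above — the DSB-to-SSB main-beam scaling of 2/0.6 = 3.33 and the spectrometer's velocity coverage and resolution — follow from simple relations. A small sketch (function names are illustrative, not part of any ASTE software):

```python
C_KMS = 299_792.458  # speed of light [km/s]

def dsb_to_mainbeam(t_a_dsb, beam_eff=0.6, sideband_factor=2.0):
    """Scale a DSB antenna temperature T_A* to the SSB main-beam scale:
    T_MB = (sideband_factor / beam_eff) * T_A*."""
    return (sideband_factor / beam_eff) * t_a_dsb

def spectrometer_velocity(bandwidth_hz, n_chan, line_hz):
    """Velocity coverage and per-channel resolution, via v = c * dnu / nu."""
    coverage = C_KMS * bandwidth_hz / line_hz
    return coverage, coverage / n_chan

print(round(dsb_to_mainbeam(1.0), 2))  # the 3.33 scaling factor
cov, res = spectrometer_velocity(512e6, 1024, 345.79599e9)
print(cov, res)  # close to the quoted 445 km/s coverage and 0.45 km/s resolution
```

The same relations hold for any line frequency, which is why the velocity coverage and resolution are quoted specifically at 345 GHz.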
We mapped a ∆l × ∆b = 2° × 0.5° area in the CO J=3-2 line, collecting 6990 spectra in total. A 34′′ grid spacing was chosen in accordance with that of the NRO 45m CO J=1-0 survey. The observations were performed by position-switching against a clean reference position, (l, b) = (0°, −2°). The on-source integration time was 10 seconds for each position, and the rms noise was 0.3 K in T_A*. The data were reduced with the NEWSTAR reduction package. We subtracted the baselines of the spectra by fitting straight lines or, if necessary, the lowest-order polynomials that produce straight baselines in emission-free velocity ranges. About 500 spectra required third-order polynomial fits.
The CO J=3-2 data are from the ASTE survey, while both J=1-0 data sets are from the NRO 45m survey (Oka et al. 1998).
Generally, each CO J=3-2 profile in the CMZ is similar in shape and intensity to the CO J=1-0 profile, suggesting these lines are opaque and thermalized. In the 13 CO J=1-0 profiles, the total velocity extent is somewhat smaller than the 12 CO lines, and the expanding molecular ring (EMR; Kaifu 1972;Scoville 1972) is less prominent. Rather less intense 13 CO J=1-0 emission from Sgr A and the L= 1.3 • complex indicates moderate CO opacities. This is not the case for the opaque Sgr B2 cloud where the 13 CO/ 12 CO intensity ratio reaches 0.5. Toward Sgr A, the J=3-2 emission is more intense than CO J=1-0 especially in the negative velocity side. Such emission with high J=3-2/J=1-0 intensity ratio may come from highly excited gas with rather low column density. . The CO J=3-2 data set have been smoothed to 60 ′′ spatial resolution and summed up to each +2 km s −1 bin. The CO J=3-2 emission in the Galactic center extends over the current spatial coverage of the ASTE survey. Its distribution closely follows characteristics of the CMZ delineated by the CO J=1-0 surveys (Bally et al. 1987;Burton & Liszt 1992;Oka et al. 1998). Four major cloud complexes are seen also in the CO J=3-2 map, from left to right, the L = 1.3 • complex, the Sgr B complex near l = 0.7 • , the Sgr A complex near l = 0.0 • , and the Sgr C complex near l = 0.5 • . The ASTE survey currently does not cover the full spatial extents of the L = 1.3 • and Sgr C complexes. At the center of the L = 1.3 • complex, we see a clear hole of emission, which is the hyperenergetic shell CO 1.27+0.01 (Oka et al. 2001b). A wavy filament connects the Sgr B complex to the Sgr A complex. This appears more prominent in CO J=3-2 than in CO J=1-0.
Analyses
We often refer to ratios between molecular line intensities in diagnosing the physical conditions or chemical composition of interstellar molecular gas. Here we use the CO J=3-2/J=1-0 intensity ratio (R_3-2/1-0) to extract highly excited gas from the CMZ, since the ratio is sensitive to the temperature and density of molecular gas. Figure 3 shows the frequency distribution of the CO J=3-2/J=1-0 intensity ratio weighted by the CO J=1-0 intensity. The CO J=1-0 data are from the NRO 45m survey (Oka et al. 1998). Both data sets have been smoothed to 60′′ spatial resolution and summed into 2 km s^-1 bins. Data with 1σ detections in both lines have been used for the analysis. The contributions of four foreground spiral arms in the Galactic disk are shown by gray bars. The spiral arms were defined by straight lines in the l-V plane, having velocity widths of 2 km s^-1 (+20 km s^-1 arm) or 4 km s^-1 (local, 4.5 kpc, 3 kpc arms). The R_3-2/1-0 distribution has a prominent peak at 0.85, which is the typical value in the CMZ, and a shoulder at ∼ 0.5. The low-ratio shoulder is mostly attributable to the foreground spiral arms. The bulk of molecular gas in the CMZ has higher R_3-2/1-0 than gas in the spiral arms of the Galactic disk. A ratio close to unity indicates that the lowest three rotational levels of CO are thermalized and the transition lines are moderately opaque. Generally, R_3-2/1-0 can be a measure of temperature and density if the CO column density per unit velocity width is not very high. High CO J=3-2/J=1-0 ratios have been found in UV-irradiated cloud surfaces near early-type stars (e.g., White et al. 1999) and in shocked molecular gas adjacent to supernova remnants (e.g., Dubner et al. 2004). Here we extract highly excited gas from the CO data sets by requiring R_3-2/1-0 ≥ 1.5. One-zone LVG calculations indicate that R_3-2/1-0 ≥ 1.5 corresponds to n(H2) ≥ 10^3.6 cm^-3 and T_k ≥ 48 K when N_CO/dV = 10^17 cm^-2 (km s^-1)^-1 (Fig. 4).
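The selection described here — keep only data detected in both lines, then threshold the J=3-2/J=1-0 ratio at 1.5 — can be sketched as a simple mask over matched data points. A minimal illustration (the toy brightness values and noise levels are hypothetical, not survey data):

```python
def high_ratio_mask(t_32, t_10, sigma_32, sigma_10, snr=1.0, r_min=1.5):
    """True where both lines are detected above snr*sigma and the
    J=3-2/J=1-0 intensity ratio is at least r_min.
    Written as t_32 >= r_min * t_10 to avoid dividing by noise voxels."""
    return [
        (a >= snr * sigma_32) and (b >= snr * sigma_10) and (a >= r_min * b)
        for a, b in zip(t_32, t_10)
    ]

t_32 = [2.0, 0.2, 4.5, 1.0]  # hypothetical J=3-2 brightness [K]
t_10 = [2.0, 1.0, 2.0, 0.1]  # hypothetical J=1-0 brightness [K]
mask = high_ratio_mask(t_32, t_10, sigma_32=0.3, sigma_10=0.3)
print(mask)  # only the voxel with ratio 4.5/2.0 = 2.25 survives
```

Applied voxel by voxel to the smoothed, velocity-binned cubes, this yields the high-ratio gas maps discussed in the following sections.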
Figure 5 shows the spatial and longitude-velocity distribution of high-R_3-2/1-0 gas. Data severely contaminated by the foreground disk gas, (−55 + 10l) ≤ V_LSR ≤ +15 km s^-1, have been excluded from the velocity integration in making Fig. 5a. The spatial distribution of high-R_3-2/1-0 gas in the CMZ shows a number of clumps as well as some diffuse components. The most prominent one is a clump at Sgr A, which consists of several high-velocity features on both sides of V_LSR = 0 km s^-1. Another prominent clump is found in the L=1.3° complex, appearing in the l-V plane with an extremely broad velocity width. We also see a number of small spots of high-R_3-2/1-0 gas, especially in the inner part of the CMZ (|l| ≤ 0.5°). Some of the high-velocity compact clouds (HVCCs) identified in the J=1-0 data exhibit high R_3-2/1-0, although not all of them do. Two diffuse high-R_3-2/1-0 components are found in the 0.2° × 0.2° region centered at l ≃ 0.9° (the L=0.9° Anomaly) and in the periphery of the Sgr C complex. In the velocity range −60 ≤ V_LSR ≤ +15 km s^-1, we see several narrow-velocity-width features elongated in longitude. We discuss these high-R_3-2/1-0 features briefly in the following sections.
Sgr A Region
Many authors have discussed the relationship between the radio continuum source Sgr A and the molecular features at scales of less than a few arcminutes (e.g., Brown & Liszt 1984; Genzel & Townes 1987). Our CO J=3-2 data at 60′′ resolution show a high-R_3-2/1-0 clump with a size of 6′ × 4′ at Sgr A and a small red-shifted clump to the northeast (Fig. 6a). The small red-shifted clump is the well-known HVCC CO 0.02-0.02. The Sgr A clump splits into several components in the l-V plane (Fig. 6b). We see a pair of high-velocity emission features (named CND+ and CND−). This feature is much larger than the circumnuclear disk (CND; diameter ∼ 100′′; e.g., Christopher et al. 2005), the well-known molecular feature surrounding Sgr A*, and its middle point is displaced by about 1′ from Sgr A*. We suggest that CND+− may be a rotating disk-like structure of highly excited gas which includes the 'classical' CND.
The component A2 may be a shocked gas clump generated by the interaction with the Sgr A East SNR (Tsuboi et al. 2006). The components A1 and A3 reside in the same direction as A2, suggesting that they could be related to Sgr A East as well. The component A4 appears at the high-velocity end of the bridge which connects M-0.02-0.07 and M-0.13-0.08, the well-known dense molecular clouds near Sgr A. Spatially it follows the Galactic northwestern periphery of M-0.02-0.07. Its relatively narrow velocity width (≲ 15 km s^-1) suggests a non-shock origin. We speculate that A4 could be a photon-dominated region (PDR) formed by intense UV radiation from the central cluster.
L=1.3 • Complex
The L=1.3 • complex is a large molecular feature with a prominent elongation toward positive latitude (Oka et al. 1998). Two expanding shells have been identified at the center of the L = 1.3 • complex (CO 1.27+0.01; Oka et al. 2001b). The CO J=3-2 data demonstrate the entity of these expanding shells (Fig. 7). High R 3-2/1-0 gas is abundant in the Galactic northwestern rim of the 'minor' shell. The simple kinematics of high R 3-2/1-0 gas associated with the 'major' shell can be accounted for by a single explosive event, while the 'minor' shell has complex kinematics with a steep velocity gradient in its Galactic northern part. High R 3-2/1-0 gas at velocities from +130 to +160 km s −1 shows a striking symmetry, the origin of which is uncertain, however. The association of high R 3-2/1-0 gas with these expanding shells ensures that they were generated by a series of supernova explosions, suggesting that a microburst of star formation has occurred there in the recent past.

Fig. 7. (a) Integrated intensities for three velocity ranges are presented in different colors; +100 ≤ V LSR ≤ +130 km s −1 (blue), +130 ≤ V LSR ≤ +160 km s −1 (green), and +160 ≤ V LSR ≤ +200 km s −1 (red). (b) A map of CO J=3-2 emission integrated over velocities from 0 to 200 km s −1 . Contour interval is 100 K km s −1 . Magenta ellipses denote two expanding shells identified in CO J=1-0 data (Oka et al. 2001b).
The first class consists of several components following the lower-velocity periphery of the prominent emission feature at V LSR ∼ +80 km s −1 , which bridges the L = 1.3 • complex and Sofue's Arm II (Sofue 1995). The origin of this class of gas remains unknown. The second class consists of three HVCCs with high R 3-2/1-0 in the rim of the Sgr B complex: CO 0.86-0.23, CO 0.88-0.08, and CO 0.88+0.00 (see §4.2.4). Their large velocity widths and high R 3-2/1-0 suggest that they have been accelerated by supernova blast waves. Indeed, CO 0.88+0.00 seems to be associated with the TeV γ-ray bright SNR, G0.9+0.1 (Aharonian et al. 2005). The third class of gas is also adjacent to the radio shell SNR 0.9+0.1. The spatial extent of the blue-shifted gas seems to be larger than that of the SNR radio shell. Although interaction with G0.9+0.1 or the X-ray pulsar inside it is a likely origin of the blue-shifted high R 3-2/1-0 gas, the mode of interaction remains unclear.

Fig. 8. (a) A map of CO J=3-2 emission with R 3-2/1-0 ≥ 1.5 of the L = 0.9 • Anomaly. Integrated intensities for three velocity ranges are presented in different colors; −150 ≤ V LSR ≤ −50 km s −1 (blue), +20 ≤ V LSR ≤ +60 km s −1 (green), and +60 ≤ V LSR ≤ +100 km s −1 (red). Gray contours show a map of CO J=3-2 emission integrated over velocities −150 ≤ V LSR ≤ +150 km s −1 . A magenta arc denotes the radio shell of SNR 0.9+0.1 (LaRosa et al. 2000). (b) Longitude-velocity map of CO J=3-2 emission for data with R 3-2/1-0 ≥ 1.5 (gray). Latitudinal integration range was from b = −0.23 • to +0.01 • . Green contours show an l-V map of CO J=3-2 emission integrated over the same latitudes.
High R 3-2/1-0 Spots
We found a number of high R 3-2/1-0 spots (Fig. 9). Here we present a list of high R 3-2/1-0 spots (Table 1) for further investigation. Most of these spots have compact appearances and prefer the high-velocity ends of giant molecular clouds within the CMZ. Many high R 3-2/1-0 spots have compact entities with large velocity widths, exhibiting signs of shocked gas. Two of them, CO 0.07+0.17 and CO -0.04+0.05, are associated with nonthermal 'threads'. CO 0.40-0.07 also overlaps with a faint radio filament. Four high-ratio spots seem to be relevant to the bundle of nonthermal filaments of the Galactic Center Radio Arc. These facts strongly suggest that supernova/molecular-cloud interaction plays an important role in accelerating electrons and forming the nonthermal threads and filaments that are unique and abundant in the CMZ. It has been reported that shocked molecular gas is associated with the nonthermal filament 'Snake' at its intersection with the SNR G359.1-0.5 (Yusef-Zadeh, Uchida, & Roberts 1995; Lazendic et al. 2002). The association of shocked gas with nonthermal filaments favors the hypothesis that localized magnetic tubes with milligauss fields are illuminated by relativistic electrons at these filaments.

Fig. 9. Spatial distribution of high-ratio (R 3-2/1-0 ≥ 1.5) spots superposed on the VLA 90 cm image (LaRosa et al. 2000). Red circles indicate high-velocity compact clouds (HVCCs) or high-velocity wings with high ratio. Triangles are high-ratio spots at the velocity ends of clouds. Crosses are those in cloud edges. The 'x' denotes a high-ratio cloud in the 20 km s −1 arm.
Absorption by Foreground Gas
We see several narrow-velocity-width features in |l| ≤ 0.6 • , −60 ≤ V LSR ≤ +15 km s −1 . They are elongated in longitude, being slightly displaced in velocity from the well-known foreground arms. It is not likely that they are highly excited gas in the interarm region, where low-density gas dominates.
Here we made a model with a warm opaque cloud veiled by a layer of cool absorber. Parameters were chosen as n(H 2 ) = 10 4 cm −3 , T k = 50 K, N CO /dV = 10 18 cm −2 (km s −1 ) −1 for the warm cloud, and n(H 2 ) = 10 2 cm −3 , T k = 10 K for the cool absorber. Figure 10 shows the results of the LVG calculations: CO line intensities as functions of N CO /dV of the cool absorber. Since the J=2 level is subthermally excited in the cool absorber, the J=3-2 line is hardly absorbed unless N CO /dV ≥ 10 16 cm −2 (km s −1 ) −1 , where radiative excitation by the photon-trapping process dominates. This effect raises the J=3-2/J=1-0 intensity ratio without the participation of highly excited, less opaque gas.

Table notes: The cloud interacting with the nonthermal filaments of the Radio Arc (Oka et al. 2001a). 3 The cavity associated with the nonthermal filaments of the Radio Arc (Oka et al. 2001a).
SUMMARY
This paper presents a brief report of the ongoing large-scale CO J=3-2 survey of the Galactic center with the Atacama Submillimeter-wave Telescope Experiment (ASTE). We have mapped a ∆l × ∆b = 2 • × 0.5 • region of the CMZ.
The CO J=3-2 distribution closely follows the characteristics of the CMZ delineated by the CO J=1-0 surveys, while it shows several faint high-velocity features previously undetected. The analysis of the J=3-2/J=1-0 intensity ratio shows that the bulk of molecular gas in the CMZ is thermalized and moderately opaque. We found clumps of high R 3-2/1-0 gas at the Sgr A region, the high-velocity compact cloud CO 0.02-0.02, and the hyperenergetic shell CO 1.27+0.01. We also found a number of small spots of high R 3-2/1-0 gas in the inner part of the CMZ. Some of the high R 3-2/1-0 spots have entities with large velocity widths, and some seem to be associated with nonthermal threads or filaments. Two diffuse high R 3-2/1-0 components were found in the 0.2 • ×0.2 • region centered at l ≃ 0.9 • (L = 0.9 • Anomaly) and in the periphery of the Sgr C complex.
Most of these high R 3-2/1-0 features are likely shocked gas possibly generated by the interaction with supernova blast waves, while some could be UV-irradiated surfaces of molecular clouds. Continuation of the CO J=3-2 survey, as well as follow-up studies of high R 3-2/1-0 features detected, will reveal the ubiquity and origin of shocked molecular gas in the CMZ.
A longitudinal pilot study examining the influence of the orthodontic system chosen in adult patients (brackets versus aligners) on oral health-related quality of life and anxiety
Background In recent years, the demand for orthodontic treatment with aligners has increased, led by patient need, as aligners typically provide improved aesthetics and less physical discomfort. In deciding with the patient on an appropriate orthodontic system, it is important to take into account the potential discomfort and the perceptions that patients have in relation to their treatment. The objective of this study was to analyze the influence of brackets or aligners on oral health-related quality of life (OHRQoL) and anxiety levels in a sample of adult patients during the first month of treatment. Methods The pilot study was carried out at the Dental Clinic of the University of Salamanca between November 2023 and February 2024. Eighty adult patients who initiated orthodontic treatment were selected and divided into two groups: the brackets group (Victory®; 3M Unitek, California, USA) (n = 40) and the aligners group (Invisalign®; Align Technology, California, USA) (n = 40). OHRQoL was analyzed using the Oral Health Impact Profile-14 (OHIP-14) questionnaire, and anxiety was analyzed using the State–Trait Anxiety Inventory (STAI). The follow-up time was one month, with scores recorded at the beginning (T0) and one month after starting treatment (T1). Results The mean patient age was 33.70 (± 5.45) years old. The total sample (n = 80) consisted of 66.2% men and 33.8% women. In the brackets group, one month after starting treatment, the dimension with the highest impact was physical pain (5.62 ± 1.51); in the aligners group, the dimension of psychological disability had the highest score (4.22 ± 1.02). In the brackets group the total OHIP score was higher at one month (T1) (33.98 ± 6.81) than at the start of treatment (T0) (21.80 ± 3.34); this greater impact on OHRQoL one month after starting treatment was not observed in the aligners group (T1 = 27.33 ± 6.83; T0 = 27.33 ± 6.22).
The orthodontic system used did not influence participants’ anxiety (p > 0.05). Age and sex were not influential factors in either OHRQoL or anxiety. Conclusions The bracket system significantly influenced patients’ OHRQoL. In the sample studied, no influence of the orthodontic system (brackets versus aligners) on anxiety was observed.
Background
The demand for orthodontic treatments has increased because of the possibility of using aligners in treatment, principally due to patients' aesthetic demands [1]. This improvement in aesthetics has produced a concomitant increase in patients' oral health-related quality of life (OHRQoL) [2,3].
Aligners were developed as an aesthetic orthodontic alternative to the use of fixed brackets [1,4]. Aligners also facilitate significantly improved oral hygiene, less discomfort, and greater convenience for patients, compared to fixed brackets; therefore, they can reduce the adverse effects of orthodontic treatment compared to conventional fixed brackets [5,6].
In recent years, there has been increased interest in research on OHRQoL and the anxiety that patients may experience during their orthodontic treatment [7]. The presence of a malocclusion has been observed to negatively affect OHRQoL, especially at the beginning of orthodontic treatment. Patients typically request orthodontic treatment to improve their dental aesthetics, oral functionality, and psychosocial well-being [8,9].
The scientific literature has concluded that orthodontic treatment can improve or worsen patients' OHRQoL depending on which phase of treatment the patient is in. The impact of orthodontic treatment on OHRQoL decreases as treatment progresses. Evaluating OHRQoL may be an effective means of analyzing the results of orthodontic treatment in patients [8,10,11].
The pain that patients describe during orthodontic treatment can be influenced by different factors, including psychological traits such as their innate response to stress [12][13][14]. Different studies have reported that the pain described by patients also varies depending on the orthodontic system used. Treatment with brackets usually produces more pain compared to treatment with aligners [15,16]. The discomfort and pain described by patients during the early stages of orthodontic treatment, especially during the first month, have a negative influence on patients' OHRQoL and on their anxiety when facing treatment. This impact can have a negative repercussion on patients' compliance with indications and may even make them unwilling to continue orthodontic treatment [10,11,13].
There is limited scientific evidence analyzing the impact of orthodontic treatment on OHRQoL and the anxiety that patients describe during the early stages of treatment. The impact of orthodontic treatment on OHRQoL and anxiety levels during the first month of orthodontic treatment needs to be analyzed, as this first month in particular has a negative influence on patients' pain levels [2,7,8].
Therefore, the objective of this pilot study was to analyze the possible influence of the orthodontic system used (brackets compared to aligners) on OHRQoL and anxiety levels in adult patients during the first month of treatment. The experimental hypothesis of this study is that OHRQoL and anxiety levels differ between patients treated with brackets and aligners during the initial phase of orthodontic treatment.
Study design
This pilot study was approved by the Research Ethics Committee of the University of Salamanca (study reference number: 1074). This study followed the ethical principles established by the Declaration of Helsinki for research with humans, and the STROBE guidelines for conducting observational studies [17].
The patients were informed of the examination procedures and were guaranteed confidentiality of their collected information.Before recruitment, signed consent was obtained from each participant.
Interventions
OHRQoL and anxiety were analyzed in a consecutive sample of 80 patients who began orthodontic treatment at the Dental Clinic of the University of Salamanca. This sample was divided into two study groups: the brackets group (n = 40) and the aligners group (n = 40). No patient dropped out of the study while it was ongoing.
The participants in the brackets group were bonded with 0.022-inch slot MBT prescription stainless steel brackets (Victory®; 3M Unitek, California, USA) in both arches. In the first clinical appointment, the upper and lower brackets and the tubes of the first permanent molars were cemented. The archwire was 0.014-inch NiTi (Ormco, California, USA) at baseline. The type of engagement with the elastomeric ligature (Dentaurum GmbH & Co. KG, Ispringen, Germany) was identical for all of the patients.
In the group of patients with aligners, the Invisalign® system (Align Technology, California, USA) was used. Tooth movements were planned at the recommended rate using algorithms from the ClinCheck Pro program. In the first clinical appointment, the attachments were cemented and the aligners were delivered. The patients were instructed to change aligners every seven days.
Eligibility criteria for participants
The inclusion criteria were as follows: adult patients (> 18 years); patients with permanent dentition (except third molars); and a maximum Little's irregularity index of 6 mm.
The exclusion criteria were as follows: patient history of previous orthodontic treatment; orthodontic treatment with extractions; patients with craniofacial anomalies; patients with untreated caries; patients with untreated gingival and/or periodontal pathology; patients with symptoms of or diagnosed temporomandibular joint pathology; patients receiving treatment with anti-inflammatory drugs, analgesics, anxiolytics, and/or antidepressants; and pregnant patients.
Outcome measures
The impact of orthodontic treatment on patients' OHRQoL was analyzed using the Spanish version of the Oral Health Impact Profile-14 (OHIP-14) questionnaire [18].
The OHIP-14 questionnaire consists of 14 items that analyze the following seven domains of OHRQoL: functional limitation, physical pain, psychological discomfort, physical disability, psychological disability, social disability, and disability. The responses to this questionnaire were scored using a 5-point Likert scale (0 = never, 1 = almost never, 2 = occasionally, 3 = quite often, and 4 = very often) [19].
Anxiety was assessed with the State-Trait Anxiety Inventory (STAI). The STAI is a self-reported inventory that encompasses two independent scales measuring state anxiety (STAI-S) (how one feels at a given time) and trait anxiety (STAI-T) (how one usually feels) [20]. The Inventory is a 40-item Likert scale that evaluates separate dimensions of anxiety as a state (items 1-20) and anxiety as a trait (items 21-40). A score greater than 40 points is an indicator of a high degree of anxiety [20,21].
The OHIP-14 and STAI questionnaires were provided to all study participants and completed at baseline (T0) and one month after starting treatment (T1).
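As a concrete illustration of the scoring described above, the sketch below sums the 14 Likert responses into a total OHIP-14 score (range 0–56) and pairs consecutive items into the seven domains; the item-to-domain pairing and the function names are assumptions for illustration, not the authors' scripts, and the STAI helper simply encodes the >40 cut-off stated above.

```python
# Illustrative scoring sketch (not the study's actual analysis code). The grouping
# of two consecutive items per OHIP-14 domain is an assumption for illustration.
OHIP_DOMAINS = [
    "functional limitation", "physical pain", "psychological discomfort",
    "physical disability", "psychological disability", "social disability", "disability",
]

def score_ohip14(responses):
    """responses: 14 Likert values, 0 = never ... 4 = very often.
    Returns the total score (0-56) and a per-domain breakdown (0-8 each)."""
    assert len(responses) == 14 and all(0 <= r <= 4 for r in responses)
    domains = {name: responses[2 * i] + responses[2 * i + 1]
               for i, name in enumerate(OHIP_DOMAINS)}
    return sum(responses), domains

def high_anxiety(stai_score):
    """A STAI scale score greater than 40 points indicates a high degree of anxiety."""
    return stai_score > 40

total, by_domain = score_ohip14([2] * 14)
print(total)             # 28 (mid-range of the 0-56 scale)
print(high_anxiety(41))  # True
```

Higher totals indicate a greater negative impact on OHRQoL, which is how the T0/T1 comparisons in this study should be read.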
Statistical analysis
Data were analyzed with the SPSS version 28 program (SPSS Inc., Chicago, IL, USA). Qualitative variables were summarized with frequency tables and percentages and compared with the Chi-square test; quantitative variables were compared with Student's t-test. We selected a significance level of p < 0.05.
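For a paired design like this one (each patient measured at T0 and T1), the Student's t statistic compares the mean within-patient change to its standard error. The sketch below hand-rolls the paired t statistic on toy data; the numbers are invented for illustration and are not the study's data or SPSS output.

```python
import math
from statistics import mean, stdev

# Paired (dependent-samples) t statistic on toy T0/T1 scores; illustrative only.
# t = mean(d) / (sd(d) / sqrt(n)), compared against Student's t with n - 1
# degrees of freedom (stdev() is the sample standard deviation, divisor n - 1).
def paired_t(t0, t1):
    diffs = [b - a for a, b in zip(t0, t1)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Toy example: five patients whose scores rise by roughly 2 points at T1.
t0 = [20, 22, 21, 23, 24]
t1 = [22, 25, 23, 24, 27]
print(round(paired_t(t0, t1), 2))  # 5.88
```

A large |t| relative to the t distribution with n − 1 degrees of freedom corresponds to p < 0.05 at the significance level chosen here.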
Baseline data
The sample analyzed consisted of 80 patients divided into two study groups: 40 participants (30 men, 10 women) with a mean age of 32.15 years (± 5.79) in the brackets group, and 40 participants (23 men, 17 women) with a mean age of 35.25 years (± 4.67) in the aligners group. The descriptive statistics of the characteristics of the participants are shown in Table 1.
Oral health-related quality of life analysis
We analyzed the significance of the changes in the OHRQoL variables between the first month of treatment (T1) and the start (T0) in the study population (n = 80). In the total sample, an increase in the negative impact of orthodontic treatment on OHRQoL was observed one month after starting treatment (T1), compared to the beginning (T0). The differences between T1 and T0 in the variables of the OHIP-14 questionnaire were all statistically significant (p < 0.01). The dimension of physical pain (+ 1.38) showed the most significant variation, compared to the dimension of psychological discomfort, which showed the least variation (+ 0.56) (Table 2).
In the brackets group, statistically significant differences (p < 0.01) were observed in all dimensions and in the total OHIP-14 score when comparing the scores after a month with those at the beginning of treatment. In this group, in the first month of treatment, the brackets had a negative impact on patients' OHRQoL. The dimension that had the most significant impact one month after starting treatment was physical pain, compared to the disability dimension, which had the least impact (Table 3) (Fig. 1).
In the aligners group, in contrast to the patients with brackets, no significant differences were observed in any of the dimensions or in the total OHIP-14 score one month after starting treatment. A slight worsening in OHRQoL was observed in these patients, but this was not clinically significant. One month after starting orthodontic treatment, the dimension with the most significant impact was psychological disability, compared to the dimensions of functional limitation and disability, which were the items with the least impact on OHRQoL (Table 4) (Fig. 2).
Sex and age did not have a statistically significant influence on the impact of orthodontic treatment on OHRQoL in the sample analyzed.
Anxiety analysis
No significant influence of orthodontic treatment on anxiety levels was observed one month after starting treatment (T1). A slight decrease in anxiety was observed one month after starting treatment, but this was not clinically significant (Table 5).
The scores on the STAI inventory were compared between the two study groups. At the beginning of treatment, a significant difference in trait anxiety was observed between the two study groups; however, clinically, this small difference was not important (Table 6).
Patients' sex and age had no influence on their levels of anxiety in this study.
Discussion
The scientific literature includes only a few publications on the anxiety and OHRQoL of patients treated with aligners [11,22]. Therefore, this longitudinal pilot study aimed to analyze the impact of orthodontic treatment with aligners on OHRQoL and anxiety compared to the bracket system, during the first month of orthodontic treatment.
Fig. 2 Evolution of OHRQoL in the aligners group during the first month of treatment
There were no statistically significant differences between the participants of the brackets group and the aligners group in terms of their sex; additionally, whilst significant differences in age were observed between the groups, they were not clinically important. Therefore, the two study groups were considered homogeneous in terms of their sociodemographic characteristics and the severity of their malocclusion.
To analyze the impact of orthodontic treatment with brackets or aligners on anxiety, we used the STAI inventory. This tool has also been used in previous studies in orthodontics [12,13,27,28].
Orthodontic treatment can significantly improve patients' OHRQoL [19,23,24,29]. In this study, in the total sample, it was observed that at one month after starting treatment, the patients' OHRQoL was worse than at the beginning. However, in the aligners group, a trend appeared in the dimensions of functional limitation, psychological discomfort, social disability, and disability measured with the OHIP-14 questionnaire: one month after starting treatment with aligners, the patients showed a lower degree of impact in all of these dimensions compared to the start of treatment, but without statistically significant differences. One month after starting treatment, the brackets group described a higher total score on the OHIP-14 questionnaire (33.98 ± 6.81) than the aligners group (27.33 ± 6.83). These results can be related to the fact that, during their treatment with aligners, patients were able to remove the aligners during meals. In the group of patients with brackets, one month after starting treatment (T1), the dimension of the OHIP-14 questionnaire with the highest score was physical pain (5.62 ± 1.51). This can be explained by the discomfort that brackets can cause in patients (for example, the appearance of oral wounds). These results coincide with those reported by other authors [22,29,30].
Previous studies have also analyzed the impact of orthodontic treatment on OHRQoL, comparing patients treated with brackets and patients treated with aligners. These studies observed, as in this case, that treatment with aligners produced a lower impact on the OHRQoL of adult patients compared to the bracket system one month after starting treatment (brackets = 33.98 ± 6.81; aligners = 27.33 ± 6.83). Alfawal et al. [24] recorded that patients with aligners described a lower total score measured with the OHIP-14 questionnaire (14.14 ± 3.66) compared to patients with brackets (25.18 ± 4.15) one month after starting treatment. Similar results were obtained in the study of Jaber et al. [31], where one month after starting treatment, the group of patients with aligners described a lower total score on the OHIP-14 (5.82 ± 3.96) compared to patients with brackets (14.12 ± 9.07).
There are very few previous studies that have evaluated the possible influence of the orthodontic system used (brackets versus aligners) on anxiety levels. Gao et al. [22] evaluated the influence of the orthodontic system (brackets compared to aligners) on the anxiety-state levels of the STAI inventory. They concluded that therapy with aligners produced lower levels of anxiety in patients compared to the use of brackets (p < 0.05) from the start of treatment to the end of the 14th day [22]. In our study, significant differences in anxiety-trait levels were only observed at baseline (T0). One month after starting treatment (T1), no statistically significant differences were observed between the two treatment systems.
In the present study, it was observed that anxiety levels were lower one month after starting treatment compared to the beginning, in both study groups. These results are in agreement with those reported by other authors, such as Wang et al. [13], who analyzed anxiety levels in patients who started orthodontic treatment with brackets and also concluded that anxiety levels were lower at one month (STAI = 31.0) compared to the start of treatment (STAI = 38.0); however, these authors did not observe a substantial decrease in anxiety levels between the first month and the start, as in our study.
Sex has been reported to be an influential factor for anxiety in the general population [32]. In this study, we observed that sex and age did not influence OHRQoL or anxiety levels. These results are similar to those reported in previous studies [33][34][35].
The results described in this study can provide information for orthodontists to give to patients before starting their orthodontic treatment. Providing this information can increase patient cooperation and understanding during orthodontic treatment. Knowing in advance the discomfort of orthodontic treatment can help professionals psychologically prepare the patient before starting treatment.
Limitations
One of the limitations of this study is that the follow-up period was only one month, since the objective of this work was to analyze only the initial phases of orthodontic treatment. Another limitation of this study was that the type of malocclusion of the participating patients was not taken into account. The degree of severity of malocclusion may influence patients' OHRQoL and anxiety levels.
Randomized clinical studies are necessary, with a follow-up period appropriate to the duration of orthodontic treatment, to validate the effects of treatment with brackets and aligners on OHRQoL and anxiety levels. Analyzing participants from different demographic groups could provide interesting practical information. It would be interesting to carry out multicenter research to analyze the different factors that may influence anxiety and OHRQoL, as well as to analyze the effects of the use of analgesic drugs during treatment on OHRQoL and anxiety.
Conclusions
- Bracket treatment had a negative influence on patients' OHRQoL one month after starting treatment.
- The use of aligners did not influence OHRQoL one month after starting treatment.
- The orthodontic system used did not influence anxiety levels during the first month of treatment.
- In the sample analyzed, neither sex nor age influenced the OHRQoL or anxiety described by patients.
Fig. 1 Evolution of OHRQoL in the brackets group during the first month of treatment
Table 2
Comparison of OHRQoL between baseline and the follow-up period, for the total sample
Table 4
Comparison of OHRQoL between baseline and the follow-up period, in the aligners group
NS = Not significant (p > 0.05)
Table 5
Comparison of anxiety between baseline and the follow-up period, for the total sample
Table 6
Comparison of the changes in the STAI questionnaire's variables between treatment groups
NS = Not significant (p > 0.05); ** = Highly significant (p < 0.01)
Root Rot Diseases in Plants: A Review of Common Causal Agents and Management Strategies
Root rot is a serious threat to agriculture worldwide, continuously reducing yields and jeopardizing crop survival. Depending on the causal agent, host susceptibility, and the environmental conditions, entire fields can be lost to this disease. In this review work, we present the following root rot causal pathogens: bacteria, viruses, oomycetes, and fungi. Bacterial and viral root rots are less common, and few studies have reported these causal agents. Oomycetes and fungi have been found to be the most widespread root rot pathogens. Reported oomycetes are Aphanomyces spp., Pythium spp., and Phytophthora spp. Several fungi were reported to cause root rot, including Rhizoctonia spp., Fusarium spp., Phoma spp., and Thielaviopsis basicola, as well as the oomycete Aphanomyces euteiches. Because these diseases are highly influenced by the environment, have a broad range of hosts, show hidden underground symptoms, and because many root rot pathogens form overwintering structures, disease control and management are very complex and hard to achieve. Chemical treatment is generally only available as preventive seed treatment. Chemicals are also used as a way to kill the green bridge between crops. Overall, despite the complexity of this trait, resistance to root rot through enhanced varieties is the biggest promise for controlling such devastating diseases.
generalized damage. Two genera in this group, Pectobacterium and Dickeya, with a wide range of pathogenic species, cause wilt and rot diseases on monocot and dicot plant hosts worldwide. These bacterial pathogens secrete large amounts of plant-cell-wall-degrading pectinases and polygalacturonases, which digest plant cell walls and cause soft rot symptoms [15].
Within the Pectobacterium genus, an economically important pathogen is P. betavasculorum, which has been reported to cause bacterial vascular necrosis and root rot of sugar beet. Within the Dickeya genus, Erwinia chrysanthemi is an economically important pathogen because it causes bacterial stem and root rot of sweet potato [16]. This pathogen causes soft rot disease in a wide range of crops in mild climate regions and in greenhouse settings [17].
Bacterial stem and root rot is common in storage but may also affect plants in the field and in seedbeds. The first symptom is the partial wilting of the plant and eventually the entire plant may collapse and die. Water-soaked, sunken brown to black lesions are observed at the base of stems and on petioles. On fibrous roots, localized lesions are observed, but the entire root system can be affected, showing the characteristic black, watersoaked appearance. Dark streaking in the vascular tissue of the roots has also been reported [18]. Viral root rots are less common than bacterial root rots. Some studies have reported cassava brown streak virus as a causal agent of root necrosis. Two distinct virus species have been identified, cassava brown streak virus and Ugandan cassava brown streak virus. Both are members of the Potyviridae family and are transmitted by the whitefly vector Bemisia tabaci (Gennadius). Dissemination of the virus is through cuttings that are taken from infected parent material [19]. As detection methods evolve, information regarding the non-traditional causative agents of root rot is expected to increase.
Oomycetes
Oomycetes, also known as water molds, are a large group of terrestrial and aquatic eukaryotic organisms that superficially resemble fungi in mycelial growth and mode of nutrition. However, molecular studies and distinct morphological characteristics have placed them in the kingdom Chromalveolata, phylum Heterokontophyta [5]. The terrestrial oomycetes are primarily parasites of vascular plants, and include several important root pathogens such as Aphanomyces spp., Pythium spp., and Phytophthora spp.
Among the Aphanomyces spp., A. cochlioides creates a major constraint on the cultivation of sugar beet, causing damping-off and chronic root rot [13,20]. For instance, A. euteiches Drech. is responsible for one of the most damaging soil-borne root diseases in peas worldwide [26]. It can affect the plant at any developmental stage, causing rotting of the roots and epicotyls that results in stunted seedlings, yellow leaves and even dead plants [27]. The development of cultivars with tolerance or partial resistance to Aphanomyces root rot is generally considered to be one of the best options to reduce yield loss [25]. Resistance is limited in commercial cultivars [27]. Cultural practices against Aphanomyces root rot are highly dependent on environmental conditions, mainly because proliferation is through water-motile zoospores that thrive in poorly drained soils [28].
Pythium, with over two hundred described species, can cause a variety of diseases including root rot in numerous plant hosts [29]. At least ten Pythium spp. cause Pythium damping-off and root rot in various legumes and monocots. A rapid black rot of the entire primary root that can move up into the stem is typical of this pathogen. P. ultimum and P. irregulare have been reported as the most ubiquitous pathogens in this group [30].
Root rot caused by Pythium spp. is one of the most damaging diseases restricting production of common bean [31]. In corn, this disease was reported to be caused by P. graminicola in Japan. Tobacco seedlings in floating systems were also infected by P. diclinum [32]. Ornamentals under different irrigation regimes were reported to be infected by P. aphanidermatum and P. ultimum [33]. These same two species were shown to infest greenhouse cucumber production [34]. Other specialty crops such as parsnip and parsley have suffered yield losses of up to 80% and 100% in Australia, attributed to Pythium [35]. P. arrhenomanes is considered to be the most important cause of sugar-cane root rot [36]. Pythium root rot is also a relevant disease for wheat [37] and common bean.
Phytophthora spp. belong to the family Pythiaceae along with Pythium spp., and together they attack a wide range of woody ornamentals as well as annual crops. Symptoms include wilting, yellow or sparse foliage and branch dieback. Phytophthora spp. cause late blight of potato and tomato, foliar blights on peppers and cucurbits, and root or stem rots of many plant species. Taiwan cherry was reported to be infected by P. cambivora. Root rot of pea and fava bean in Southern Sweden was found to be caused by P. pisi sp. nov. [55]. Armillaria root rot is also a threat to apple, walnut and kiwi production [56].
Laminated root rot is the most damaging disease of young-growth Douglas-fir and other conifers in the Pacific Northwest region of the U.S. This disease is caused by the fungus Phellinus weirii, which survives for 50 years or more in roots after trees are harvested [57]. Phellinus sulphurascens is also a major naturally occurring pathogenic fungus. The disease spreads below ground at root contact regions and often occurs in combination with Armillaria root rot [6]. Laminated root rot has also been reported to be caused by Phellinidium qilianense in Qilian juniper [7].
Another important tree root disease is white root rot, caused by the ascomycete Rosellinia necatrix, which attacks a wide range of perennial plants [58]. White root rot is the major threat to apple in the Kashmir valley, where the moist conditions of the orchards and deficient irrigation systems create the right conditions for this disease [59]. This pathogen is also a serious problem for avocado production in the Mediterranean [60]. Symptoms include root and collar rot of trees leading to a decline in vigor. A distinct margin is usually visible between the infected and healthy bark, with a thin layer of white fungal growth found under the diseased bark. A white cottony mycelium and mycelial strands, either white or black, appear on and surrounding infected roots or inside the bark [61].
Rhizoctonia root rot is a frequent disease of many crops such as bean [62], apple [63], tobacco, blueberry [64], tomato [65], pea [66], and canola [67]. This pathogen generally attacks its hosts in the juvenile stages of development. Rhizoctonia root and crown rot, caused by R. solani, is the most widespread and damaging sugar beet disease in Nebraska [68]. Rhizoctonia root rot and bare-patch are diseases that limit the yield of direct-seeded cereals, especially wheat and barley [69]. R. bataticola is another serious threat mainly in cotton production [70].
Fusarium root rot is a common disease in several crops. Symptoms include round or irregular light brown lesions that progress to dark black lesions on below-ground roots and stems, stunting and death [71]. In legumes, the major root rot causing agents are F. solani and F. avenaceum [72]. Thus, production of bean [73], soybean [74], pea [75], lentil [76], and peanut [77] is highly compromised by this type of pathogen. F. solani can cause severe rot in sweet potato roots [78] and cassava [79]. Fusarium spp. also cause other rots such as Fusarium crown rot in cereals and Fusarium stalk (stem) rot in corn. These species differ from those responsible for disease in dicots and include F. graminearum, F. culmorum, F. avenaceum, F. verticillioides, and F. pseudograminearum. F. culmorum causes foot and root rot and Fusarium head blight on different small-grain cereals, particularly wheat and barley [80]. F. graminearum plays a role in crown and root diseases of wheat. F. chlamydosporum infects coleus and other ornamentals. F. oxysporum is more frequently associated with wilt, but it causes root rot on members of the Cactaceae family such as Schlumbergera truncata [81]. F. oxysporum also causes stem and root rot in melons [82].
Phoma root rot is caused by many species. P. betae is known to cause leaf spot and rotting of beets during storage. P. terrestris has been reported to cause pink root rot of onion [83]. Red root rot of corn is also attributed to P. terrestris [84]. Roots and basal stalk tissue infected with red root rot have a reddish pink discoloration that becomes a deeper red color as the disease progresses [85]. P. sclerotioides causes brown root rot of alfalfa and other perennial forage legumes [86].
Black root rot is primarily caused by Thielaviopsis basicola. It infects a wide range of hosts, causing root disease in over 200 plant species. Symptoms include stem rot and damping-off in some hosts in addition to rotten roots. Economically important hosts are tobacco [87], carrot [88], cotton [89], and soybean [90].
There are several other root rot pathogens that are less frequently reported. Some examples of such pathogens include Aspergillus spp., Alternaria spp., Curvularia spp., Rhizopus spp. and Penicillium spp., isolated from soil, root, stem and foliar samples of plants showing root rot symptoms [91]. Fungi Rigidoporus lignosus and Phellinus noxius are root rot pathogens reported in rubber trees. Dry root rot of chickpea caused by Macrophomina phaseolina is another significant fungal root disease that impacts chickpea production areas in India [92].
Overall, root rots are generally referred to as a complex; for example, black root rot of strawberry is attributed to Pythium, Fusarium, and Rhizoctonia pathogens [93]. Another example is the pea root rot complex, where the disease is caused by a single pathogen or a combination of pathogens, including Alternaria alternata, A. euteiches, F. oxysporum, F. solani, F. avenaceum, Mycosphaerella pinodes, Pythium spp., R. solani,
Agricultural Research & Technology: Open Access Journal
Sclerotinia sclerotiorum and Phytophthora spp. [94,95]. Other fungi that can be associated with pea root rots include T. basicola and Ascochyta pinodella. Phoma spp. can also be referred to as part of the pea root rot complex [96].
Management Strategies for Root Rot
Effective management of root rot can be achieved by adopting resistant and tolerant varieties. Cultural practices, chemical treatments and biological control agents are also extremely important. Cultural practices can directly or indirectly affect populations of soil borne pathogens and the severity of their resultant root diseases [97].
Draining wet soils, crop rotation, soil preparation by tillage, fertilization, and weed control before planting have been reported as tools to manage root rot diseases. Planting within the recommended seed rate to avoid overcrowding can also decrease disease pressure. Crop rotation can break the disease cycle and affect soil chemistry. Many species within the Brassicaceae family contain glucosinolates that, through hydrolysis, liberate products such as the volatile isothiocyanates, shown to suppress A. euteiches. In this sense, Brassicaceae seed meal applications can also be used for their fungicidal benefits.
Reducing the green bridge by killing weeds or volunteer plants that allow the fungus to survive between crops is another important cultural practice. This control can be achieved by using herbicides such as glyphosate to control Rhizoctonia root rot in wheat [98]. However, fungal pathogens can survive for many years in soil as mycelium and by producing sclerotia. This increases the complexity of management, potentially mandating the application of biocontrol agents. Chemical treatment after planting is not a common option to treat root rots due to the advanced state of the disease by the time above-ground damage is evident.
Other options for preventing root rot include the use of chemical fungicides, inoculation or biocontrol agents. For instance, the population density of R. solani was reduced significantly in the rhizosphere of pea seedlings obtained from seeds pretreated with Trichoderma and/or Topsin-M. Treatment with Apron XL+Maxim4 FS+Cruiser, with or without Rhizobium inoculant, increased emergence and reduced root rot severity and the number of Pythium colonies compared to the untreated control.
Integrated pest management is more broadly applicable, combining timely fungicide applications, crop rotation and attention to soil moisture levels with developments in biocontrol. Plant growth-promoting rhizobacteria such as Bacillus pumilus and Pseudomonas putida, along with antagonistic fungi such as Aspergillus awamori, Aspergillus niger and Trichoderma harzianum, have been used to control Fusarium root rot of pea [99]. Integration of soil application and seed treatment formulations of Trichoderma spp. for management of wet root rot of mungbean caused by R. solani has also been reported [100].
Strains of Pseudomonas spp. have been shown to reduce disease symptoms of both R. solani AG-8 and P. ultimum, and some strains also suppressed R. oryzae and P. irregulare [101]. Studies on cultures of P. cinnamomi exposed to different Ca2+ fertilizers in vitro showed significant inhibition of sporangial, chlamydospore and zoospore production at millimolar concentrations, while mycelial growth was mainly unaffected [102]. Trichoderma viride and Pseudomonas fluorescens were successfully used in combination as biocontrol agents for dry root rot of chickpeas.
Other alternative treatments have recently been considered, including soil type studies. Sandy soil covered with a consortium of Zea mays and Vigna unguiculata was effective in suppressing cassava root rot caused by F. solani. Vermicomposting for organic production was effective in reducing root rot in a complex disease of Coleus forskohlii involving Fusarium chlamydosporum and Ralstonia solanacearum, with 73% less wilt incidence and 82% less root rot severity.
One pathogen may also inhibit another pathogen. For example, in a study conducted with alfalfa, co-inoculation with both A. euteiches and Phytophthora medicaginis resulted in significantly reduced amounts of P. medicaginis DNA detected when compared with amounts detected from inoculations with P. medicaginis alone [103].
Resistance to root rot diseases is generally imparted by more than one gene and is referred to as quantitative resistance. This type of resistance provides partial and durable resistance to a range of pathogen species in different crops [104]. Because root rot resistance is often quantitative, most of the genes underlying this trait are difficult to introgress into modern-type cultivars while maintaining field and market desirable agronomic and quality traits.
Applying genetics to improve resistance
Identification of quantitative trait loci (QTL) in carefully designed genetic studies has been a major approach to study quantitative resistance. QTL studies are used to understand epistatic and environmental interactions, race-specificity of partial resistance loci, interactions between pathogen biology, plant development and biochemistry, and the relationship between qualitative and quantitative loci. QTL mapping also opens up the possibility of positional cloning of partial resistance genes and their subsequent use in marker-assisted selection (MAS) of complex disease resistance characters [105]. MAS is the process of using morphological, biochemical, or DNA markers as indirect criteria for selecting agriculturally important traits in crop breeding [106].
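To make the MAS idea concrete, here is a toy sketch (the marker names, alleles and breeding lines are invented for illustration and do not come from any cited study) of selecting lines on their genotype at resistance-linked markers rather than on phenotype:

```python
# Toy marker-assisted selection: all marker names, alleles and lines below are
# hypothetical, used only to illustrate selecting on DNA markers rather than phenotype.
lines = {
    "Line-01": {"QTL-A": "R", "QTL-B": "R"},  # resistance-linked allele at both loci
    "Line-02": {"QTL-A": "R", "QTL-B": "s"},
    "Line-03": {"QTL-A": "s", "QTL-B": "R"},
    "Line-04": {"QTL-A": "s", "QTL-B": "s"},
}

def select_by_markers(genotypes, required=("QTL-A", "QTL-B"), allele="R"):
    # Keep only lines carrying the resistance-linked allele at every required marker.
    return [name for name, geno in genotypes.items()
            if all(geno[marker] == allele for marker in required)]

print(select_by_markers(lines))  # -> ['Line-01']
```

In a real program the genotype table would come from molecular marker assays, and selection for a quantitative trait would typically weigh estimated marker effects rather than require every favorable allele.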
Conventional plant breeding and genetic engineering are known to have introgressed disease resistance into several crops. Conventional breeding demands significant time and effort in field trials. Recent advances in manipulating resistance include protein and RNA-mediated resistance. RNA silencing is a process that can induce mRNA degradation or translation inhibition at the post-transcriptional level. It can also induce epigenetic modification at the transcriptional level [107].
Regardless of the method, informed deployment of major resistance traits will enable the production of crop varieties with effective resistance [108]. However, the development of resistant cultivars needs constant evaluation as pathogens evolve. For example, the use of cultivars with different resistance genes to Leptosphaeria maculans was suggested to lead to a different spectrum of virulent isolates in oilseed rape production [109-111].
Conclusion
This work presented a review of root rot diseases, which are a great threat to many crop systems around the world. Bacteria, viruses, oomycetes and fungi are the main causal agents and can act as primary pathogens or in combination with other pathogens in both field and greenhouse settings. Bacterial stem and root rot affects many crops and is common in storage, in the field and in seedbeds. Viral root rots are even less common and were reported to be a problem in cassava production. Oomycetes and fungi are the most frequent yield reducers. Reported oomycetes are Aphanomyces spp., Pythium spp., and Phytophthora spp. Several fungi cause root rot, including H. annosum, A. ostoyae and P. sulphurascens, which have been reported as problems for tree crops, as well as the broader host range pathogens Rhizoctonia spp. and Fusarium spp.
Beneficial interactions such as legume-rhizobial and plant-mycorrhizal relationships can improve the ability of a root system to withstand stress as well as provide a boost to the plant immune system. Some microbes can also inhibit pathogens, such as A. euteiches, which is known to be suppressed by a vesicular-arbuscular mycorrhizal fungus. Knowledge of plant-microbe and microbe-microbe interactions can also be the basis of biocontrol research.
Due to the high environmental influence, broad range of hosts, hidden underground symptoms and overwintering structures of many root rot pathogens, root rot disease control and management is complex and hard to achieve. Chemical treatment is generally only available as a preventive seed treatment for some root rots or as a way to kill the green bridge between crops. Therefore, the adoption of resistant or tolerant varieties is still the most promising tool to control root rot. As there is no complete resistance for major diseases such as Aphanomyces and Fusarium root rots due to the quantitative nature of this trait, breeding for it is very complex. Greenhouse assays and field screenings can be expensive, and the use of markers becomes a great tool to identify sources of resistance. Reliable markers such as SNPs are still lacking to develop a research foundation that will provide an understanding of the genetic mechanisms underlying resistance as well as to be used in MAS.
"year": 2017,
"sha1": "0c22be76647cf015b9740fc5883a6fefe1cbb61f",
"oa_license": null,
"oa_url": "https://doi.org/10.19080/artoaj.2017.05.555661",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "073d1293a3686448c16fe8809ab2e67382745c98",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Preparation of gourmet jam made of red fruits with ginger: physical-chemical, microbiological and sensorial characterization
Extra gourmet jams serve a demanding public in terms of quality, as this type of jam does not have preservatives, stabilizers or gums in its formulation. The aim of this work was the physical-chemical, microbiological and sensory characterization of a strawberry and raspberry jam with ginger. Strawberry and red and black raspberry pulps, sources of nutrients and vitamins, and grated ginger root, known for its anti-inflammatory properties, were used for the preparation of the gourmet jam, in addition to vanilla extract, pectin and sucrose. For the physical-chemical characterization, the content of total soluble solids, pH, moisture and ash were determined, all in duplicate, following the methodology proposed by the Adolfo Lutz Institute. The microbiological analysis of molds and yeasts was performed according to the methodology proposed by Normative Instruction No. 62 (December 29, 2011). For sensory acceptance a 9-point hedonic scale was used, and a 5-point scale for purchase intention. The microbiological results were in line with RDC No. 12 (January 2, 2001) and the sensory evaluation showed that the product has good acceptance and a high percentage of purchase intention. The content of soluble solids was 50%, while the legislation recommends a content from 62% to 65%; consequently, the moisture content was higher than recommended, which is attributed to the low percentage of sucrose used in the formulation in order to avoid crystallization of the final product. Clean label foods are a trend in the consumer market due to their healthy appeal, and the proposed red fruit jam with ginger brings this concept as a priority; however, further studies are suggested on the replacement of sucrose by another sweetening constituent, with the aim of increasing the content of soluble solids and decreasing the moisture content of the final product.
INTRODUCTION
The current scenario, characterized by the growing demand for practical and convenient foods that are mainly healthier and tastier, is an opportunity for innovation that benefits the food industry.
Today's consumers are increasingly interested in knowing and understanding what types of foods they are bringing to their table, what ingredients they contain and what implications these ingredients will have on their health. Many additives that play an important role in making industrialized food on a large scale are being rejected by this consumer profile.
Clean label foods enter the scene to serve this new consumer. This food trend emerged about ten years ago in Europe and the United States, driven by the increasing desire of consumers to acquire a healthier lifestyle (BLUM et al., 2012).
These foods are formulated with special care, having in their composition only natural ingredients, i.e., free from artificial additives, and a label with simple, easy-to-understand ingredients.
An important effort is required in the selection of the appropriate raw material and manufacturing technology to obtain quality food, a safe food, free from chemical, physical and microbiological contamination, and with good sensory characteristics.
The Brazilian Food Law defines fruit jam as "a product obtained by cooking fruits, whole or in pieces, fruit pulp or juice, with sugar and water, concentrated to a gelatinous consistency". The classification adopted by the legislation determines that a jam can be common or extra. Jams considered extra are prepared in a proportion of fifty parts of fresh fruit, or its equivalent, to fifty parts of sugar.
This type of red fruit jam fits into the extra classification, with the "gourmet jam" differential: it is a product of limited production, with a proportion of 68% fruit and 32% sugar, high quality raw material, unique characteristics, a "Premium" positioning, and added value.
Fruit jam is a product with good sensory acceptance and high added value (gourmet), in a market that has been growing in search of processed products of excellent nutritional quality.
Berries are a great source of nutrients, vitamins and minerals. A balanced diet characterized by the consumption of red fruits, combined with physical activity, protects the body from many diseases. The habit of consuming red fruits can help prevent many conditions, such as cardiovascular disease, stroke, cancer, stomach disorders and cystitis, prevents premature skin aging, and provides anti-inflammatory benefits. Their consumption improves the immune system, making the body more resistant to these diseases. The redder the fruits are, the richer they are in phenolic and mineral compounds.
They are also a source of calcium, phosphorus, potassium, fiber, and vitamins A and C (ascorbic acid).
The term berry fruits refers to fruits such as strawberries, raspberries, blueberries and blackberries. They have antioxidant power, conferred by the phenolic compounds present in them, in amounts that vary from species to species.
The main characteristic, the color, varies from red to blue, due to the presence of natural pigments known as anthocyanins, which are soluble in water and are distributed in plant tissues (GIUSTI;JING, 2007).
The strawberry plant is a perennial, creeping plant belonging to the Rosaceae family and the genus Fragaria (Gomes, 2007). Its fruit is considered a pseudo-fruit, non-climacteric, of bright red color due to the presence of anthocyanins; the slightly acidic flavor corresponds to citric and malic acids (Silva, 2006). Strawberries are rich in vitamin C, a very important vitamin for the human organism, as it plays a fundamental role in the development and regeneration of muscles, skin, teeth and bones, the formation of collagen, the regulation of body temperature, the production of hormones, and metabolism in general (Andrade et al., 2002).
The active substances present in the fruits act in the prevention and/or cure of many diseases, mainly through their diuretic effect, their anti-inflammatory activity in rheumatism and gout, their antioxidant action (from phenolic compounds), and their ability to reduce susceptibility to infections (LIMA, 2014).
The raspberry belongs to the family Rosaceae, genus Rubus. Raspberry is among the main foods with functional properties that have already been experimentally related to beneficial effects on cardiovascular diseases, atherosclerosis, and certain types of cancer, obesity, aging and neurodegenerative diseases (Santos et al., 2011).
The phenolic compounds and anthocyanins present in raspberries vary according to their color: the darker the fruit, the more phenolic compounds it contains; these contribute to protection against degenerative diseases. The antioxidant activity is responsible for combating free radicals, which are produced in abundance by physiological processes and as a result of external factors (MARCHI, 2015).
Ginger, whose scientific name is Zingiber officinale Roscoe, is an herbaceous plant of Asian origin that reaches 1.50 meters in height. Traditional and contemporary medicine use ginger; it is a spice whose rhizome is widely marketed due to its industrial and food use, especially as a raw material for the manufacture of drinks, perfumes and confectionery products, such as breads, cakes, cookies and jams (ELPO, 2004).
Ginger has in its chemical composition volatile compounds (terpenes), non-volatile compounds (phenolic compounds and alkaloids), extractable oleoresins, fats, waxes, carbohydrates, vitamins and minerals. The rhizomes contain a potent proteolytic enzyme called zingibain (SILVA NETO, 2012). Ginger has several active components, with 115 described; among these, the phenolic compounds gingerol and shogaol have been widely studied for different properties, such as antipyretic, analgesic, angiogenesis-inhibiting and immunomodulatory activities.
Several studies indicate that the compounds found in ginger are highly effective in relieving the symptoms of various diseases. Ginger has been used for centuries due to its anti-inflammatory properties.
The health benefits of ginger, preferably fresh, are mainly due to the presence of phenolic compounds, responsible for its pungent flavor. These are the gingerols (BENZIE; WACHTEL-GALOR, 2011).
Given the above, the objective of this work was to develop a formulation of gourmet jam made of red fruits and ginger and to characterize it in terms of microbiological, physical-chemical and sensory parameters, assessing its potential in the Brazilian consumer market in order to provide consumers with more Clean Label food alternatives.
In this way, a gourmet red fruit jam was developed to serve this new market niche, which is concerned with offering products with clean labels, paying special attention to current consumer requirements.
MATERIAL AND METHODS
The jam was made at the company Sweet Stuff (gourmet jams), located in the city of Xanxerê, SC (Brazil), and the physical-chemical, microbiological and sensory analyses were carried out in the didactic laboratories of the Faculty of Food Technology of SENAI, Chapecó, SC (Brazil). The fruit, the raw material used in the preparation of the red fruit jam, was acquired from the organic garden owned by the company.
The main steps in the processing of red fruit jam with ginger were followed as shown in Figure 1. The jam was made in the proportion of 68%/32% (pulp/sucrose). The sugar used was demerara, as it has greater sweetening power and offers more health benefits from the nutrients present in it. The mixture was cooked in a stainless steel pan, with continuous manual stirring, until reaching a soluble solids concentration of 50 ºBrix, measured with a refractometer. The jams were filled into glass containers with a capacity of 320 grams. Table 1 shows the formulation of red fruit jam with ginger.
Table 1. Formulation of red fruit jam with ginger (Source: Leteller, Laetitia, 2014):
Lemon (mL): 30
Ginger (mL): 5

The jam pH was analyzed using the potentiometric method, with the meter previously calibrated (standard solutions pH 4 and 7); moisture was determined by the loss of mass of the sample in an oven at 105 ± 2 ºC, in which water and volatile substances were removed; total soluble solids (ºBrix) were measured in a portable refractometer (58-92 ºBrix range); water activity (Aw) in an Aqualab 4TEV apparatus; and ash by muffle incineration at a temperature of 500-600 ºC. All analyses were performed in duplicate according to the methods recommended by the Adolfo Lutz Institute (2008). The standards for the microbiological analysis of molds and yeasts were based on RDC No. 12 (January 2, 2001). For sensory acceptance, the 9-point hedonic scale method was used ("1" = dislike extremely, "5" = neither like nor dislike, "9" = like extremely), with purchase intention on a five-point scale. The sensory evaluation was carried out with 50 people of both sexes, aged between 16 and 50 years, among professors and students of the institution SENAI-SC (Brazil). The jam sample was served in a disposable cup coded with random three-digit numbers.
The samples were served accompanied by a piece of toast and a 100 mL glass of water to cleanse the palate during the evaluation.
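The gravimetric determinations described above reduce to simple mass ratios. A minimal sketch of the calculations (all masses below are invented placeholders, not the study's measurements):

```python
# Gravimetric calculations for the oven-drying and muffle-incineration assays.
# All masses (in grams) below are illustrative assumptions, not measured values.

def moisture_pct(m_wet: float, m_dry: float) -> float:
    # Moisture (%): mass lost as water/volatiles at 105 C over the initial sample mass.
    return (m_wet - m_dry) / m_wet * 100

def ash_pct(m_sample: float, m_ash: float) -> float:
    # Ash (%): incineration residue at 500-600 C over the initial sample mass.
    return m_ash / m_sample * 100

# Duplicate determinations, as in the methodology (mean is reported).
duplicates = [(5.000, 2.950), (5.000, 2.940)]
moisture = sum(moisture_pct(w, d) for w, d in duplicates) / len(duplicates)
print(f"moisture = {moisture:.2f}%")  # -> moisture = 41.10%
```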
PHYSICOCHEMICAL ANALYSIS
The results of the physical-chemical analysis of the red fruit jam are shown in table 2.
The minimum total soluble solids content recommended by law for extra jam (% w/w) is between 62% and 65%. In the gourmet jam formulation it was 50 ºBrix, close to the value of 49.90% found by Jorge et al. for a pepper jam destined for the "gourmet" market.
In the manufacture of the red fruit jam, sucrose (in the proportion of 35%) was used, which, in the acidic environment created by the addition of lemon, undergoes a hydrolysis process and is partially broken down into glucose and fructose (inversion); this helps to avoid the crystallization that can occur during storage (TORREZAN, 1998). However, when a final concentration above 65% of total soluble solids is reached, it is necessary to replace part of the sucrose with corn glucose or inverted sugar to avoid crystallization. For an increase in soluble solids, it would be necessary to increase the proportion of sucrose without raising the sweetness too much, which would de-characterize the product.
The pH value was 3.39, which means regular acidity for gelation to occur and adequate consistency of the jam without the addition of acidulants. In order to obtain better gelation, the final pH should be between 3.0 and 3.2, the optimum acidity range. For most fruits, this pH is not reached in the fruit, pectin and sugar system, requiring acidification, preferably using organic acids that are natural constituents of fruits, such as citric acid. However, the pH slightly above the recommended range did not affect the final quality of the product, and the formulation obtained good sensory acceptance.
Other authors have also found pH values different from this optimal value, such as Caetano et al.

In the preparation of the jam, the legislation establishes a maximum moisture of 38% (w/w). According to the result of the physical-chemical evaluation (Table 2), a higher value was observed.
The presence of sugar increases the osmotic pressure of the medium and, consequently, decreases the water activity of the food, also removing the water layer that protects the pectin molecules, enabling the formation of gel. By increasing the content of soluble solids, water activity would decrease, thus obtaining greater stability (TORREZAN, 1998).
MICROBIOLOGICAL ANALYSIS
The microbiological analysis for molds and yeasts (incubation at 25 ± 1 ºC for 5 days) met the standards of RDC No. 12 (January 2, 2001); the sample was within the limits established by current legislation.
SENSORY ANALYSIS
In the acceptability test, 50 people were interviewed, 36% men and 64% women. The frequency with which participants include jam in their diet is shown in Figure 1, which indicates regular consumption. The most consumed flavors are grape (42%), followed by strawberry (30%). The acceptability index (AI) of red fruit jam with ginger was over 70% (Figure 2). According to Dutcosky (2011), for a product to have good acceptability, the AI must be equal to or greater than 70%. Thus, the jam formulation elaborated in the present work presents values above this threshold. The purchase intention (Figure 3) presented an average between "possibly buy" and "certainly buy", showing that the product was well accepted by consumers and is a viable alternative as a new product in the Brazilian market. The results indicate high acceptability of the product, showing that it can be produced on an industrial scale.
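The acceptability index is simply the mean hedonic score expressed as a percentage of the scale maximum. A short sketch (the score list is an invented example, not the panel's raw data):

```python
# Acceptability index (AI) from 9-point hedonic scores.
# The score list is a made-up example, not the study's raw sensory data.

def acceptability_index(scores, scale_max=9):
    # AI (%) = mean score / maximum possible score * 100.
    return sum(scores) / len(scores) / scale_max * 100

scores = [8, 7, 9, 6, 8, 7, 9, 8, 5, 7]
ai = acceptability_index(scores)
# Criterion used in the text: AI >= 70% indicates good acceptability.
print(f"AI = {ai:.1f}% -> {'good acceptability' if ai >= 70 else 'low acceptability'}")
```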
CONCLUSION
The good results met the objective of the work: a jam with differentiated sensory characteristics aimed at the gourmet market was obtained, and the fact that the product was well accepted demonstrates great market potential.
The physicochemical requirements of pH, soluble solids and moisture presented values outside the established standards, factors that should be reviewed, perhaps through the elaboration of new formulations, taking care not to mischaracterize the jam as a gourmet product with differentiated characteristics.
The sample met the microbiological standards established by current legislation. The product under study can be produced on an industrial scale, making it an alternative in the clean label product line.
"year": 2020,
"sha1": "1e4f6796f094b06605cbf16de793c18af8fcd713",
"oa_license": null,
"oa_url": "https://www.brazilianjournals.com/index.php/BRJD/article/download/11050/9268",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7ac6249f260405f3e3078446972ea15e548dc3c9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Optimal estimation of parameters of an entangled quantum state
Two-photon entangled quantum states are a fundamental tool for quantum information and quantum cryptography. A complete description of a generic quantum state is provided by its density matrix; the technique allowing experimental reconstruction of the density matrix is called quantum state tomography. Reconstructing the density matrix of an entangled state requires a large number of measurements on many identical copies of the quantum state. An alternative way of certifying the amount of entanglement in two-photon states is the estimation of specific parameters, e.g., negativity and concurrence. If we have a priori partial knowledge of our state, it is possible to develop estimators for these parameters that require fewer measurements than full density matrix reconstruction. The aim of this work is to introduce and test different estimators of negativity and concurrence for a specific class of two-photon states.
Introduction
The amount of entanglement plays a crucial role in quantum information [1-3]. Therefore, the characterization and quantification of entanglement in a quantum system is a crucial issue for the development of quantum technologies.
There exist two ways of measuring the amount of entanglement: the first is performing a complete quantum state tomography [2] and evaluating parameters such as negativity [4] or concurrence [5] from the reconstructed density matrix [1,2], while the second is based on estimating such parameters with an algorithm based on optimal measurements [6], exploiting some a priori knowledge of the quantum state.
In our case we implemented a quantum optical system that, by means of spontaneous Parametric Down-Conversion (PDC) [7,8], generates two entangled photons in the singlet state |ψ⁻⟩ = (1/√2)(|HV⟩ − |VH⟩), where H is the horizontal component of polarization and V is the vertical one; the notation |XY⟩ is a shorthand for the tensor product |X⟩ ⊗ |Y⟩. With this a priori knowledge we can perform the estimation of the above-mentioned parameters. We test two estimators for each parameter: an optimal one and a non-optimal one. An estimator is optimal if the smallest statistical uncertainty associated with it coincides with the limit imposed by the Cramér-Rao bound [9], corresponding to the minimum achievable uncertainty.
Procedure
The experimental setup for singlet-state preparation is shown in Fig. 1: it hosts a 9 W CW laser at 532 nm pumping a Ti:Sapphire crystal in an optical cavity. At the exit of this cavity, the mode-locked laser generated has a wavelength of 808 nm with a FWHM of 7 nm. This laser is frequency-doubled by means of second harmonic generation (SHG) [10] and then injected into a BBO (β-barium borate) crystal where Type-II PDC occurs. The outgoing photons form two cones: one with horizontal polarization and the other with vertical polarization. We are interested in photons belonging to the intersections of these two cones, because these photon pairs are in a superposition of the singlet state |ψ⁻⟩ and the triplet state |ψ⁺⟩. In order to compensate the walk-off between the two polarizations induced by the birefringence of the BBO and select only the singlet state without any unwanted relative phase, we put another BBO crystal in both photon paths. In the second part of the setup, both optical branches contain a quarter-wave plate (λ/4), a half-wave plate (λ/2), a polarizing beam splitter and a Fiber Coupler (FC) preceded by a bandpass filter: after being projected onto different polarization bases, the photons are filtered by the bandpass filters (3 nm and 20 nm FWHM) and fiber coupled. (Figure 1: Experimental setup for the two-photon maximally entangled state.)
Fiber-coupled photons are addressed to two Silicon Single-Photon Avalanche Diodes (Si-SPADs), whose outputs are sent to coincidence electronics. In order to test the fidelity of the generated entangled state with respect to the expected one, we perform a quantum state tomography and calculate the Uhlmann fidelity [2] between the reconstructed and theoretical density matrices of the two-photon state.
We also prepare and test the completely decoherent mixture ρ_mix = ½(|HV⟩⟨HV| + |VH⟩⟨VH|), obtained by adding a birefringent crystal on one of the two photon paths, as shown in Fig. 2. This birefringent crystal has its optical axis orthogonal to the photon propagation direction, and its thickness of 2.7 mm, much greater than the photon wavelength, introduces only temporal decoherence into the two-photon state. (Figure 2: Experimental setup for the two-photon mixed state, with a 2.7 mm thick birefringent crystal placed on one photon path.)
By using a fraction p of the singlet-state data and a fraction 1 − p of the decoherent-state data, we create a statistical mix of data simulating a generic partially-decoherent state described by the density matrix ρ(p) = p |ψ⁻⟩⟨ψ⁻| + (1 − p) ρ_mix (1), where p ∈ [0, 1].
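The family of states in Eq. (1) can be written out explicitly as 4×4 matrices. A minimal numpy sketch (not the experiment's analysis code), assuming the basis ordering |HH⟩, |HV⟩, |VH⟩, |VV⟩:

```python
import numpy as np

# Partially-decoherent state of Eq. (1):
#   rho(p) = p * |psi-><psi-| + (1 - p) * rho_mix,  p in [0, 1].
# Basis ordering assumed here: |HH>, |HV>, |VH>, |VV>.

def singlet():
    psi = np.zeros(4)
    psi[1], psi[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)  # (|HV> - |VH>)/sqrt(2)
    return np.outer(psi, psi)

def rho_mix():
    # Completely decoherent mixture: (|HV><HV| + |VH><VH|)/2.
    return 0.5 * (np.diag([0., 1., 0., 0.]) + np.diag([0., 0., 1., 0.]))

def rho(p):
    return p * singlet() + (1 - p) * rho_mix()

# At p = 1 the state is the pure singlet; at p = 0 it is the diagonal mixture.
print(np.isclose(np.trace(rho(0.3)), 1.0))              # True: unit trace
print(np.isclose(np.trace(rho(1.0) @ rho(1.0)), 1.0))   # True: singlet is pure
```

The purity Tr ρ² drops from 1 at p = 1 to 1/2 at p = 0, reflecting the loss of coherence.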
Quantum tomography validates our a priori knowledge of the state: knowing that our state density matrix is of the form (1), we can implement estimator algorithms for the parameters that we want to measure. We choose the {|+⟩, |−⟩} polarization basis, with |±⟩ = (1/√2)(|H⟩ ± |V⟩), for the measurement sets of the estimators, because all the estimators are based on projections onto this basis. With the same setups of Fig. 1 and Fig. 2 we perform coincidence measurements in this polarization basis.
Data Analysis
The preliminary results of the density matrix reconstruction with quantum state tomography are shown in Fig. 3 and Fig. 4. The Uhlmann fidelity is defined as F(ρ, σ) = (Tr √(√ρ σ √ρ))², and its values for ρ_{|ψ⁻⟩} and ρ_mix are F = 0.975 and F = 0.985, respectively.
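The Uhlmann fidelity can be computed numerically via an eigendecomposition of the Hermitian matrices involved. A self-contained sketch (not the authors' analysis code):

```python
import numpy as np

# Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2,
# with the matrix square root taken by eigendecomposition (valid for
# Hermitian positive-semidefinite matrices such as density matrices).

def psd_sqrt(m):
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0, None)              # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def uhlmann_fidelity(rho, sigma):
    s = psd_sqrt(rho)
    inner = psd_sqrt(s @ sigma @ s)
    return float(np.real(np.trace(inner)) ** 2)

# Sanity checks: identical states give F = 1; orthogonal pure states give F = 0.
rho_a = np.diag([1.0, 0.0])
rho_b = np.diag([0.0, 1.0])
print(uhlmann_fidelity(rho_a, rho_a))  # 1.0
print(uhlmann_fidelity(rho_a, rho_b))  # 0.0
```

For a pure state and the maximally mixed single-qubit state, the same function returns F = 1/2, matching the analytic value.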
After these tomographies, since the values of the two fidelities are close enough to 1, we are confident in our a priori knowledge of the state, so we perform measurements in the {|+⟩, |−⟩} basis in order to estimate negativity and concurrence, both for the maximally-entangled state and for the decoherent state. For each parameter we use two estimator algorithms: a non-optimal one and an optimal one. Optimal means that the theoretical variance of the estimated parameter saturates the quantum Cramér-Rao bound, i.e., the minimum variance achievable. Here we introduce the parameters that we want to estimate, together with the corresponding optimal and non-optimal estimators.
Negativity is defined by N(ρ) = ‖ρ^{T_A}‖₁ − 1, where ρ^{T_A} is the partial transpose of ρ with respect to subsystem A and ‖X‖₁ = Tr √(X†X) is the trace norm of the operator X. When ρ describes a completely separable state like the one in Fig. 4.c, N(ρ) is equal to 0, while for a maximally entangled state the negativity is 1.
We define a non-optimal estimator εN₁ (Eq. (4)) and an optimal estimator εN₂ (Eq. (5)), both expressed in terms of the probabilities P(•) of the measured events. Concurrence is defined by C(ρ) = max(0, λ₁ − λ₂ − λ₃ − λ₄), where the λᵢ, in decreasing order, are the eigenvalues of the matrix R = √(√ρ (σy ⊗ σy) ρ* (σy ⊗ σy) √ρ), and σy is the Pauli y matrix.
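Both parameters can be evaluated numerically from a two-qubit density matrix. A sketch (not the authors' code) using the normalization N = ‖ρ^{T_A}‖₁ − 1, so that the singlet gives N = 1, and the standard Wootters construction for concurrence:

```python
import numpy as np

# Negativity via the partial transpose and concurrence (Wootters) for
# two-qubit states. Basis ordering assumed: |HH>, |HV>, |VH>, |VV>.

def partial_transpose(rho):
    r = rho.reshape(2, 2, 2, 2)                    # indices: a, b, a', b'
    return r.transpose(2, 1, 0, 3).reshape(4, 4)   # transpose subsystem A

def negativity(rho):
    w = np.linalg.eigvalsh(partial_transpose(rho))
    return float(np.sum(np.abs(w)) - 1)            # trace norm minus one

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    # lambda_i = sqrt of eigenvalues of rho * (sy x sy) rho* (sy x sy)
    lam = np.sqrt(np.clip(np.real(
        np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)), 0, None))
    lam = np.sort(lam)[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

psi = np.array([0, 1, -1, 0]) / np.sqrt(2)         # singlet |psi->
rho_singlet = np.outer(psi, psi)
print(round(negativity(rho_singlet), 6), round(concurrence(rho_singlet), 6))  # 1.0 1.0
```

For any separable state, both functions return 0, consistent with the Peres-Horodecki criterion.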
Because concurrence and negativity have the same theoretical value for this class of states, we can use the same estimators previously introduced in Eqns (4) and (5).
The results for the estimation of εN₁ are still under evaluation, while those for εN₂ are shown in Fig. 5. The parameter p is defined by Eq. (1): at p = 0 we have the mixed state, while at p = 1 we have the maximally entangled singlet state. In our plots this last state does not lie exactly at p = 1, because of a small decoherence due to experimental imperfections; as previously explained, we can ignore this discrepancy because the Uhlmann fidelity of ρ_exp is close enough to 1.
We are now working to evaluate intermediate points with 0 < p < 1, corresponding to partially-decoherent two-photon states, together with their uncertainties.
Conclusions
We performed an experiment comparing two different parameters (negativity and concurrence) able to quantify the amount of entanglement in specific two-photon states. Negativity and concurrence share the same estimators, and we computed two of them: an optimal one and a non-optimal one. In all cases, with our preliminary data, the optimal estimator values and their experimental uncertainties are in good agreement with the theoretical predictions. The optimal estimator shows an uncertainty that is compatible with the minimum uncertainty allowed by the Cramér-Rao bound in the case of the maximally-entangled state, while the uncertainty appears to be slightly larger for the separable one. The uncertainties on the non-optimal estimator are currently being evaluated. Furthermore, we are evaluating estimators and uncertainties for statistical mixtures of the singlet state and the decoherent two-photon state, in order to have a good sampling of states with density matrix of the form in Eq. (1). We are working to apply the same technique to other parameters such as log-negativity [4] and quantum geometric discord [11], and also to extend it to the case of non-maximally-entangled states. We are already able to prepare such a non-maximally-entangled state experimentally, with Uhlmann fidelity F = 0.935. Even though the work is still in progress, we believe that our technique, being able to discriminate between optimal and non-optimal estimators of entanglement parameters, will be of widespread use in the quantum communication and computation frameworks, as well as for quantum metrology and, in general, all entanglement-based quantum technologies.
"year": 2017,
"sha1": "2538e558c4b21684ab7ca9de52cc924327245a71",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/841/1/012033",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e00cf7191fa2fda41407fa766adfd423f0f6979c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
DeepCORE: An interpretable multi-view deep neural network model to detect co-operative regulatory elements
Gene transcription is an essential process involved in all aspects of cellular functions with significant impact on biological traits and diseases. This process is tightly regulated by multiple elements that co-operate to jointly modulate the transcription levels of target genes. To decipher the complicated regulatory network, we present a novel multi-view attention-based deep neural network that models the relationship between genetic, epigenetic, and transcriptional patterns and identifies co-operative regulatory elements (COREs). We applied this new method, named DeepCORE, to predict transcriptomes in various tissues and cell lines, which outperformed the state-of-the-art algorithms. Furthermore, DeepCORE contains an interpreter that extracts the attention values embedded in the deep neural network, maps the attended regions to putative regulatory elements, and infers COREs based on correlated attentions. The identified COREs are significantly enriched with known promoters and enhancers. Novel regulatory elements discovered by DeepCORE showed epigenetic signatures consistent with the status of histone modification marks.
Introduction
Gene transcription displays complicated spatiotemporal patterns that vary between tissue and cell types, developmental stages, disease phenotypes, and environmental exposures [1,2]. Such variations are regulated by a set of mechanisms that induce or repress gene transcription as part of a large network [3,4]. Many factors contribute to gene transcription regulation, such as genetic alterations [5,6], epigenetic changes [7,8], and chromatin structure [9-11]. Deciphering and cataloging these regulatory codes is a grand challenge.
Computational mining of multi-omics data is a promising approach to investigate the mechanisms of gene transcriptional regulation. As early attempts, several models used genetic sequence information such as transcription factor binding sites (TFBS) to predict gene transcription levels [12-18]. However, relying on genetic sequences, which cannot capture tissue-specific information, is a major limitation of these models. Epigenetic features, such as histone post-translational modifications (hPTMs), have been introduced to address this issue. DeepChrome [19] is one of the earliest deep learning methods to model the relationship between epigenetic and transcriptional profiles. It retrieves the hPTM signals in the ± 5kbps region around the transcription start site (TSS) of a gene, uses a convolutional neural network (CNN) to extract local features, and passes these features to a feedforward neural network (FNN) to make binary predictions of gene transcription levels. ExPecto [20] expands the TSS-flanking region to 40kbps, includes ChIP-seq data of hundreds of TFs as input, and predicts gene expression in continuous values. Although these models reported high accuracy in predicting gene transcription levels, they do not identify regulatory elements (REs) in the genome that are essential to understanding the regulatory mechanisms.
Because deep neural networks (DNNs) are often considered a black box, extracting biological meaning from these models can be challenging. Recently, several algorithms have been developed to interpret and visualize patterns learned in DNNs [21-23]. DeepChrome summarized the hPTM patterns coded in the CNN model, which were consistent with known active and repressive marks. However, a high-level summary cannot identify and locate REs. Furthermore, epigenetic signals highly depend on genetic features. For example, chromatin structure changes involving TFBS will have a larger impact on gene transcription than those outside TFBS. In this study, we present a novel method to address this knowledge gap and model co-operative regulatory elements (COREs).
This new method, named DeepCORE, uses a multi-view architecture to integrate genetic and epigenetic profiles in a DNN. It captures short-range and long-range interactions between REs through bidirectional long short-term memory (BiLSTM). It leverages the attention mechanism to pinpoint the most informative regions harboring REs and to enhance the interpretability of the model. The output of DeepCORE includes the predicted gene transcription level, the locations of REs in the genome, and the correlations among multiple REs. We applied DeepCORE to various tissues and cell lines and showed that it achieves significantly higher accuracies than existing state-of-the-art methods. The DeepCORE model generalizes well across samples and identifies COREs at high resolution. These putative REs are enriched with known promoters and enhancers.
Overall design and data sets
In the ENCODE [24] project and the Roadmap Epigenomics project (REMC) [25], we searched for samples that had transcriptomic data (RNA-seq) and epigenetic data (ChIP-seq of H3K4me, H3K4me3, H3K27ac, H3K9me3, and H3K27me3). We randomly selected two of these samples (E061 and E071) and used them for algorithm development and parameter tuning. We randomly selected an additional 23 samples to systematically evaluate the performance of DeepCORE, DeepChrome, and ExPecto. To compare with Xpresso [26] predictions, we trained DeepCORE on two more samples (E116 and E123) that were tested in the Xpresso study. These samples included cancer cell lines, embryonic stem cell-derived cell lines, primary cell cultures, and primary tissues (Supplementary Table 1).
DeepCORE has two components (Fig. 1A). The first component is a deep neural network (DNN) that predicts the transcription level of a gene based on its genetic and epigenetic features. The second component is an interpreter that analyzes the attention matrices in the DNN to discover COREs.
For a given gene, DeepCORE focuses on the ± 5kbps region surrounding the TSS. To derive genetic features, we extracted the ± 5kbps nucleotide sequence for each gene and converted it into a one-hot encoding format. This gives us a genetic feature matrix with four rows corresponding to the nucleotides and 10,000 columns corresponding to genomic positions, with the value in each cell indicating the presence or absence of a specific nucleotide. It is worth noting that the human reference genome sequence is used, and genetic alterations are not considered.
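The one-hot encoding described above maps each nucleotide position to a 4-dimensional indicator column. A minimal sketch (the real pipeline reads the ±5kbps window from the reference genome; the row ordering A/C/G/T here is an assumption):

```python
import numpy as np

# One-hot encoding of a nucleotide sequence into a 4 x L genetic feature
# matrix: rows = A, C, G, T (assumed ordering); columns = positions.

NUCLEOTIDES = "ACGT"

def one_hot(seq):
    m = np.zeros((4, len(seq)), dtype=np.int8)
    for j, base in enumerate(seq.upper()):
        i = NUCLEOTIDES.find(base)
        if i >= 0:                 # unknown bases (e.g. N) stay all-zero
            m[i, j] = 1
    return m

mat = one_hot("ACGTN")
print(mat.shape)    # (4, 5)
print(mat[:, 0])    # [1 0 0 0] -> 'A'
```

For the full ±5kbps window the resulting matrix is 4 × 10,000, matching the dimensions given in the text.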
To derive epigenetic features, we obtained ChIP-seq data of 25 tissue or cell line samples from the ENCODE [24] project and the REMC [25] project. The ChIP-seq data contained normalized read counts measuring five hPTMs, including known transcription activator marks (H3K4me, H3K4me3 and H3K27ac) and repressor marks (H3K9me3 and H3K27me3) [27]. Given a gene, we examined the ± 5kbps TSS-flanking region and recorded the position-specific normalized read count for each histone modification mark. These data from each sample were organized into an epigenetic feature matrix with five rows corresponding to hPTMs and 10,000 columns corresponding to genomic positions.
The ENCODE and REMC projects also contained RNA-seq data. For each sample, we obtained the Reads Per Kilobase of transcript per Million mapped reads (RPKM) value for each gene. These data were organized into a single-column vector with rows corresponding to genes. For each sample, we removed genes with missing RNA-seq values or missing ChIP-seq values across all five histone marks.
Multi-view attention-based DNN
The DNN architecture consists of two separate paths representing the genetic view and the epigenetic view, respectively (Fig. 1B). Each path starts with a CNN layer consisting of N_CNN filters of size K and stride 1. The output of the CNN layer is passed to a ReLU function connected to max pooling over non-overlapping intervals of length p. These steps produce a vector of size (N_C − K + 1)/p for each filter f_i, i ∈ {1, …, N_CNN}, encoding sequence motifs in the genetic path and combinations of histone modification patterns in the epigenetic path. To avoid overfitting, dropout [28] with a probability of 0.5 is applied to the max-pooled vector. While the CNN captures local patterns within a genomic region, it does not consider interactions between regions. Since enhancers and promoters separated by thousands of base pairs can interact to regulate gene transcription, DeepCORE uses bidirectional long short-term memory (BiLSTM) networks [29] to capture short-range and long-range dependencies.
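The convolution-plus-pooling stage and its output size (N_C − K + 1)/p can be illustrated with a tiny 1-D example. This is a plain-numpy sketch of the arithmetic, not DeepCORE's actual (GPU) implementation, and the filter values are illustrative:

```python
import numpy as np

# A valid 1-D convolution over N_C positions with filter size K and stride 1
# yields N_C - K + 1 values; ReLU then max-pooling over non-overlapping
# intervals of length p reduces this to (N_C - K + 1) // p values.

def pooled_length(n_c, k, p):
    return (n_c - k + 1) // p

def conv_maxpool(x, kernel, p):
    conv = np.array([np.dot(x[i:i + len(kernel)], kernel)
                     for i in range(len(x) - len(kernel) + 1)])
    conv = np.maximum(conv, 0)                   # ReLU
    n = (len(conv) // p) * p
    return conv[:n].reshape(-1, p).max(axis=1)   # non-overlapping max-pool

out = conv_maxpool(np.arange(10, dtype=float), np.ones(3), p=2)
print(len(out), pooled_length(10, 3, 2))  # 4 4
```

With the paper's 10,000-bp window and, say, K = 10 and p = 50 (hypothetical values), each filter would produce (10000 − 10 + 1) // 50 = 199 pooled features.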
As the input sequences to the BiLSTM get longer, it becomes more difficult for the hidden states to capture the context, leading to decreased performance [30,31]. Furthermore, epigenetic signals are abundantly distributed throughout the human genome, but not all epigenetically modified regions have regulatory functions. To pinpoint the most functionally important elements and capture their local and distal interactions, DeepCORE employs an encoder-decoder [32] with an attention mechanism [33]. The encoder is the BiLSTM model, and the decoder predicts the importance score of the next genomic region based on the importance scores of the regions it has already predicted. This allows the prediction to be made based on a series of important hidden states from the encoder instead of only the last state. Furthermore, DeepCORE highlights the most informative regions contributing to gene transcription regulation by replacing the default softmax function in the attention model with a sparsemax function [34], which induces a sparse probability distribution. The learnt attention is a vector, of length equal to the number of output nodes from the CNN layer, containing the importance score of each genomic region. DeepCORE then feeds the decoder outputs to a fully connected FNN to predict continuous gene transcription levels.
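Sparsemax (Martins and Astudillo, 2016) is a Euclidean projection onto the probability simplex that, unlike softmax, assigns exactly zero probability to weakly scoring entries, which is the property DeepCORE exploits to sparsify attention. A self-contained numpy implementation of the standard closed-form algorithm:

```python
import numpy as np

# Sparsemax: project a score vector z onto the probability simplex.
# Entries below the threshold tau receive exactly zero probability.

def sparsemax(z):
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cssv      # entries in the support set
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1) / k_max    # threshold
    return np.maximum(z - tau, 0.0)

p = sparsemax([3.0, 1.0, 0.2])
print(p)          # [1. 0. 0.] -- weak scores are zeroed out entirely
print(p.sum())    # 1.0       -- still a valid probability distribution
```

By contrast, softmax on the same scores would give every entry a strictly positive probability; sparsemax is what allows many attention bins to be exactly zero.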
Training DNN
We trained a multi-view attention-based DNN model for each sample. Given a sample, the data were randomly split into disjoint training, validation, and test sets comprising 80%, 10%, and 10% of all genes, respectively. The test set was kept hidden and was used only after hyperparameter tuning and parameter learning were completed, to avoid information leakage. Mean squared error (MSE) was computed as the loss function and fed back to the network through backpropagation. We used the Adam (Adaptive Moment Estimation) optimizer [35] to train the model for 100 epochs. An early stopping criterion (training is stopped if the model performance on the validation set does not improve over 5 epochs) was employed to avoid overfitting. We noted that the performance of all models stabilized before reaching 50 epochs, after which the training was terminated.
The optimization was performed in two stages. In the first stage, the hyperparameters in the DNN model were optimized via grid search (Supplementary Table 2). The optimal configuration was selected based on performance on the validation set. The second stage of optimization was performed on the attention mechanism to achieve sparsity. The parameters in the DNN model for both stages were jointly learned. The model was trained on a Titan Xp GPU donated by the NVIDIA Corporation. The total runtime was recorded while varying the sequence length from 500 bps to 10kbps (Supplementary Fig. 1).
Interpreting attention matrices to discover COREs
The DeepCORE model trained on each sample contains an attention probability matrix with rows corresponding to genes and columns corresponding to 50 bps windows (bins). For each gene, we extracted the tissue-specific attention probabilities of the bins and computed the cumulative distribution function (CDF) of the attention probabilities. We then calculated empirical p-values based on the CDF and applied multiple-comparison correction to derive the false discovery rate (FDR). Bins with FDR < 0.05 indicated genomic regions with significant regulatory function.
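The two steps above, empirical p-values followed by FDR correction, can be sketched as follows. The empirical p-value convention and the Benjamini-Hochberg procedure shown here are illustrative assumptions, not necessarily the authors' exact implementation:

```python
import numpy as np

# Step 1 (assumed convention): the empirical p-value of a bin is the
# fraction of bins with attention >= that bin's attention.
def empirical_pvalues(attn):
    attn = np.asarray(attn, dtype=float)
    return np.array([(attn >= a).mean() for a in attn])

# Step 2: Benjamini-Hochberg rejection at a target FDR level.
def bh_reject(pvals, alpha=0.05):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

attn = np.array([0.90, 0.05, 0.02, 0.02, 0.01])
print(empirical_pvalues(attn))   # most-attended bin gets the smallest p-value

pvals = np.array([0.001, 0.020, 0.800, 0.400])
print(bh_reject(pvals))          # the two smallest p-values survive FDR < 0.05
```

Bins flagged `True` correspond to the "significant bins" carried into the correlation analysis that follows.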
After extracting the significant bins for each gene across all tissues, we obtained a matrix with rows representing different tissue or cell line samples and columns representing the bin probabilities. Pearson's pairwise correlation [36] was then applied to this matrix to estimate correlations between bins and infer interactions of different genomic regions. Blocks of bins that have significantly correlated attention probabilities and are at least 1kbps apart are putative COREs, i.e., regulatory elements that co-operatively modulate gene transcription.
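The bin-by-bin correlation step can be illustrated with a tiny samples-by-bins matrix. The values below are toy numbers, not study data; the real matrix has one row per tissue or cell-line sample:

```python
import numpy as np

# Pairwise Pearson correlation of bin attention probabilities across
# samples (rows = samples, columns = bins), as used to group bins
# into putative COREs.

attn = np.array([
    # bin0  bin1  bin2
    [0.10, 0.12, 0.60],   # sample 1
    [0.30, 0.33, 0.80],   # sample 2
    [0.20, 0.22, 0.20],   # sample 3
    [0.40, 0.41, 0.40],   # sample 4
])

corr = np.corrcoef(attn, rowvar=False)   # bin-by-bin correlation matrix
print(corr.shape)                        # (3, 3)
# Bins 0 and 1 rise and fall together across samples, so they would be
# grouped into one putative CORE; bin 2 does not follow that pattern.
print(corr[0, 1] > 0.99, abs(corr[0, 2]) < 0.1)  # True True
```

In the actual analysis, blocks of highly correlated bins separated by at least 1kbps are reported as COREs.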
For each of the 25 cell line and tissue samples, we built a DeepCORE multi-view model. As expected, samples of similar origins shared similar epigenetic profiles, and those of distinct origins showed different epigenetic profiles (Supplementary Fig. 2). Using these samples, we evaluated the performance of DeepCORE, ExPecto, and DeepChrome in predicting gene expression levels. Across all samples, DeepCORE consistently reported a lower error rate (RMSE) than ExPecto on predicting continuous gene expression levels (Fig. 2C). The best performance of DeepCORE was observed in the E084 sample with an RMSE of 1.65, and the lowest performance was observed in the E006 sample with an RMSE of 2.06. Similarly, DeepCORE consistently reported a higher accuracy than DeepChrome on binary classification (Fig. 2D). On average, DeepCORE outperformed ExPecto and DeepChrome with an improvement of over 10% in most samples.
Recently, another method, Xpresso [26], reported good performance in predicting gene expression levels based solely on genetic sequence. It provided sample-specific predictions for two ENCODE samples, namely E116 and E123. We trained DeepCORE on each of the two samples. Given that the gene expression values used in Xpresso and DeepCORE were on different scales, their RMSEs were not comparable. We therefore evaluated performance based on the r² value, the Pearson correlation coefficient (PCC), and the Spearman correlation coefficient (SCC), which measure the correlation between true and predicted gene expression. For both samples, DeepCORE outperformed Xpresso (r² = 0.67 vs. 0.46, PCC = 0.82 vs. 0.68, SCC = 0.79 vs. 0.68 for the E116 sample, and r² = 0.71 vs. 0.52, PCC = 0.84 vs. 0.72, SCC = 0.81 vs. 0.72 for the E123 sample; Supplementary Fig. 3).
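The three agreement metrics used above can be computed directly from paired vectors of true and predicted expression. A sketch with toy vectors (here r² is taken as the squared Pearson correlation, an assumption; Spearman is Pearson on ranks, ignoring ties for simplicity):

```python
import numpy as np

# Agreement metrics between true and predicted expression vectors:
# Pearson (PCC), Spearman (SCC = Pearson on ranks), and r^2 (PCC squared).

def pcc(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def ranks(x):
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(len(x))   # no tie handling -- toy version
    return r

def scc(x, y):
    return pcc(ranks(x), ranks(y))

y_true = np.array([1.0, 2.0, 3.0, 5.0, 8.0])
y_pred = np.array([1.2, 1.9, 3.5, 4.8, 7.5])
print(round(pcc(y_true, y_pred), 3),
      round(scc(y_true, y_pred), 3),
      round(pcc(y_true, y_pred) ** 2, 3))
```

Because the toy predictions preserve the rank order of the true values exactly, the SCC is 1 even though the PCC is slightly below 1.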
Following the success of DeepCORE in predicting within-sample gene transcription, we tested whether a DNN model trained on one sample performed well on other samples, which indicates the generalizability of the model. We chose four tissues representing very different cell types (E058: keratinocyte, E061: melanocyte, E071: brain hippocampus middle, and E100: psoas muscle). In this analysis, we trained a DNN model using data from one sample and tested it using data from the remaining three samples. We first compared DeepCORE and ExPecto on predicting continuous values of gene expression levels. The results indicated that, in general, cross-sample predictions had lower performance than within-sample predictions for all three methods (Fig. 2E). The only exceptions were the ExPecto models trained for the E071 sample, where cross-sample prediction was better than within-sample prediction. Nevertheless, the RMSE of DeepCORE was on average 27.5% lower than that of ExPecto in cross-sample predictions (mean RMSE = 2.06 vs. 2.85). We then compared DeepCORE and DeepChrome on binary classification (Fig. 2F). The F1-score was 18% higher in DeepCORE than in DeepChrome (mean F1-score = 0.79 vs. 0.671). Overall, the performance of DeepCORE decreased only slightly, by 6%, for cross-sample predictions, while ExPecto and DeepChrome showed large performance reductions of more than 15% and 10%, respectively. These results implied that the patterns captured by DeepCORE likely represent general relationships between genetic, epigenetic, and transcriptional changes.
Using the E061 cell line, we experimented with training a model without BiLSTM. This reduced model reported a lower predictive accuracy than the full model that contained BiLSTM (RMSE = 3.4 vs. 1.8). The attended bins in the reduced model were closer to the TSS compared with the full model (mean distance = 1835 bps vs. 3063 bps). These results demonstrated that BiLSTM detected long-range dependencies, which in turn helped improve the prediction accuracy.
We also tested pooling data from multiple samples, which gave rise to models with higher errors. For example, we trained a model using pooled data from the E096 lung sample and the E071 brain sample. The RMSE of this model was 2.09 based on cross-validation. Conversely, the models trained on each of these two samples separately reported RMSEs of 1.81 and 1.76, respectively.
Large language models based on the transformer architecture have reported unprecedented successes [37-39]. We therefore also experimented with building a transformer model to predict gene expression in the E061 sample. Unfortunately, this model exhibited suboptimal performance compared with DeepCORE (RMSE = 2.4 vs. 1.8, F1-score = 0.74 vs. 0.85; Supplementary Fig. 4). A potential reason might be insufficient training data, which fell several orders of magnitude below the scale used in large language models.
DeepCORE identifies regions with biologically meaningful histone markers
Attended bins receiving non-zero attention probabilities corresponded to genomic regions that contributed to the prediction of gene expression values. We found that hPTMs were present in most attended bins. Using genes from the held-out test set, we found that each gene on average had 33 attended bins containing hPTMs, but only 0.42 attended bins containing no hPTMs (Mann-Whitney test p-value = 9.1×10⁻²⁵, Fig. 3A). We then randomly selected 25 test genes that were transcribed above the median cutoff value and 25 genes transcribed below the median cutoff in the E071 sample. We extracted bins with highly significant attentions and counted the presence of hPTMs in the corresponding regions (Fig. 3B). Among genes with high transcription levels, the attended genomic regions were enriched with H3K4me3 and H3K27ac, which are known marks of active promoters and enhancers that enhance transcription [40,41]. Conversely, the enrichment of H3K9me3 and H3K27me3 in the attended regions near low-transcription genes was consistent with their known roles in the formation of heterochromatin to repress transcription [42].
Further analysis of the attended regions of the CYFIP2 gene in two samples revealed interesting patterns. In the E007 sample, where CYFIP2 was highly expressed, DeepCORE paid attention to regions that were close to the TSS and were occupied by the active histone mark H3K4me3 (Fig. 3C). In contrast, in the E058 sample, where this gene was lowly expressed, DeepCORE paid attention to regions that were downstream of the TSS and were occupied by the repressive histone mark H3K27me3, while avoiding regions with the activating histone mark H3K4me3 around the TSS (Fig. 3D). These results provided evidence that DeepCORE selects regions that are biologically relevant and reflect the underlying mechanisms of transcription regulation.
As no model can learn and explain all the features, we examined false positive attentions in the DeepCORE model trained on the E065 sample. Out of a total of 597,094 bins that contained no epigenetic signals, only 3291 received attention, indicating a very low false positive rate of 0.6%. Our examination of false positive attentions revealed two distinct types of occurrences. In the first type, epigenetic signals were abundant in the ± 5kbps TSS-flanking region, and bins receiving false positive attention were next to bins receiving true positive attention (Supplementary Fig. 5A). This phenomenon can be attributed to the CNN layer, which convolves epigenetic signals across positions, causing signals to spread between neighboring bins. Addressing this issue might involve reducing the filter size, increasing the stride size of the CNN layer, and increasing the max-pooling interval. However, such changes would affect the bin length and consequently the resolution of the predictions. In the second type, epigenetic signals were scarce in the ± 5kbps TSS-flanking region, and bins at the leftmost or rightmost borders received false positive attention (Supplementary Fig. 5B). This occurrence can be attributed to the BiLSTM layer, which carries epigenetic signals over an extended distance, leading to accumulation at the two ends. Potential solutions may include increasing the forget bias or the dropout rate in the BiLSTM layer. Nevertheless, considering the already very low false positive rate and the possibility that adjustments to these parameters may compromise performance, we chose to retain the original configuration of the DeepCORE models.
DeepCORE can identify and fine map regulatory elements
The Eukaryotic Promoter Database (EPD) [43] contains a comprehensive list of 29,598 experimentally validated human promoters. The GeneHancer [44] database annotates 250,512 candidate enhancers in the human genome. We scanned our attended regions to identify the presence of these known promoters or enhancers. To match attended regions with promoters, we restricted the attended regions to be within 1kbps of the TSS. No such restriction was applied for matching enhancers.
P.B. Chandrashekar et al.
We hypothesized that the attended regions identified by DeepCORE were enriched with known REs. To test this hypothesis, we considered promoters and enhancers annotated in the EPD and GeneHancer databases as known REs, although many of these annotations were not experimentally validated. We then calculated the frequency of the attended regions containing known REs across all samples and the frequency of the remaining regions. On average, each gene had 23 attended bins containing known promoters and 10 attended bins containing known enhancers, but only 5 attended bins containing no known REs (Mann-Whitney test p-values = 4.0×10⁻²⁵ and 6.1×10⁻¹⁸, respectively; Fig. 4A).
The TMEM88 gene is a representative example in which the attended bins matched known promoters and enhancers. TMEM88 was highly expressed in the E004 sample. The ± 5kbps TSS-flanking region was occupied with various active and repressive hPTMs. The EPD database annotates three promoters for this gene, one immediately upstream of the TSS and the other two downstream. The enhancer annotated in the GeneHancer database spans a wide range, starting 1200 bps upstream of the TSS and extending to 4000 bps downstream of the TSS. These REs all matched the attended bins identified by DeepCORE (Fig. 4B). Furthermore, although the repressive hPTM (H3K27me3) had high read counts, the DeepCORE model did not pay attention to it. Instead, activating hPTMs received attention, which was consistent with the high transcription level of TMEM88 in this sample. Interestingly, inside the annotated enhancer region that spans more than 5kbps, only 30 bins covering 1500 bps received attention. Because only attended bins were used to predict the gene transcription level, they were likely more relevant to transcription regulation than the unattended bins.
Another interesting example is the ARF5 gene. Signals from hPTMs in three samples (E095, E065, and E100) consistently highlighted two regions (Fig. 4C-E). The right region corresponded to the promoter of this gene and received DeepCORE attention. The left region was 2500 bps upstream of the TSS and corresponded to the promoter of another gene, GCC1. DeepCORE correctly identified the histone signals corresponding to the ARF5 gene and did not pay attention to the left peak. These results demonstrate that DeepCORE can identify and fine-map REs at a resolution of 50 bps, which corresponds to the bin size of the model.
Concordant attentions identify putative COREs
The interpreter of DeepCORE includes correlation analysis of attention probabilities across samples to discover COREs. As an example, we examined the PSMD8 gene, which was consistently and highly expressed across all samples. We retrieved the attention vectors of this gene from 25 samples and calculated their pair-wise correlations (Fig. 5A). At FDR < 0.05, we found two blocks for which DeepCORE attentions were highly correlated (Fig. 5B). The first block is centered around the TSS and the second block is 3kbps downstream of the TSS. These two blocks received concordant attention across samples, implying that they jointly regulate transcription of the PSMD8 gene. Indeed, these two blocks corresponded to the promoter and the enhancer of this gene.
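The correlation analysis above can be sketched in a few lines: compute pairwise Pearson correlations between per-bin attention vectors across samples, then keep the pairs significant at the chosen FDR. This is a simplified illustration, not the authors' code; the toy data and the Benjamini-Hochberg helper are assumptions.

```python
import numpy as np

def pairwise_attention_correlation(attn):
    """attn: samples x bins matrix of attention probabilities.
    Returns the bins x bins Pearson correlation matrix across samples."""
    return np.corrcoef(attn.T)

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask marking p-values significant at FDR < alpha."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# Toy data: 25 samples x 3 bins, where bins 0 and 1 share one attention pattern.
rng = np.random.default_rng(0)
shared = rng.random(25)
attn = np.column_stack([shared, shared + 0.01 * rng.random(25), rng.random(25)])
C = pairwise_attention_correlation(attn)
```

In this toy example, bins 0 and 1 are near-perfectly correlated across samples (as the two CORE blocks of PSMD8 are in Fig. 5), while bin 2 is not.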
We validated these COREs using established annotations and experimental data. As an illustrative example, we investigated the TMUB1 gene and analyzed its attention vectors across all samples. While the EPD and GeneHancer annotations hinted at the presence of promoters and other cis-regulatory elements in this region, they offered limited resolution (Fig. 6). DeepCORE attentions revealed three distinct blocks that finely mapped the REs in this region. The first block was located approximately 2.5kbps upstream of the TSS, the second block encompassed the TSS, and the third block was located approximately 2.5kbps downstream of the TSS. The correlated attentions observed among these three blocks strongly suggest their coordinated regulation of the gene's transcription and their function as COREs. Furthermore, it is plausible that the distal interactions between the first block and the other two blocks occur via chromatin-chromatin loops, a phenomenon confirmed in a Hi-TrAC study [45].
Discussion
Multiple REs interact to regulate gene transcription. We designed the DeepCORE architecture to consider such interdependencies in multiple aspects. In the input step, it uses two views to capture genetic and epigenetic features. In the DNN modeling step, it uses BiLSTM to allow short-range and long-range interactions. In the interpretation step, it detects correlated attentions between genomic regions. By training the DNN to predict gene transcription levels based on genetic and epigenetic features within the ± 5kbps TSS-flanking region, DeepCORE learns the most informative regions that are relevant to gene transcription regulation.
We evaluated the performance of DeepCORE and other methods on predicting gene transcription in diverse tissues and cell lines, although the assessment did not include all existing models, such as Enformer [46] and Borzoi [39], for practical reasons. In these comparisons, DeepCORE was consistently the top performer. The high accuracies support that the DNN model captures informative features relevant to transcriptional regulation. This builds the foundation for subsequent analyses to further interpret the results, specifically the attention paid to each genomic region, to help map promoters, enhancers, and other REs. We further introduce COREs, which are REs receiving concordant attention across multiple samples.
DeepCORE uses only five hPTMs as epigenetic features. However, many other types of epigenetic signals, such as DNA methylation and transcription factor binding, provide complementary information to hPTMs. Including these additional features may further increase the prediction accuracy and enhance RE identification. Currently, DeepCORE examines the ± 5kbps TSS-flanking region where promoters and proximal enhancers reside. Expanding the range to 2000kbps would allow us to detect distal REs. Furthermore, as enhancers are often clustered and selective activation of different enhancers in the same cluster is tissue-specific [47][48][49], concurrent modeling of multiple tissues is promising for capturing the boundaries between these enhancers and subsequently increasing the resolution. This would also identify tissue-specific gene-promoter and gene-enhancer interactions, which is valuable knowledge that has not been annotated in existing databases.
In silico mutagenesis can be combined with gene expression prediction models to identify functional elements. For example, ExPecto [20] supports in silico mutagenesis by introducing DNA alterations at a genomic position, predicting the expression level of the target gene with and without the DNA alterations, and comparing the difference. Assuming that mutations disrupting a regulatory element will be predicted to have a significant impact on gene expression, this approach can help identify regulatory elements. However, in silico mutagenesis usually tests simple alterations, such as single nucleotide variants, most of which have neutral or nearly neutral functional impact. Identification of regulatory elements based on these predictions may therefore lead to many false negatives. Furthermore, DeepCORE is designed based on the rationale that epigenetic alterations can lead to gene expression changes with or without genetic alterations, whereas in silico mutagenesis does not perform epigenetic alterations. The epigenetic features in our model include five quantitatively measured histone modification marks, which vary in intensity and cover different lengths of genomic regions. Epigenetic changes may involve alterations in one or several histone modification marks, entail varying extents of intensity change, and affect different lengths of genomic regions. Given such high variability of potential epigenetic changes, it is challenging to simulate them computationally. The DeepCORE algorithm can identify regulatory elements without performing in silico genetic or epigenetic alterations, which is complementary to existing methods. In the future, we will explore whether DeepCORE models combined with in silico mutagenesis can improve fine mapping of regulatory elements.
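The ExPecto-style mutagenesis loop described above amounts to a reference-vs-alternate prediction difference. The sketch below illustrates only that loop; the stand-in predictor (GC content) is a hypothetical toy, not an actual expression model such as ExPecto or DeepCORE.

```python
def in_silico_mutagenesis(predict, sequence, position, alt_base):
    """Predicted expression change from a single-nucleotide alteration.
    `predict` is any model mapping a sequence string to an expression value;
    returns predicted(alt) - predicted(ref)."""
    ref_score = predict(sequence)
    mutated = sequence[:position] + alt_base + sequence[position + 1:]
    return predict(mutated) - ref_score

# Toy stand-in model (hypothetical): "expression" proportional to GC content.
toy_model = lambda s: (s.count("G") + s.count("C")) / len(s)
delta = in_silico_mutagenesis(toy_model, "ATGCATGC", 0, "G")
```

Variants with `delta` near zero would be the neutral or nearly neutral alterations mentioned above, which is why thresholding on this difference alone can miss real regulatory elements.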
In summary, DeepCORE is a novel method to catalog cis-acting REs and COREs that influence gene transcription in tissue- and cell-line-specific contexts. This knowledge can be used to discover novel REs and prioritize existing REs, which will help improve our understanding of transcription regulatory mechanisms. To facilitate evaluation and further analysis, we created an interactive web server at https://liliulab.shinyapps.io/deepcore/, which allows users to query, view, and download DeepCORE predictions and attended bins in 27 human tissue and cell line samples. The DeepCORE source code is available at the GitHub site https://github.com/liliulab/DeepCORE.
Fig. 1 .
Fig. 1. The DeepCORE architecture. (A) DeepCORE consists of two components. In the DNN component, genetic and epigenetic signals go through a CNN layer, a BiLSTM layer, an Attention layer, and an FNN layer to predict the transcript abundance of a gene. In the Interpreter component, attention scores extracted from the output of the Attention layer are analyzed to identify informative and correlated regions as COREs. (B) The DNN has a genetic view and an epigenetic view, each consisting of a CNN layer and a BiLSTM layer. These two views are joined before being fed into an attention layer and subsequently an FNN layer to predict gene transcription level. Nc = 10,000 in both genetic and epigenetic feature matrices.
The DeepCORE DNN hyperparameters selected via grid search are K = 50, N_CNN = 50, p = 50, N_LSTM = 15, N_ATTN = 25, and N_FNN = 100. With this setting, genetic sequences and epigenetic signals in each 50 bps window are convolved separately. The BiLSTM with attention layer produces 200 bins (each 50 bps long), each receiving an attention probability before being fed to the FNN.
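Under this setting, the binning of the 10,000-bp window into 200 bins and the attention-weighted pooling ahead of the FNN can be sketched as follows. This is a simplified numpy illustration of the data flow, not the trained model; the softmax form of the pooling is an assumption.

```python
import numpy as np

BIN = 50                     # bps per bin
FLANK = 5000                 # ± flank around the TSS
N_BINS = 2 * FLANK // BIN    # 200 bins over the 10,000-bp window

def bin_signal(signal):
    """Average a per-bp epigenetic track (length 10,000) into 200 bins."""
    signal = np.asarray(signal, dtype=float)
    return signal.reshape(N_BINS, BIN).mean(axis=1)

def attention_pool(bin_features, attn_logits):
    """Softmax per-bin scores into probabilities, then take the
    attention-weighted sum of bin features (the vector fed to the FNN)."""
    e = np.exp(attn_logits - attn_logits.max())
    p = e / e.sum()                  # attention probabilities, sum to 1
    return p, bin_features.T @ p     # weighted pooling over the 200 bins
```

Because the attention probabilities sum to 1 over the 200 bins, each bin's probability directly quantifies how much it contributes to the prediction, which is what the interpreter component later analyzes.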
Fig. 2 .
Fig. 2. Performance of DeepCORE and other methods. (A, B) Evaluated on two samples, E061 and E071, the boxplots of RMSE (A) and F1-score (B) show that DeepCORE has the lowest error rate and the highest accuracy in predicting gene transcription levels, as compared to single-view DNN, ExPecto, and DeepChrome models. (C, D) Evaluated on 25 samples, DeepCORE has consistently the lowest error rate in predicting continuous gene transcription levels (C) and consistently the highest accuracy in predicting binary gene transcription classes (D). (E, F) Evaluated on cross-sample predictions, in which a model trained on the source sample is applied to predict gene transcription in different target samples, DeepCORE shows a consistently lower error rate than ExPecto (E) and higher accuracy than DeepChrome (F). Gray lines denote performance in source samples.
Fig. 3 .
Fig. 3. Distribution of hPTMs in attended regions. (A) Density plots show the distribution of attended bins with hPTMs vs. the distribution of attended bins with no hPTMs. (B) Counts of bins with specific hPTMs in attended regions. Data were from 25 randomly selected highly expressed genes and 25 genes with low expression. Transcription-activating hPTMs are in orange background. Transcription-repressing hPTMs are in blue background. (C, D) Heatmaps show the raw hPTM read counts and DeepCORE attention probabilities for the CYFIP2 gene. Transcription of this gene was low in the E007 sample (C) and high in the E058 sample (D). The ± 5kbps TSS-flanking region is encoded into 200 bins, each with an attention probability.
Fig. 4 .
Fig. 4. Attention analysis for regulatory elements. (A) Density plot of the frequency of attended bins with known promoters or enhancers across 25 samples, in comparison to random bins with high attention scores. (B) In the TMEM88 gene, attended bins matched known enhancers and promoters. Signals from repressing hPTMs did not receive attention. (C-E) In the ARF5 gene, hPTM signals form two clusters (indicated with blue boxes). The right cluster, mapped to the promoter of ARF5, received attention. The left cluster, mapped to the promoter of another gene, GCC1, did not receive attention.
Fig. 5 .
Fig. 5. COREs in the PSMD8 gene: (A) Heatmaps show attentions in 5 cell lines.(B) Correlation plot shows two blocks (A and B) with significant correlated attentions.
Fig. 6 .
Fig. 6. CORE in the TMUB1 gene. The ± 5 kb TSS-flanking region is displayed, which has cis-regulatory roles as annotated in the EPD and GeneHancer databases. DeepCORE identified three blocks (A, B, and C) as putative REs, which received attention in multiple samples. The correlation matrix of attentions revealed a local interaction between B and C, and a distal interaction between A and the other two elements. The distal interaction is confirmed by a Hi-TrAC study showing these REs are inside chromatin-chromatin loops.
Extraction Procedure, Characteristics, and Feasibility of Caulerpa microphysa (Chlorophyta) Polysaccharide Extract as a Cosmetic Ingredient
The green alga Caulerpa microphysa, which is native to Taiwan, has a relatively high economic value and a well-developed culture technique, and is used mainly as a foodstuff. Its extract has been shown to exhibit antitumor properties, but the polysaccharide content of the extract, its anti-inflammatory and wound-healing effects, and its moisture-absorption and -retention capacity remain unknown. Hence, the objective of this study was to evaluate the potential of the polysaccharides in C. microphysa extract (CME) for use in cosmetics. The overall polysaccharide yield from the CME was 73.93% w/w, with four molecular weight fractions. The polysaccharides comprised 59.36 mol% mannose, 27.16 mol% glucose, and 13.48 mol% galactose. In addition, the CME exhibited strong antiallergic, wound-healing, transdermal-delivery, and moisture-absorption and -retention effects. In conclusion, the results suggest that CME has potential anti-inflammatory and wound-healing effects and a good moisture capacity, and can be used in cosmetic applications.
Introduction
The commercial production of marine algae has been rapidly increasing over recent decades, whether by harvesting from natural resources or by cultivation. The application of algae is regarded as environmentally friendly, healthy, and sustainable for human beings [1]. Algae contain many compounds that can be used as foodstuffs, cosmetics, medicines, and pharmaceuticals, and they can also be used in aquaculture and agriculture [1]. This is because algae contain biochemical compounds such as pigments, lipids, cellulose, minerals, and polysaccharides, which have anticarcinogenic, anti-pigmentation, anti-dermatitis, emollient, humectant, antioxidant, anti-inflammatory, whitening, and anti-aging properties [2]. Among these compounds, a family of polysaccharides has been regarded as the most potentially effective for anti-inflammatory and wound-healing treatments [3]. Previous studies have revealed that their biological activity may be affected by their molecular weight, monosaccharide composition, polysaccharide dose concentration, and antioxidant content [4][5][6].
Notably, the extraction process is crucial for obtaining polysaccharides [7]. In general, hot water is considered the optimal solvent for the extraction of polysaccharides, and is often combined with autoclaving, microwaving, and ultrasonication [4,7]. These methods and instrumentation may not only affect the final extracted yield but also play a role in the overall economic cost [8,9]. Additionally, the crushing or milling, drying, and conservation processes may also influence the extraction efficiency. Despite the importance of these aspects, little information has been reported regarding the optimal extraction strategy for algae [4,7].
Furthermore, the structure of polysaccharides is complicated, usually consisting of various monosaccharides and esters. Thus, clarifying the structures and compositions of polysaccharides is crucial to comprehending the characteristics and potential functions of a sample [10]. To date, a range of methods for analyzing the structures and compositions of polysaccharides has been reported. Acidic hydrolysis, using acids such as HCl and H2SO4, is generally used to release the monosaccharides [10,11]. Liquid chromatography (LC), nuclear magnetic resonance (NMR), gas chromatography (GC), and mass spectrometry (MS) are usually used to analyze monosaccharides [10,12,13]. Notably, 2,3-naphthalene-diamine (NADA) can be used for the derivatization of aldoses and α-keto acids to increase analytical efficiency [12]. Fluorescent monosaccharide labeling for NMR can increase the accuracy of identifying the monosaccharides and structure [10]. These methods help optimize the qualitative and quantitative analysis of polysaccharides.
Inflammatory responses, also known as allergies, are induced by the degranulation of mast cells, which is an abnormal symptom associated with the overreaction of the immune system [13]. This response is debilitating, causing asthma, rhinitis, dermatitis, and other clinical allergy symptoms [14]. Wound healing involves the regeneration and replacement of connective tissue, and this process is frequently accompanied by inflammatory or other potentially injurious reactions. More specifically, wound healing is defined as the migration and proliferation of dermal and epidermal cells to fill or cover the wound, which is a dynamic process of tissue remodeling [15]. Notably, some people exhibit adverse reactions or need much longer recovery periods as a result of inflammatory inhibition of wound healing because of disease or a weakened immune system. Fortunately, previous studies have demonstrated the curative effects of some anti-inflammatory and wound-healing-accelerating substances extracted from plants, especially the polysaccharides found in marine algae [2,16]. Because of the similar chemical and biological properties of algal polysaccharides and mammalian glycosaminoglycan, the former are considered to contribute to immunoregulation in mammals [17]. However, although algae share similar polysaccharide structures, compound functions are determined by additional features such as chemical composition, molecular weight, and position on the polymer backbone. Thus, each algal species should be evaluated individually, because of their high degree of complexity and differing bioactive compounds [6].
The green alga Caulerpa microphysa (Weber Bosse) Feldmann 1955, also known as sea grape, is native to the intertidal zones in Taiwan, Japan, China, and the Philippines. It has a high economic value and a well-developed culture technique [18]. This alga consists mainly of carbohydrates (up to 70% w/w, data not shown), and is mainly used as a foodstuff, as feed, as aquarium algae, and for water-quality control, and it is popular as a traditional food. Building on its well-established culture technique and stable biomass supply, previous research has confirmed its antitumor properties using polypeptides extracted from it [19]. Extracts of other algal species have also been found to have positive pharmacological effects, such as the anticoagulant and antioxidant effects of polysaccharides extracted from C. cupressoides, the antioxidant effects of polysaccharides extracted from C. prolifera [20], the antiviral effects of the crude extract of C. taxifolia [21], and the antiproliferative effects of the crude extract of C. racemosa [22]. Although multiple functional properties have been demonstrated, there have been no reports on the anti-inflammatory and wound-healing properties of these algae, particularly C. microphysa.
To further develop the pharmacological applications of C. microphysa, the objective of this study was to first develop the optimal polysaccharide extraction conditions for cultured C. microphysa, and then to identify and analyze the polysaccharides. Next, we hypothesized that the polysaccharide-rich extract of C. microphysa (CME) contains potentially useful bioactive compounds, and investigated the anti-inflammatory, wound-healing, and moisture-retention effects of various doses of CME. We expected to find that these polysaccharides could provide natural, healthy, safe, and effective raw materials for cosmetics. It is crucial to optimize extraction procedures based on the objectives of production [7]. In general, polysaccharide yield is positively correlated with both temperature and reaction period, but increasing these factors increases costs in terms of time and energy. However, the current situation may be changing as a result of the microwave-ultrasound extraction method, which has been shown to increase extraction efficiency for the microalga Scenedesmus obliquus, using lipid extraction, and for the red alga Porphyra haitanensis, using water-based extraction, but its efficiency with regard to green macroalgae remains unknown [4,7,23]. In addition, identifying the best drying and cell-disruption methods is crucial for optimizing the extraction process and decreasing the cost in terms of energy and time to achieve large-scale productivity [23,24].
Results and Discussion
The effects of the drying, milling, and extraction procedures used in this study on the polysaccharide yield are presented in Figure 1. All three factors significantly affected polysaccharide yield. The autoclave method was significantly more efficient than the microwave-ultrasound method. Both drying and milling improved extraction efficiency when using the microwave-ultrasound method, but only drying had any effect when using the autoclave method. Our result differed from those obtained for Scenedesmus obliquus and Porphyra haitanensis in previous studies, since we found that the extraction efficiency of the microwave-ultrasound method was lower than that of the autoclave method. This may be due to the use of different pretreatments and solvents [4,7]. Ansari et al. (2015) used lipid as a solvent to extract the microalga S. obliquus after hydrolyzing the cellulose with H2SO4, and found that ultrasonication recovered more reducing sugar than autoclaving [7]. This may be attributed to the hydrolyzed cell wall and to the better thermal conductivity of lipid compared with water, which allowed the lipid solvent to penetrate the sample and the heat produced under ultrasonication to extract reducing sugars adequately [8]. In contrast, C. microphysa possessed an intact cell wall, which meant that the water solvent could not contact the algal tissue sufficiently; at the same time, the heat produced during microwave-ultrasound extraction could not sufficiently disrupt the inner cell wall tissue and release the polysaccharides. By comparison, in this study the autoclave method destroyed cell walls more effectively than the microwave-ultrasound method by applying high temperature and pressure directly, thereby releasing polysaccharides from the cells.
Drying and milling improved the polysaccharide yield when followed by the microwave-ultrasound procedure, which suggests that the cell disruption resulting from this step aided the reaction of the solvent and heat transfer. However, drying resulted in a lower polysaccharide yield than just milling fresh samples when the autoclave method was used. It is thus likely that a loss of polysaccharides occurred during the drying process. In consideration of these findings, we prepared the CME from milled fresh algae, using the autoclave method, for use in further experiments. With respect to freeze-drying preservation of the extract, the polysaccharide and polyphenol yields from samples kept with both types of cover were lower than those from the control samples. The polysaccharide recovery rate when using the aluminum foil and parafilm was 76.38 ± 7.06% and 53.64 ± 4.07% of the yield from the control samples, respectively, and the polyphenol recovery rate was 88.70 ± 2.07% and 86.88 ± 2.13%, respectively. In addition, after the freeze-drying and restoration processes, the pH of the samples covered with aluminum foil was 7.38 ± 0.61 and of those covered with parafilm was 7.66 ± 0.15, while that of the control was 5.81 ± 0.1 (Figure 2). Because the parafilm cover had better resiliency than the aluminum foil, we could pierce smaller holes with a pin to prevent the dried powder from spraying out during vacuum release. These results illustrate that using the best freeze-drying procedure and covering material is important for preserving the polysaccharides.
Monosaccharide, Bioactive Ingredient, and Molecular Composition of CME
Previous studies have shown that biochemical composition, molecular weight, and chemical structure can affect biological activity [25]. Therefore, a deeper investigation of biochemical compounds is imperative. In this study, the in-depth analysis of the CME showed that it provided a total polysaccharide yield of 1457.17 ± 48.25 µg g −1 , comprising 73.4% w/w of the biomass weight. These saccharides mainly comprised mannose (59.36 mol%), glucose (27.16 mol%), and galactose (13.48 mol%) (Table 1). Additionally, the extract provided a polyphenol yield of 16.62 ± 5.39 µg g −1 ( Table 2). The molecular weight of the CME was determined via gel filtration chromatography using a refractive index detector system, which revealed four fractions (B1-B4) in the CME. Based on calibration with molecular-weight markers, the apparent molecular weight of B1 was 50-100 kDa, while that of the B2 fraction was similar to that of glucose, with an approximate molecular weight of 180 Da. The molecular weights of the B3 and B4 fractions were less than that of glucose and beyond the range of the molecular-weight markers ( Figure 3). Table 1. Polysaccharide composition of Caulerpa microphysa polysaccharide-rich extract (CME) analyzed using a nuclear magnetic resonance spectrometer.
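The apparent molecular weights above are read off a gel filtration calibration curve; such curves are typically built by a linear fit of log10(MW) against the elution volumes of the molecular-weight markers. The sketch below illustrates this standard calibration with hypothetical marker values, not the actual calibration data of this study.

```python
import numpy as np

# Hypothetical calibration: elution volumes (mL) of molecular-weight markers.
marker_mw = np.array([100_000, 50_000, 10_000, 1_000, 180.0])  # Da (180 = glucose)
marker_ve = np.array([10.0, 11.5, 15.0, 20.0, 23.5])           # elution volume, mL

# Fit log10(MW) as a linear function of elution volume.
slope, intercept = np.polyfit(marker_ve, np.log10(marker_mw), 1)

def apparent_mw(ve):
    """Apparent molecular weight (Da) of a peak eluting at volume ve."""
    return 10 ** (slope * ve + intercept)
```

A fraction eluting near the glucose marker would thus be assigned an apparent molecular weight of roughly 180 Da (like B2), while fractions eluting later than the smallest marker (like B3 and B4) fall outside the calibrated range and can only be bounded, not estimated.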
Polysaccharides (mg g−1): 1457.17 ± 48.25; Polyphenols (mg g−1): 16.62 ± 5.39

Comparing our findings with previous research, the total polysaccharide yield with respect to biomass in this study was higher than that of most other water and organic extracts, such as Nannochloropsis oculata at 19% w/w [6]; Ulva lactuca at 26.01-28.29% w/w [5]; Parachlorella spp. at 73.8% w/w [26]; Chondrus verrucosus at 46.2-65.9% w/w [27]; and Nostoc commune at 82.2-84.6% w/w [28]. The results of this study suggest the feasibility of using C. microphysa as a cash crop to produce polysaccharides. Furthermore, the monosaccharide composition of the extract obtained in this study was different from those reported previously for species in the Caulerpa genus. For example, C. brachypus consists mainly of rhamnose, xylose, and glucose [11]; C. racemosa consists of glucose (56.8 mol%) and galactose (31.8 mol%) [29]; and C. cupressoides consists mainly of galactose, mannose, and xylose [2]. Although all of these are in the same genus, they differ in their monosaccharide compositions. We also observed that the analysis systems used for the above Caulerpa species are completely different, but all of them revealed the monosaccharide compositions efficiently; e.g., the monosaccharides of C. brachypus hydrolyzed using H2SO4 were analyzed by GLC [11]; the monosaccharides of C. racemosa hydrolyzed using HCl were analyzed by GC [29]; and the monosaccharides of C. cupressoides hydrolyzed using HCl were analyzed by HPLC-RI [20] (Table 3). Notably, the monosaccharides in our study were obtained using HCl hydrolysis and analyzed by both 1H NMR and HPLC-UV after Sugar-NAIM derivatization, and these two instruments showed the same monosaccharide composition (Figures 4 and 5). These examples illustrate the diversity of techniques available for carbohydrate research. Table 3. Comparison of analysis systems and monosaccharide compositions in the Caulerpa genus.
Caulerpa microphysa: HCl hydrolysis, Sugar-NAIM derivatization, 1H NMR and HPLC-UV analysis (this study).

Previous studies have reported that the glycosidic bonds of the Caulerpa genus include 4-linked xylose, 6-linked galactose, 4-linked mannose, 6-linked α-D-mannopyranose, and 4-linked and 2-linked α-D-mannopyranose [30]. However, because information on the structure of these polysaccharides is limited, the critical mechanisms and functions have not been clarified [31]. Our results showed that the most abundant monosaccharide in C. microphysa was mannose, followed by glucose and galactose. Although the monosaccharide composition has been determined, the detailed glycosidic bonds remain unclear. Hence, our further studies will focus on investigating the potential effects of the glycosidic bonds on the bioactivity of C. microphysa.
In Vitro β-Hexosaminidase Secretion Inhibition Assay
Allergies are a common health problem, affecting approximately 20% of the global population [32,33]. Antiallergic compounds usually act on mast cells, which are known to play a major role in the immediate type of allergic reaction. The signaling pathway involves binding of the IgE receptor to antigens, followed by the induction of degranulation in mast cells, which results in the release of chemical mediators including histamine, leukotrienes, and prostaglandins [34]. To evaluate the degranulation process resulting from an allergic reaction, RBL-2H3 cells, which are mucosal mast cells, can be used [35]. Moreover, β-hexosaminidase functions as a critical inflammatory mediator and is released along with histamine upon mast cell degranulation [36]. Thus, analysis of β-hexosaminidase secretion is widely used to evaluate the level of mast cell degranulation. As shown in Figure 6, antigen-mediated signaling causes a critical increase in the secretion of β-hexosaminidase.
However, CME showed superior concentration-dependent inhibitory activity. When 0.25% CME was added, the β-hexosaminidase inhibition rate exceeded 50%, and at CME doses above 0.5%, β-hexosaminidase release was almost completely suppressed. Previous in vitro studies reported that hot-water extracts of Ecklonia cava and Chrymenia wrightii inhibited degranulation by more than 50% at a dose of 100 µg mL−1, and that MeOH extracts of Petalonia binghamiae, Scytosiphon lomentaria, Undaria pinnatifida, Porphyra dentata, Codium fragile, and Ulva japonica inhibited degranulation by more than 50% at a dose of 200 µg mL−1, illustrating that the extraction solvent significantly affects degranulation inhibition [14]. Maruyama et al. (2005) showed in vivo that mekabu fucoidan at a dose of 50 µg mL−1 significantly decreased the levels of the type 2 T-helper cytokines IL-4, IL-5, and IL-13, which are chemical mediators that induce degranulation [37]. Overall, our results suggest effective degranulation inhibitory activity. Compared with CICA extract, which possesses well-known antiallergic activity and is widely used in commercially available products, the antiallergic potential of CME was better at the same concentration.
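Inhibition rates like those quoted above are conventionally computed from β-hexosaminidase release absorbances. The sketch below shows the standard formula; the absorbance values are hypothetical, not measurements from this study.

```python
def inhibition_percent(a_sample, a_control, a_blank):
    """Degranulation inhibition (%) from β-hexosaminidase release absorbances:
    100 × (1 − (sample − blank) / (control − blank)).
    a_control: antigen-stimulated cells without extract; a_blank: unstimulated cells."""
    return 100.0 * (1.0 - (a_sample - a_blank) / (a_control - a_blank))

# Hypothetical absorbances: the extract halves the antigen-induced release.
rate = inhibition_percent(a_sample=0.55, a_control=1.00, a_blank=0.10)
```

With these toy values the inhibition rate is 50%, i.e., the level the cited extracts reach at 100-200 µg mL−1; subtracting the unstimulated blank ensures that only the antigen-induced portion of the release is scored.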
In Vitro Wound-Healing Activity Assay
Healthy, intact, wound-free skin is an important protective barrier for the individual, preventing dehydration and blocking microorganisms and irritants. The in vivo wound-healing process is tightly controlled by multiple growth factors released at the wound site, such as platelet-derived growth factor (PDGF), transforming growth factor (TGF), and epidermal growth factor (EGF). Some natural products have been used extensively in wound care with excellent effects [38]. To determine the wound re-epithelialization potential of CME, an artificial and uniform cellular scar was created. TGF-β, a growth factor, stimulates matrix protein production and induces faster healing, and was therefore included as a positive control [39].
The cellular wound healing after 8 h of treatment is illustrated in Figure 7A. Compared to the untreated medium control, treatment of 3T3-L1 cells with 0.5% CME for 8 h significantly improved the wound-healing response: wound repair following the 0.5% and 1% CME treatments increased by approximately 25 and 39%, respectively (Figure 7B). CICA exhibited significant wound-healing activity at a 1% concentration. A previous study reported that an aqueous extract of Spirulina platensis significantly induces migration of HDF cells, outperforming methanolic and ethanolic extracts and allantoin [15]. A similar observation was made for Moringa oleifera, whose methanolic extract inhibited wound healing [40].
These reports indicate that the extraction solvent may be a key factor affecting the proliferation of migrating cells, and they confirm the feasibility of using algal extracts as a resource for wound healing. The wound-healing activity may be due to glycosaminoglycans (GAGs) in the algal extract, which mediate the cellular interactions and growth-factor signaling pathways associated with the wound-recovery process [41,42]. Although we did not compare multiple solvent extracts for their effect on wound healing here, we clearly demonstrated the feasibility of using a hot-water extract of C. microphysa to induce the wound-healing activity of 3T3-L1 fibroblasts. Overall, these results suggest that CME had comparable healing effects.
Hydroxyproline Production and In Vitro Permeation Assay
The process of skin aging is mainly due to the slowing of the proliferation of keratinocytes in the epidermis and fibroblasts in the dermis, resulting in a reduction of the extracellular matrix of the dermis, including collagen, elastin, proteoglycans, and glycosaminoglycans. Among these molecules, collagen is considered the most important [43]. Currently, the mechanisms underlying the loss of collagen in aging or damaged skin have not been fully delineated. It is known that collagen synthesis is highly regulated by genes. Previous studies have found that plant-derived compounds with antioxidant properties can restore fibroblast function through modulation of signaling pathways such as MAPKs and NF-κB [3].
Hydroxyproline is the compound produced upon digestion of collagen and elastin, and it has been suggested that it may be able to improve aged or damaged skin. In order to evaluate the potential anti-wrinkle properties of CME, an in vitro assay to determine hydroxyproline production was performed. Hydroxyproline concentrations increased in CME-treated hydrolyzed cellular lysates ( Figure 8A) relative to the medium control. Cell viability was not affected by treatment condition.
To further evaluate the potential of CME for use as a cosmetic raw material, its transdermal-delivery efficacy was measured using a synthetic membrane engineered to mimic human skin. This assay aids in predicting diffusion efficiency into human skin for a wide range of materials and compounds. The cumulative total membrane permeation by the CME at various time points is shown in Figure 8B. As can be seen, CME clearly shows potential in terms of cumulative skin permeation.
Few studies have been conducted to evaluate the penetration of raw cosmetic materials, and most have focused on potentially functional proteins in the wound response [44]. However, the collagen-producing fibroblasts are located in the skin dermis; the efficacy of cosmetic ingredients should therefore correlate positively with skin penetration. The Strat-M® synthetic membrane behaves similarly to human cadaver skin in diffusion testing and has been regarded as an ideal material for investigating skin penetration.
Factors such as the permeability coefficient (Kp), lag time, skin deposition, and molecular size are reported to be related to epidermal penetration [45]. Skin tissue plays a crucial role in filtering the entry of substances. As illustrated by clinical research, the permeability of the epidermis restricts the potential of chemical compounds as drugs, since only compounds with molecular masses of less than 500 Da can penetrate the skin [46]. According to Figure 3B, fraction B2 of the main polysaccharides of CME, with a molecular mass of 180 Da, can theoretically pass through the epidermis. In addition, compared to lipophilic substances, hydrophilic substances pass through the stratum corneum barrier less readily. Therefore, we assumed that the water-soluble CME may mainly penetrate the stratum corneum through channels including sweat glands and hair follicles. However, since these two pathways (sweat glands and hair follicles) occupy only a small proportion of the skin surface area [47], we believe that CME may also penetrate the skin through other channels. In conclusion, CME possessed excellent skin-penetration properties, and the underlying mechanism warrants further exploration.
Moisture Absorption and Retention Assay
It is well known that bio-polysaccharides in cosmetics function as gelling agents, viscosity adjusters, thickeners, and water-holding agents by means of their swelling capacity [42]. This is because the polysaccharides dissolve in water and can also react with skin fibrin to form an extracellular gel matrix, resulting in moisturization. Moisturizing is the most important function of skincare products. Effective moisturizing facilitates active moisture absorption and retention by the skin [48]. The moisture-absorption and moisture-retention properties of CME were therefore assessed in this study and compared with those of several cosmetics on the market.
The moisture absorption results at various time points and humidity levels are shown in Table 4. In terms of water-absorption capacity, CME was better than collagen, similar to hyaluronic acid, and poorer than urea. The moisture-retention capacity over 24 h is shown in Figure 9, and that of CME was excellent; far better than that of collagen and hyaluronic acid, and similar to that of urea.
Moisture-absorption/retention abilities are affected by complex biochemical properties, especially molecular weight and sulfate content [52]. We therefore further analyzed the molecular weights of the species above: the molecular weight of the N. sphaeroides extract was 199-99 kDa [49], that of the S. horneri extract was 179 kDa to 21.42 kDa [51], that of the E. prolifera extract was 147 kDa to 44.8 kDa [50], and that of the C. microphysa extract was 100 kDa to <50 kDa. Although molecular weight differed among species, it appeared to have no correlation with the moisture-absorption/retention abilities. Nevertheless, our results demonstrated that CME had high potential and product applicability and could be expected to become a novel multifunctional moisturizer.
Cultivation Conditions
The C. microphysa used in this study was isolated from the intertidal zone in northeastern Taiwan. The algae were washed with 1.5% povidone-iodine and 2 µm-filtered UV-irradiated sterilized seawater to remove any adhering debris or epiphytic organisms. They were then cultivated in a 1 t fiberglass tank under a 12:12 h light:dark regime. The tank was aerated and maintained at an irradiance level of 80-100 µmol photons m −2 s −1 , which was measured using a Lighting Passport spectrometer (ALP-01, Asensetek, New Taipei City, Taiwan). The seawater was refreshed every three days, and 20 g t −1 ammonium sulfate was added to ensure healthy growth conditions. When the mass of the algae reached 2 kg, they were harvested for the experiments.
Extraction of Polysaccharides
To detect the effect of pretreatment on the polysaccharide content, the biomass of algae was divided into four treatments as follows: (1) fresh algae; (2) milled fresh algae; (3) oven-dried algae; and (4) milled oven-dried algae. Two extraction methods were then investigated: (a) autoclave extraction (SS320, Tominaga, Taipei City, Taiwan), and (b) microwave-ultrasound extraction (EXTRACTOR 200, IDCO, Marseille, France). The fresh and dry treatments were performed using a 1:1 and a 1:0.06 (v/v, taking moisture loss into consideration) mixture, respectively, of the algae with distilled water in a 1 L serum bottle. Oven drying was performed at 40 °C for 2 d. The fresh and dried algae were milled with a blender (Blendtec, Orem, UT, USA) or a pulverizer at room temperature, respectively. Before extraction, treatments (1) and (2) were stored at −20 °C in a freezer, while (3) and (4) were placed in a Moisture-Proof Box (EDRY, Taichung, Taiwan).
At the extraction step, the autoclaved samples were treated at 121 °C and 1.5 lbs for 60 min. The microwave-ultrasound samples were treated with sonication at 100 mV and microwaves at 1000 W and mixed in the microwave-ultrasound extractor for 60 min, with 20 mL samples extracted every 10 min. After extraction, all samples were centrifuged at 15,000× g for 10 min (CR21G, Hitachi, Tokyo, Japan), after which the supernatant was filtered through a sterile 0.22 µm filter membrane (Sartorius, Göttingen, Germany). The filtered supernatant was freeze-dried into powder (FD10/-80, FIRSTEK, New Taipei, Taiwan), then analyzed for total polysaccharide yield. The optimal extraction conditions were then used for further analysis and in vitro studies.
Analysis of Total Polysaccharide Content
To determine the total polysaccharide content, 1 mg of lyophilized powder was dissolved in 20 mL of ultrapure water and analyzed using the phenol-sulfuric acid method [53].
Analysis of Sugar Composition
We assessed the polysaccharide content and sugar composition of the extracted polysaccharide powder using a polysaccharide component assay kit from SugarLight (New Taipei City, Taiwan) following the method of Lin et al. (2010) [54]. A 1 mg sample of purified and dried polysaccharides was added to 1.0 mL of hydrolysis solution, and the resulting mixture was stirred for 2 h at 80 °C, then dried using a vacuum pump. The resulting powder was mixed well with 2 mg 2,3-naphthalenediamine, 1 mg iodine, and 1 mL acetic acid, then stirred at room temperature for 1 h to achieve fluorescent monosaccharide labeling. After drying the solvent, we quantified and identified the monosaccharide-naphthylimidazole via 1 H NMR spectrometry (Bruker AV600, Rheinstetten, Germany) and via HPLC-UV (Hitachi L2130 pump with UV L2420).
Analysis of Molecular Weight
The molecular weight of the polysaccharides was determined via high-performance liquid chromatography on a high-resolution gel filtration column (HiPrep 16/60 Sephacryl-S-200 HR column, Merck, Darmstadt, Germany) with ultrapure water at a flow rate of 0.6 mL min −1 , detected using a refractive index detector, and visualized using Chromatography Workstation software (EChrom Data System v1.0, Lixing Technology, Hsinchu city, Taiwan).
Analysis of Polyphenols
To determine the polyphenol content, 1 mg of lyophilized powder was dissolved in 20 mL of ultrapure water and analyzed using the Folin-Ciocalteu method [55].
Analysis of Preservation Losses
To analyze ingredient loss during the freeze-drying process, we evaluated the effects of two different covers on the total polysaccharide and polyphenol content of the extracts. Briefly, 20 mL of the extraction mixture was added to sterile 50 mL centrifuge tubes and stored at −80 • C for 48 h. Next, aluminum foil or parafilm was used to cover the tube mouth, and the samples were freeze-dried immediately. After 72 h, the samples were taken out and injected with an equal volume of distilled water by weight. The total polysaccharide and polyphenol content were then analyzed and compared to that of the control samples.
Analysis of MTT Cytotoxicity
Cytotoxicity was evaluated via MTT assay according to ISO 10993-5. RBL-2H3 cell lines were grown in the required medium and seeded onto a 96-well plate at 5 × 10 3 cells per well until adherence. Thereafter, the medium was removed, the cells were treated with the indicated samples at the indicated concentrations, and they were further incubated for a range of durations. MTT was then added and cleaved by mitochondrial reductase to form formazan crystals. The purple formazan was solubilized by adding dimethyl sulfoxide (DMSO), and the optical density (OD) was read at 570 nm, with a reference wavelength of 690 nm, using a microreader (Thermo Scientific, Waltham, MA, USA).
Analysis of Sensitization and Stimulation for Degranulation
RBL-2H3 cells were grown in Eagle's minimal essential medium (MEM) containing 4 mM L-glutamine, 1.5 g L −1 sodium bicarbonate, 0.1 mM nonessential amino acids, 1 mM sodium pyruvate, and 15% heat-inactivated fetal bovine serum (FBS). Cells were seeded at 10 5 cells per well in a 24-well plate. After adherence for 24 h, the cells were sensitized by adding 0.5 µg mL −1 anti-DNP IgE (Sigma) for 24 h, washed twice with Siraganian buffer, and incubated with Siraganian buffer for 10 min. They were then treated with CME or CICA and incubated for 2 h. Subsequently, 10 µg mL −1 of the antigen DNP-BSA was added, and the samples were incubated for 20 min to stimulate degranulation. Quercetin was used as a positive control, following the method of Mlcek et al. (2016) [56]. The β-hexosaminidase activity was quantified via a colorimetric reaction using the substrate 4-nitrophenyl N-acetyl-β-D-glucosaminide (Sigma) according to the method used by Quah et al. (2020) [57].
Analysis of Wound Healing
This experiment was performed by the Industrial Technology Research Institute (Hsinchu, Taiwan). Mouse embryo fibroblast 3T3-L1 (BCRC60159) cells were cultured in Dulbecco's modified Eagle medium (DMEM) containing 4 mM L-glutamine, 1.5 g L −1 sodium bicarbonate, 4.5 g L −1 glucose, and 10% calf serum. Cells were seeded onto a SPLScar Block (SPL Life Sciences, Seongnam, Korea) at a density of 2 × 10 5 cells well −1 on a 24-well plate. After adherence for 24 h, the blocks were removed and incubated with medium containing either a test sample or the positive control TGF-β for an additional 8 h. The wound area was photographed and the percentage wound-healing rate was calculated as (A 0 − A 8 )/A 0 , where A 0 was the wound area at 0 h and A 8 was the wound area at 8 h. Image analysis was performed using ImageJ software v1.8.0_112.
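The wound-closure calculation described above, (A 0 − A 8 )/A 0 expressed as a percentage, is straightforward to script; a minimal sketch (the example area values are hypothetical, standing in for ImageJ measurements):

```python
def wound_healing_rate(area_0h: float, area_8h: float) -> float:
    """Percentage wound closure: (A0 - A8) / A0 * 100."""
    if area_0h <= 0:
        raise ValueError("initial wound area must be positive")
    return (area_0h - area_8h) / area_0h * 100.0

# Hypothetical ImageJ area measurements (pixel^2) at 0 h and 8 h
closure = wound_healing_rate(area_0h=120_000, area_8h=73_200)  # 39.0 %
```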
Analysis of Hydroxyproline
Human skin fibroblast CCD966SK cells (BCRC60153) were grown in MEM in Earle's balanced salt solution (BSS) containing 0.1 mM nonessential amino acids, 1.5 g L −1 sodium bicarbonate, 1 mM sodium pyruvate, and 10% FBS. The cells were seeded on a 24-well plate at 2 × 10 5 cells well −1 overnight to ensure adherence, then incubated with either CME at various concentrations or the positive control, TGF-β. Hydroxyproline was measured using a commercially available kit (Biovision, Milpitas, CA, USA) according to the manufacturer's instructions. Absorbance was determined at 560 nm using an ELISA reader (Thermo Scientific).
In Vitro Permeation Studies
In vitro percutaneous absorption was measured using a manual diffusion system (PermeGear, Riegelsville, PA, USA) equipped with Strat-M membrane (Merck), which is a well-established synthetic model for transdermal diffusion testing. The 25 mm Strat-M membrane was mounted between the donor and receptor compartments and secured tightly with clamps. The available area of the membrane was 0.635 cm 2 . A 20 mg mL −1 solution of CME was loaded in the donor compartment, and the receptor compartment was filled with phosphate-buffered saline (PBS). The diffusion cells were placed on a magnetic stirring block, and the receptor compartment was maintained at 37 • C using a circulating water bath. Aliquots of 200 µL were withdrawn from the receptor compartment at various time points up to 48 h and analyzed using a total carbohydrate assay kit (Biovision) to determine the amount of CME that had permeated through the Strat-M membrane.
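The paper does not state how the cumulative permeated amount was computed from the withdrawn aliquots; a common bookkeeping for diffusion-cell sampling, assuming each withdrawn aliquot is replaced with fresh buffer, adds back the analyte removed at earlier time points. A sketch under that assumption (the receptor volume here is hypothetical; per-area flux would divide by the 0.635 cm² membrane area):

```python
def cumulative_permeated(concs_ug_ml, v_receptor_ml, v_aliquot_ml=0.2):
    """Cumulative amount permeated (ug) at each sampling time.

    Q_n = C_n * V_receptor + sum_{i<n} C_i * V_aliquot
    corrects for analyte carried away with each previously withdrawn aliquot.
    """
    cumulative, removed = [], 0.0
    for c in concs_ug_ml:
        cumulative.append(c * v_receptor_ml + removed)
        removed += c * v_aliquot_ml
    return cumulative

# Hypothetical receptor concentrations (ug/mL) at successive time points
q = cumulative_permeated([1.0, 2.0, 3.5], v_receptor_ml=5.0)
```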
Analysis of Moisture Absorption and Retention Capacity
Both moisture-absorption and moisture-retention capacity were analyzed according to the method used by Song et al. (2019) [58].
Statistical Analysis
Data were analyzed using Microsoft Excel 2010 and IBM SPSS Statistics 22.0 (IBM, USA). One-way analyses of variance (ANOVAs) were used to test for the significance of differences between pretreatments, water-extraction procedures, and moisture retention. Student's t-test was used to analyze freeze-drying efficiency, β-hexosaminidase inhibition, wound healing, hydroxyproline production, and cell viability. Where significant differences were identified by the ANOVAs, we used the Scheffé test to compare the means across the treatment conditions. All data are presented as the means ± standard deviation (SD) of three independent experiments, with each experiment performed at least in triplicate. A p-value < 0.05 was considered statistically significant.
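The one-way ANOVA used here reduces to a ratio of between-group to within-group variance; a pure-Python sketch of the F statistic (the example yields are hypothetical, and in practice SPSS or `scipy.stats.f_oneway` supplies the p-value):

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate polysaccharide yields for three pretreatments
f_stat = one_way_anova_f([12.1, 11.8, 12.4], [14.0, 13.6, 14.3], [9.9, 10.2, 9.7])
```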
Conclusions
In this study, we described an effective extraction and preservation strategy for CME and performed an analysis of its polysaccharide compositions and molecular weights. We then demonstrated that CME has sufficient safety, antiallergic, and wound-repair properties; enhances hydroxyproline production; and is able to penetrate a Strat-M membrane and accumulate over time. Furthermore, CME possesses excellent moisture-absorption and -retention properties and can aid in the prevention of skin aging in multiple ways. Overall, Caulerpa microphysa has high potential for use in the cosmetics industry.
"year": 2021,
"sha1": "1b6de3381f7ade9e8c572959a74cf782ad08fe68",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/19/9/524/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b6de3381f7ade9e8c572959a74cf782ad08fe68",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Real-Time PCR Assay for the Diagnosis of Intestinal Schistosomiasis and Cure Assessment After the Treatment of Individuals With Low Parasite Burden
Laboratory diagnosis of intestinal schistosomiasis is usually performed using the Kato-Katz technique. However, this technique has low sensitivity for the diagnosis of individuals with low parasite burden, who constitute the majority in Brazilian locations with low endemicity for the disease. The objective of this study was to develop and validate a real-time PCR (qPCR) assay targeting a 121 bp sequence to detect Schistosoma spp. DNA for the diagnosis of intestinal schistosomiasis, with a sequence of the human β-actin gene as internal control. First, the qPCR was standardized; it was then evaluated for the diagnosis and cure assessment of intestinal schistosomiasis in individuals resident in Tabuas and Estreito de Miralta, two Brazilian locations endemic for intestinal schistosomiasis. The qPCR assay results were compared with those of the Kato-Katz (KK) test, examining 2 or 24 slides, the Saline Gradient (SG) technique, and a "reference test" (24 KK slides + SG). Cure was assessed by these diagnostic techniques at 30, 90, and 180 days post-treatment. In Tabuas, the positivity rate obtained by the qPCR was 30.4% (45/148) and by the "reference test" was 31.0% (46/148), with no statistical difference (p = 0.91). The presumed cure rates at 30, 90, and 180 days post-treatment were 100, 94.4, and 78.4% by the analysis of 24 KK slides; 100, 94.4, and 78.4% by the SG; and 100, 83.3, and 62.1% by the qPCR assay. In Estreito de Miralta, the positivity obtained by qPCR was 18.3% (26/142) and by the "reference test" was 24.6% (35/142), with no statistical difference (p = 0.20). The presumed cure rates were 93.3, 96.9, and 96.5% by the KK; 93.3, 96.9, and 100% by the SG; and 93.3, 93.9, and 96.5% by the qPCR at 30, 90, and 180 days post-treatment, respectively.
This study showed that the diagnostic techniques performed differently in the populations of the two districts (Tabuas and Estreito de Miralta) and reinforces the need to combine techniques to improve diagnostic accuracy, increasing the detection of individuals with low parasite burden. This combination of techniques constitutes an important strategy for controlling the transmission of the disease.
INTRODUCTION
In Brazil, intestinal schistosomiasis is caused by Schistosoma mansoni, the only species with established transmission. Although the prevalence and parasite burden have decreased over the years since the implementation of the preventive measures of the Schistosomiasis Control Program in 1975, the disease still occurs in Brazil. Intestinal schistosomiasis is currently found in areas of low, moderate, and high endemicity in 19 Brazilian federal units (1). The last national prevalence survey (INPEG 2010-2015) estimated 1,500,000 individuals positive for intestinal schistosomiasis in Brazil, which remains an important public health issue (2).
This situation can be partly attributed to the lack of accurate diagnostic techniques to detect intestinal schistosomiasis in endemic areas. The Kato-Katz technique for detecting S. mansoni eggs (3), with one or two slides from a single fecal sample per individual, is extensively employed in prevalence surveys and individual diagnosis due to its practicability and low cost (4). This technique is sensitive for the diagnosis of S. mansoni infection when applied to fecal samples from individuals with moderate and high parasite burden. However, it lacks sensitivity when used to diagnose individuals with low parasite burden, who predominate in low endemicity areas (5)(6)(7)(8)(9)(10)(11). S. mansoni-infected individuals who are not diagnosed contribute to maintaining local transmission or to establishing new outbreaks when they migrate to non-endemic areas, hindering the efficacy of control measures.
Serological assays have been used for schistosomiasis diagnosis by detecting antibodies against schistosomal antigens. However, they are unable to discriminate between active infections and past exposures, especially in individuals living in regions endemic for schistosomiasis (12). A lateral flow cassette assay was developed to overcome the limitations of parasitological and serological techniques by detecting circulating cathodic antigen in the urine of Schistosoma-infected individuals (POC-CCA ® , Rapid Medical Diagnostics, Pretoria, South Africa). This test became available in 2003 and seems to be more sensitive than the Kato-Katz technique when applied in areas highly endemic for S. mansoni (13). However, there are controversies regarding the sensitivity of the POC-CCA when applied to individuals from low endemicity areas. In these cases, the POC-CCA has shown greater sensitivity only in patients with moderate or high parasite burden (14,15).
Alternatively, the detection of schistosome DNA through DNA amplification techniques provides advantages over many parasitological techniques and serological tests due to its high sensitivity, specificity, and accuracy. Furthermore, DNA amplification techniques can detect early pre-patent infections. Although the PCR assay is widely used for the laboratory diagnosis of many infectious and parasitic diseases, its application to schistosomiasis was first reported by our research group. We showed that PCR targeting the 121 bp sequence described by Hamburger et al. (16) achieved a limit of detection (LOD) of 1 fg of S. mansoni egg template DNA, with no amplification of DNA from Ascaris lumbricoides, Ancylostoma duodenale, Taenia solium, or Trichuris trichiura, helminths commonly found in the same endemic areas.
Since then, we have worked extensively with this 121 bp sequence as a target in PCR assays for diagnosing intestinal schistosomiasis. The 121 bp sequence was used successfully in conventional PCR (17,18) and PCR-ELISA (19,20), yielding consistent results. Furthermore, other studies have targeted the 121 bp sequence in real-time PCR and oligochromatography-polymerase chain reaction assays with higher sensitivity than the Kato-Katz technique for diagnosing intestinal schistosomiasis (21,22). Moreover, the 121 bp DNA sequence was targeted to detect Schistosoma DNA in plasma (23) and urine samples (24,25) using conventional PCR.
Thus, the main goal of this study was to develop a qPCR assay targeting the 121 bp sequence to detect S. mansoni DNA in fecal samples, in order to diagnose intestinal schistosomiasis and assess post-treatment cure in individuals with low parasite burden. In addition, a 92 bp sequence of the human β-actin gene was amplified in the same reaction as an internal control to ensure the efficiency of DNA extraction and PCR amplification.
MATERIAL AND METHODS
qPCR Assay Standardization
Extraction of S. mansoni DNA
In this study, we attempted to spike negative fecal samples with S. mansoni eggs, without success: S. mansoni eggs are relatively large, and we had difficulty counting them in a Neubauer chamber and then recovering them to spike the negative fecal samples.
To circumvent this limitation, genomic DNA was extracted from adult S. mansoni worms (BH strain), obtained from the livers of Swiss albino mice 60 days after infection with 150 cercariae, using the QIAamp DNA Mini and Blood Mini Handbook (QIAGEN, GmbH, Hilden, Germany), following the manufacturer's protocol.
As negative controls, we used DNA extracted from three S. mansoni-negative fecal samples collected from children resident in a non-endemic area who had negative results by the Kato-Katz technique. The total DNA was extracted using the QIAamp DNA Stool Mini Kit (Qiagen GmbH, Hilden, Germany), according to the manufacturer's recommendations and following the protocols for DNA Isolation from Stool for Pathogen Detection and DNA Isolation from Large Amounts of Stool. The DNA concentration and A260/A280 absorbance ratio were measured with a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA) to ensure the efficiency of the DNA extraction and to verify the purity of the DNA obtained.
Primers and Probes
Forward (5′-CCG ACC AAC CGT TCT ATG A-3′) and reverse (5′-CAC GCT CTC GCA AAT AAT CTA AA-3′) primers and a 5′-6[FAM]/TCG TTG TAT CTC CGA AAC CAC TGG ACG/[3BHQ1] probe were designed to amplify and detect a 90 bp fragment of the highly repetitive 121 bp sequence of S. mansoni (GenBank: M61098). Forward (5'-CCA TCT ACG AGG GGT ATG-3') and reverse (3'-GGT GAG GAT CTT CAT GAG GTA-5') primers and the 56-JOE/CCT GCG TCT GGA CCT GGC TG/[3BHQ1] probe were designed to amplify and detect a 92 bp fragment of the human β-actin gene (GenBank: AY582799.1) as internal control (Figure 1). All primers and probes were designed with the Primer3-web program 0.4.0 (26) and submitted to homology searches on the National Center for Biotechnology Information website with the nucleotide BLAST program, using the Nucleotide collection database and the Megablast option. The primers and probes were purchased from Integrated DNA Technologies Inc. (Coralville, IA, USA). Initially, we tested the S. mansoni primers at 0.1, 0.2, and 0.3 µM and the S. mansoni probe at 0.1, 0.25, and 0.5 µM in different combinations in a simplex qPCR assay, using 38 ng, 3.8 ng, 380 pg, 38 pg, 3.8 pg, 380 fg, 0.38 fg, and 0.038 fg of S. mansoni genomic DNA diluted 1:5 in linear acrylamide solution [30 mg/ml (w/v) in DEPC-treated H2O]. Next, we tested the human β-actin gene primers at 0.1, 0.15, and 0.2 µM and the human β-actin gene probe at 0.1, 0.25, and 0.5 µM in a simplex qPCR assay, using DNA extracted from S. mansoni-negative fecal samples diluted 1:5 in linear acrylamide solution. In this way, the best qPCR protocol was defined: the reaction was performed in a final volume of 25 µl, with each probe at 0.25 µM, BSA at 0.01 mg/ml, MgCl2 at 2 mM, and 4 µl of DNA diluted 1:5 in linear acrylamide solution. Two controls were used for each reaction: a positive control (PCR mix plus DNA extracted from adult worms) and a negative control consisting of PCR mix alone (No Template Control).
The assays were performed in duplicate using microplates (MicroAmp® Fast Optical/Applied Biosystems, Foster City, CA, USA) sealed with adhesive film (Optical Adhesive Covers/Applied Biosystems) on the StepOnePlus™ Real-Time PCR System (Thermo Fisher Scientific Inc., USA), under the universal cycling program with 45 cycles and an annealing temperature of 60°C. Based on a standard curve produced with serial dilutions of S. mansoni DNA, samples presenting Ct ⩽ 42 were classified as positive. Samples that did not present amplification of the JOE internal control (β-actin probe) were retested, and DNA was re-extracted from a new sample when necessary.
Extraction and amplification protocols were performed in different rooms to minimize the possibility of contamination. All experiments were performed in a laminar flow chamber, previously irradiated with ultraviolet light, and employing only sterile disposable products, including barrier tips.
Analytical Sensitivity (Limit of Detection)
The lower limit of detection (LOD) of the qPCR was defined by the amplification curve of a positive control containing 38 ng, 3.8 ng, 380 pg, 38 pg, 3.8 pg, 380 fg, 0.38 fg, and 0.038 fg of genomic DNA from adult worms diluted 1:5 in linear acrylamide solution, tested in triplicate. The mean Ct of the triplicates was used to define each point in the amplification curve. The standard curve was evaluated in terms of amplification efficiency (E), slope, and R², following the recommendations of Johnson et al. (27).
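The efficiency check follows the standard qPCR relation between the standard-curve slope and per-cycle efficiency, E = (10^(−1/slope) − 1) × 100. A minimal sketch (the slope of −3.222 is the value reported in the Results; the function name is ours):

```python
def amplification_efficiency(slope):
    """Per-cycle qPCR amplification efficiency (%) from a standard-curve slope.

    Uses the standard relation E = (10**(-1/slope) - 1) * 100; a slope of
    -3.3219 corresponds to perfect doubling of the target each cycle (E = 100%).
    """
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

# Slope reported for the S. mansoni standard curve in the Results section:
print(round(amplification_efficiency(-3.222)))  # -> 104, matching the reported E of 104%
```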
Analytical Specificity
DNA from Ancylostoma duodenale, Ascaris lumbricoides, and Fasciola hepatica, provided by professors from the Department of Parasitology, Biology Institute, of the Universidade Federal de Minas Gerais, was used in the qPCR assay to evaluate the analytical specificity. A. duodenale and A. lumbricoides are frequently found in S. mansoni co-infections, and F. hepatica is phylogenetically close to S. mansoni.
Precision Tests
The repeatability test was carried out using six DNA samples extracted from human feces (three negative and three positive for the presence of S. mansoni eggs according to the Kato-Katz technique). Repeatability was measured by the coefficient of variation (CV) obtained by retesting the same samples four times in a single assay (intra-assay test). Reproducibility was measured by the CV obtained by retesting, on three different days, a positive control containing 38 ng, 3.8 ng, 380 pg, 38 pg, 3.8 pg, 380 fg, and 0.38 fg of genomic DNA from adult worms diluted 1:5 in linear acrylamide solution.
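The CV used in both precision tests is simply 100 × sample SD / mean of the replicate Ct values. A minimal sketch (the replicate Ct values below are hypothetical, for illustration only):

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation (%): 100 * sample SD / mean, as used for the
    intra-assay (repeatability) and inter-day (reproducibility) tests."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical quadruplicate Ct values for a single sample (intra-assay test):
replicate_cts = [30.1, 30.6, 29.9, 30.4]
print(round(cv_percent(replicate_cts), 2))  # -> 1.03
```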
qPCR Validation
The validation of the qPCR was performed through a cross-sectional study carried out in the Tabuas and Estreito de Miralta districts (Figure 2), two communities endemic for schistosomiasis in the rural area of the municipality of Montes Claros, in the northern region of the state of Minas Gerais, Brazil. The prevalence in Tabuas in 2010 was estimated at 29.1% using two slides of the Kato-Katz technique at the Zoonosis Control Center of Montes Claros. There were no prevalence data for Estreito de Miralta; however, this community is close to Tabuas, and no control interventions had been carried out in either community in the 2 years prior to the current study.
All residents of the mentioned locations aged over 1 year, of both genders, who agreed to participate in the study and signed the informed consent form were included. Furthermore, the diagnostic tests were applied to assess cure after specific treatment of individuals with low parasite burden.
Stool Samples
The stool samples were provided by the participants at day 0 and examined using the Kato-Katz and Saline Gradient techniques in both locations. The participants who presented S. mansoni eggs in their stools were treated with praziquantel at 60 mg/kg for children and 50 mg/kg for adults. New stool samples were collected at 30, 90, and 180 days post-treatment, totaling four samples per participant by the end of the study. The participants who presented eggs or cysts of other parasites were treated with 400 mg of albendazole (single oral dose), as recommended by the Brazilian Ministry of Health.
Kato-Katz Technique
The fecal samples from the residents of Tabuas and Estreito de Miralta were submitted to the Kato-Katz technique using the Helm-Test® produced by Bio-Manguinhos-Fiocruz (Rio de Janeiro, RJ, Brazil). Twenty-four slides of the same stool sample, corresponding to 1,000 mg of feces, were examined to allow a quantitative comparison between the parasitological tests. Infection intensity was calculated from the number of S. mansoni eggs found in the 24 slides, expressed as eggs per gram of feces (epg). According to the WHO (4), S. mansoni infection intensity is classified as light (1-100 epg), moderate (101-400 epg), or high (>400 epg).
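The epg arithmetic and the WHO intensity classes can be sketched as follows; the 41.7 mg-per-slide figure is the standard Kato-Katz template mass (so 24 slides ≈ 1,000 mg) and is an assumption not stated explicitly in the text:

```python
def epg_from_slides(egg_counts, mg_per_slide=41.7):
    """Eggs per gram of feces (epg) from Kato-Katz slide egg counts.

    The standard Kato-Katz template holds ~41.7 mg of feces per slide, so
    24 slides correspond to ~1,000 mg and the epg is essentially the total
    egg count over the 24 slides.
    """
    total_mg = mg_per_slide * len(egg_counts)
    return sum(egg_counts) * 1000.0 / total_mg

def who_intensity(epg):
    """WHO intensity classes for S. mansoni infection."""
    if epg <= 100:
        return "light"
    if epg <= 400:
        return "moderate"
    return "high"

# 24 slides with 2 eggs each amount to ~48 epg, a light infection:
epg = epg_from_slides([2] * 24)
print(round(epg), who_intensity(epg))  # -> 48 light
```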
Saline Gradient Technique
The Saline Gradient technique was performed according to the protocol published by Coelho et al. (28). Fecal samples were filtered through a nylon screen (150 μm) and two 500 mg portions were quantified using a metal plate. The portions were subjected to a slow flow of 3% saline solution for 1 h. Subsequently, the system was closed and all remaining material was transferred to a Falcon® tube (15 ml), after which 20% formaldehyde was added to the sediment obtained (approximately 2 ml). The entire sediment was examined under an optical microscope, the helminth eggs were counted, and the S. mansoni eggs were counted in the two preparations (500 mg + 500 mg), representing eggs per gram of feces (epg).
qPCR Assay
DNA from 1,000 mg of the fecal samples obtained from residents of Tabuas and Estreito de Miralta was extracted using the QIAamp DNA Stool Mini Kit (Qiagen GmbH, Hilden, Germany), according to the manufacturer's recommendations and following the protocols for DNA Isolation from Stool for Pathogen Detection and DNA Isolation from Large Amounts of Stool. The DNA concentration was measured by absorbance at 260 nm in a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA), and the A260/A280 absorbance ratio was analyzed to verify the purity of the DNA obtained. The qPCR assay was performed on the fecal samples under the standardized conditions described above. DNA samples that did not present amplification of the human β-actin gene were retested to ensure the efficiency of DNA extraction and PCR amplification.
Data Analysis
The database was built in Microsoft Office Excel 2007 spreadsheets and analyzed using GraphPad Prism version 6.0 (San Diego, CA, USA) or OpenEpi software version 3.0 (29). The positivity, sensitivity, specificity, and accuracy rates of the parasitological and molecular tests were calculated using the OpenEpi software. The chi-square test was used for comparisons between proportions, considering a 5% significance level (30). The degree of agreement between diagnostic tests was determined by the Kappa index and interpreted according to Landis & Koch (31). Correlations between epg from the KK technique and Ct from the qPCR test results were tested using Spearman's correlation coefficient.
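The sensitivity, specificity, accuracy and Kappa values reported in the Results can all be reproduced from the 2×2 crosstab counts. A minimal sketch, illustrated with the Tabuas qPCR-versus-SG counts reported later (35 co-positive, 10 qPCR-only, 8 SG-only, 95 co-negative):

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and Cohen's kappa for an index
    test cross-tabulated against a reference test (2x2 counts)."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / n
    # Chance agreement for Cohen's kappa, computed from the marginal totals:
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - p_chance) / (1.0 - p_chance)
    return sensitivity, specificity, accuracy, kappa

# Tabuas crosstab of qPCR against the Saline Gradient technique:
sens, spec, acc, kappa = diagnostic_performance(35, 10, 8, 95)
print(round(100 * sens, 1), round(100 * spec, 1), round(kappa, 2))  # -> 81.4 90.5 0.71
```

This reproduces the sensitivity of 81.4% and the Kappa index of 0.71 reported for this comparison in the Results.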
Ethical Approval
The use of human samples was approved following the standards of the Ethical Review Committee of the IRR/FIOCRUZ, Brazil (CEPSH 03/2008) and National Committee of Ethical Research (784/2008, CONEP 14886) in accordance with the Brazilian legislation (RDC 466/2012). The written informed consent was obtained from all the participants/parents or guardians before collecting the samples.
RESULTS
qPCR Assay Standardization
A standard curve was constructed and the analytical sensitivity assay showed that the S. mansoni DNA was detected up to the seventh dilution, which corresponds to 0.38 fg. Moreover, repeatability and linearity in the standard curve were obtained up to Ct 41 (CV ranging from 0.05 to 2.7%; Slope: −3.222; E: 104%, and R²: 0.98).
The analytical specificity was assessed using DNA from Ancylostoma duodenale, Ascaris lumbricoides, and Fasciola hepatica adult worms. The results showed no unspecific amplification when using this genomic DNA in the qPCR assay. The repeatability test presented acceptable Ct variations in the four replicates of three S. mansoni-negative and three S. mansoni-positive samples; in this assay, the coefficients of variation (CV) were 1.74, 2.18, and 2.33% for the target (FAM probe) and 0.69, 0.48, and 0.40% for the internal control (JOE probe). Likewise, the reproducibility test presented consistent Ct values, with CV ranging from 1.6 to 5.4%.
qPCR VALIDATION IN TABUAS
Positivity Rates for S. mansoni and Other Parasites
In Tabuas, 84.5% (148/175) of the population participated in this study. Of these, 73 were female and 75 male, aged between 1 and 86 years. Ninety-six individuals were residents of the Tabuas district and 52 of Ribeirão de Tabuas, an adjacent location. The reasons for non-participation in the study were: 1) refusal; 2) insufficient biological sample for performing all techniques; and 3) health reasons.
The SG technique detected 43/148 participants with S. mansoni eggs in their stools, producing a positivity rate of 29.0%. Likewise, the SG technique showed the highest positivity rates in the age ranges from 10 to 19 years (57.1%) and 20 to 29 years (50%) (Figure 3). Of the 43 positive participants, 40 had low parasite loads and only three presented moderate loads.
To create a "reference test," the results obtained by KK (24 slides) and SG were combined and the positivity increased to 31.0% (46/148), with a statistical difference regarding the previous KK positivity (p = 0.04), although without statistical difference compared to SG (p = 0.70). The positivity rate obtained by the qPCR assay was 30.4%, represented by 45/148 positive participants. All samples (148/148) presented amplification of the internal control. The highest positivity rates for this assay occurred in the age ranges from 10 to 19 (52%) and 30 to 39 years (50%) (Figure 3).
Besides S. mansoni, the two combined techniques detected 37 participants positive for other parasites ( Table 2). Table 3 shows the agreement ratios between the parasitological techniques and qPCR results. Among the 45 participants positive for S. mansoni by the qPCR assay, 22 were consistent with the KK technique (two slides), one presented positive KK and negative qPCR results (with positive amplification of the human β-actin gene), and 23 were positive only by the qPCR (Kappa index: 0.56). On the other hand, 30 participants were consistent with the KK technique (24 slides), one presented positive KK and negative qPCR results (with positive amplification of the human β-actin gene), and 15 were positive only by the qPCR (Kappa index: 0.72). Furthermore, there was a negative correlation between the microscopic egg counts (epg) and Ct (r: −0.404) obtained by the KK technique (24 slides) and qPCR assay, respectively.
In the crosstabulation between SG and qPCR, eight participants presented positive results with the SG technique and negative with the qPCR (with positive amplification of the human β-actin gene). On the other hand, 10 participants presented negative results with the SG technique but were positive with the qPCR. Thirty-five positive and 95 negative results were consistent between the SG technique and qPCR assay (Kappa index: 0.71). Forty-five individuals were positive by the qPCR, of which seven cases were not detected by the "reference test." In contrast, eight cases were positive by the "reference test" and were not detected by the qPCR (with positive amplification of the human β-actin gene), resulting in a Kappa index of 0.76.
Cure Assessment in Tabuas
Table 4 shows the follow-up for cure assessment. Of the 46 positive participants treated with praziquantel, 39 participated in the follow-up.
qPCR Validation in Estreito de Miralta: Positivity Rates for S. mansoni and Other Parasites
The highest positivity rates of the KK (24 slides) technique ( Table 5) occurred from 10 to 19 years (44.4%) and 50 to 59 years (30%) (Figure 4). All positive participants presented low parasitic loads (1-100 epg). The SG technique detected 26/142 participants positive for S. mansoni eggs, with a positivity rate of 18.3%, concentrated at ages from 10 to 19 (51.9%) and 20 to 29 years (50%) (Figure 4).
Likewise, a "reference test" was created with the results obtained by the KK (24 slides) and SG techniques. The positivity increased to 24.6% (35/142), with no statistical difference regarding the previous KK (p = 0.32) and SG (p = 0.20) positivity rates. The positivity rate obtained by the qPCR assay was equal to that obtained with the SG technique (18.3%), represented by 26/142 positive participants. The highest positivity rate with qPCR occurred in the age range from 10 to 19 years (44.4%) (Figure 4). Other intestinal parasites were also detected by both parasitological techniques: the KK and SG techniques detected 14 (9.9%) individuals positive for hookworms, eight (5.6%) for Enterobius vermicularis, one (0.7%) for Ascaris lumbricoides, and three (2.1%) for Hymenolepis nana. Five participants presented co-infection with S. mansoni and hookworms and two with S. mansoni and E. vermicularis ( Table 6).
qPCR Performance in Different Scenarios
The qPCR results were crosstabulated with the parasitological techniques and the results are presented in Table 7. Of the 142 participants, 12 were co-positive and 113 were co-negative with the KK technique (two slides) and qPCR assay, three were positive by the KK technique and negative by the qPCR assay (with positive amplification of the human β-actin gene), and 14 were positive only by the qPCR assay (Kappa index: 0.52). Twenty-six participants were positive by the qPCR assay, of which eight were not detected by the KK technique with 24 slides. On the other hand, there were 10 positive cases detected by the KK technique that the qPCR assay could not detect (Kappa index: 0.59). Moreover, it is important to highlight that the Ct values from the qPCR assay showed a negative correlation (r: −0.427) with the microscopic egg counts (epg) obtained by the KK technique (24 slides).
The crosstabulation of the SG and qPCR results showed that 126 were consistent and 16 discordant. Among the discordant results, eight were qPCR positive and SG negative, and eight were qPCR negative (with positive amplification of the human β-actin gene) and SG positive (Kappa index: 0.62). There were 21/142 discordant results between the qPCR and the "reference test," of which six presented positive qPCR and negative "reference test" results. In contrast, 15 individuals presented negative qPCR (with positive amplification of the human β-actin gene) and positive "reference test" results, resulting in a Kappa index of 0.56 ( Table 7).
Cure Assessment in Estreito de Miralta
In the follow-up for cure assessment 30 days post-treatment, new stool samples were collected from 30/35 positive participants.
DISCUSSION
Despite the parasitological techniques presenting the best cost-benefit ratio, the assessment of more sensitive techniques is essential for an efficient diagnosis in endemic areas. Current scenarios show that molecular tests are a promising tool for the diagnosis and cure assessment of intestinal schistosomiasis in individuals with low parasite burden (17-20, 32, 33). The advantages of the qPCR assay include its potential for high throughput, the elimination of post-PCR handling, and the possibility of quantification. Moreover, the qPCR can be multiplexed to detect other parasites in feces using primers highly specific for each parasite of interest (33). In this study, the qPCR assay was duplexed to detect S. mansoni and the human β-actin gene, to diagnose intestinal schistosomiasis and to secure optimal amplification conditions, respectively.
Under the conditions defined in this study, the qPCR was extremely sensitive, capable of detecting 0.38 fg of S. mansoni DNA, which corresponds to approximately 0.00065 of its genome, which contains ~580 fg of DNA (34). Thus, the LOD defined in this study (0.38 fg) corresponds to less than a single cell of this multi-cellular parasite. Among the PCR assays described in the literature, those targeting the 121 bp sequence have presented the lowest LODs, ranging from 1 to 3 fg of total DNA (18-20, 35).
The qPCR assay was highly specific for detecting S. mansoni DNA. The primers used in the qPCR are genus-specific and did not amplify the DNA of Ancylostoma duodenale, Ascaris lumbricoides, or Fasciola hepatica. Furthermore, there were no false-positive results in the stool samples collected from participants infected with Enterobius vermicularis, Giardia sp., Entamoeba coli, Trichuris trichiura, Taenia sp., and Hymenolepis nana.
In Tabuas, the positivity rate presented by the KK technique increased up to six slides and remained relatively constant from 6 to 24 slides examined of the same fecal sample ( Table 1). This behavior was also observed by other authors, who emphasize that the positivity rate is directly proportional to the number of slides and fecal samples examined. Enk et al. (7) showed that the positivity rate of schistosomiasis in an experimental group of 305 participants increased from 13.8 to 19% when one and six KK slides were examined, and that the prevalence increased from 20.7 to 27.2% when three stool samples were examined. Likewise, Siqueira et al. (8) found expressive increases in the positivity rate, from 8 to 9.5% and from 12.4 to 14.8%, when one, three, six, and 12 KK slides were examined in individuals from the Buriti Seco and Morro Grande communities of Pedra Preta, a small village located in the rural area of Montes Claros, state of Minas Gerais, Brazil. These findings demonstrate the importance of evaluating a larger number of samples and slides to reduce the number of false-negative results, given the consensus on the limitation of the parasitological technique in detecting individuals with low parasite burden. Nevertheless, this approach is not applicable in epidemiological surveys due to the lack of operability in the field.
The positivity of the qPCR was higher than that of the KK (24 slides, p = 0.053) and SG techniques (p = 0.79), and similar to that of the "reference test," with no statistical difference (p = 0.91). Espírito-Santo et al. (21) also reported a qPCR positivity rate 6.8 times greater than that obtained by the results of the Kato-Katz and Spontaneous Sedimentation (HPJ) techniques combined (0.9%), in a study performed with 572 residents of a low endemicity area. Such discrepant positivity rates were also described in other studies using conventional PCR (17) and PCR-ELISA (19, 20), as well as LAMP targeting the 121 bp sequence (36).
The sensitivity of the qPCR was high (96.7%) but the specificity was lower (87.2%) when the KK (24 slides) results were taken as reference. Only one egg-positive participant diagnosed by the Kato-Katz technique was not identified by the qPCR assay, which can be explained by the absence of eggs in the sample examined. The qPCR detected 15 positive participants not identified by the Kato-Katz technique, even when 24 slides were examined. This discordance was probably due to the limitation of the parasitological technique in detecting parasite eggs in stool samples from residents of low endemicity areas, where most carriers present low parasite burdens (<100 epg). In these cases, the PCR assay detects more infections than the evaluation of many slides by the Kato-Katz technique, suggesting that it can be a useful diagnostic tool. In contrast, the sensitivity rates were lower (81.4 and 82.6%) and the specificity rates higher (90.4 and 93.4%) when the SG technique or the "reference test" was considered as reference. Thus, the accuracy ranged from 87.8 to 89.8%, with no statistical difference (p = 0.59).
In a cross-sectional population-based study, the qPCR targeting the 121 bp sequence was compared with the POC-CCA®, KK (18 slides), Saline Gradient, and Helmintex techniques. The qPCR assay presented a sensitivity of 91.4%, a specificity of 86.9%, and a Kappa index of 0.71 when the results of the three parasitological techniques were considered as a "reference test." Moreover, the qPCR assay diagnosed 86.9% of the participants with very low parasite burden (<12 epg), while the POC-CCA® diagnosed 50.8% (37). Other studies with qPCR targeting the Schistosoma cytochrome oxidase gene (38), the internal transcribed spacer-2 sequence (ITS2) (39), the SSU rRNA of S. mansoni (34), and 28S ribosomal RNA (40) have shown better performance of the qPCR compared to the parasitological techniques. Schistosoma spp. 28S ribosomal RNA can be quantitatively detected in stool, serum, and urine (40) with higher sensitivity than the Kato-Katz technique.
Furthermore, detection of a retrotransposon (SjR2), a portion of a mitochondrial gene (nad1), and cell-free parasite DNA (cfDNA) by Droplet Digital PCR (ddPCR) has been shown to be applicable to the diagnosis of schistosomiasis (41, 42). The authors also highlight that the capacity to measure infection intensity has important implications for schistosomiasis control.
It is difficult to obtain a valid comparison between parasitological techniques and PCR assays, since these methodologies rely on different principles. The discrepant results may be related to the irregular distribution of eggs in the feces when the number of eggs per gram of feces (epg) is small (43). Although the Kato-Katz technique is considered the test of choice for diagnosing schistosomiasis in fecal samples, it does not have the characteristics of a "reference test." A study showed that a qPCR targeting Schistosoma ITS2 applied in populations from Senegal (n = 197) and Kenya (n = 760), high and low endemicity areas, respectively, presented 13-15% more positivity than the KK technique (two slides) of a single stool sample (39). Moreover, the authors reported that the positivity of the qPCR assay was very similar in both areas.
The presumed cure rate of 100% post-treatment was expected. However, we observed that 5.6 and 21.6% of the individuals from Tabuas were positive for S. mansoni eggs in the stool or by qPCR, respectively, at 90 days post-treatment, and 16.7 and 37.9% at 180 days post-treatment. Similar data were found in a study performed with residents of the Pedra Preta community, in the municipality of Montes Claros, Minas Gerais, Brazil (44). An explanation for these findings might be possible reinfection by S. mansoni. Moreover, one must consider the possibility of therapeutic failure caused by an incomplete cure due to the sub-curative effect of praziquantel when used at usual doses (45). In this study, the sequential qPCR assays of praziquantel-treated participants showed a long persistence of S. mansoni circulating DNA, with a negative correlation between the microscopic egg counts (epg) using the KK technique (24 slides) and the Ct obtained by the qPCR assay. In contrast, a qPCR assay for the quantitative detection of S. mansoni and S. haematobium DNA in stool samples from a Senegalese population showed a significant correlation between the qPCR Ct values and microscopic egg counts for both Schistosoma species (38). In this case, we believe that the high positivity rate of 79.5% found by the microscopic egg counts performed on duplicate stool samples favored the positive correlation.
There are insufficient data regarding the clearance of Schistosoma DNA post-treatment. However, it is necessary to consider the possibility of unisexual schistosome infection, in which male or female worms would still release antigens and DNA that could be detected by immunological and molecular techniques, respectively (46). It is also possible that Schistosoma DNA continues to be released from eggs or killed worms that are retained in tissue granulomas. Thus, circulating DNA from schistosomiasis patients is not entirely cleared and might be detected by qPCR assays. Wichman et al. (23, 47) proposed that circulating free DNA may be detected for more than 1 year, since inactive eggs may release DNA very slowly. In some patients with chronic schistosomiasis presenting a higher number of Schistosoma eggs, circulating free DNA may remain for considerably longer (48). Moreover, the authors highlight that the decrease of Schistosoma circulating free DNA pre- and post-treatment may be useful for monitoring patients under therapy.
In Estreito de Miralta, the positivity obtained by the KK technique (24 slides) was higher than that of the SG technique (p = 0.76) and qPCR assay (p = 0.76). In this district the positivity rate increased steadily with the number of slides examined, in contrast to the behavior observed in Tabuas. In Estreito de Miralta all participants presented low parasite loads, and this fact possibly influenced the relationship between the number of slides and the positivity rates ( Table 5).
In both districts (Tabuas and Estreito de Miralta), the highest positivity rates for schistosomiasis were found in participants aged from 10 to 19 years, followed by participants aged between 20 and 29 (Figures 3 and 4). Burlandy-Soares et al. (49) also found high positivity when using the KK technique in these age ranges in the population of Pedro de Toledo, a low endemicity area of the state of São Paulo, Brazil. These findings clearly show the relevance of these age groups for the disease epidemiology.
Despite the low parasite load presented by the infected participants in Estreito de Miralta, the qPCR presented a positivity rate of 18.3%, approximately twice as high as that obtained by the Kato-Katz technique (two slides), as performed in the Brazilian Schistosomiasis Control Program. Thus, approximately 50% of the individuals infected with S. mansoni continue to eliminate eggs and contribute to maintaining the transmission of the disease in the area. This evidence emphasizes the urgent need for a more sensitive diagnostic method for surveilling schistosomiasis cases in low transmission areas (25).
Apparently, the sensitivity rates of the qPCR assay (64.3, 69.2, and 57.1%) were impaired and the specificity rates (92.9, 93.1, and 94.4%) favored when considering the KK (24 slides) and SG techniques or the "reference test" as true results. These data indicate the influence of individual parasite burden on the performance of the diagnostic techniques used. Some authors have reported that false-negative PCR results may be due to factors such as inhibition of the amplification reaction by fecal compounds or DNA degradation during transportation of the sample from the field (19, 43). However, all samples from Estreito de Miralta presented amplification of the human β-actin gene, ensuring that the negative qPCR results correspond to samples truly negative for Schistosoma DNA. It is a consensus that the specificity of any diagnostic test benefits when it is applied to individuals from low endemicity areas, just as it is well established that sensitivity benefits when the test is applied to individuals from high endemicity areas.
In contrast to the presumed cure rate observed in the population of Tabuas, the presumed cure rates in Estreito de Miralta were high and more consistent in the follow-up at 30, 90, and 180 days post-treatment, with no statistical difference ( Table 8). Surprisingly, the presumed cure rate in the population of the Estreito de Miralta district at 30 days post-treatment, measured by the parasitological techniques and qPCR, was only 93.3%. It is opportune to highlight that none of these participants received treatment during the acute phase of the disease, when juvenile schistosomes are less susceptible to praziquantel (50). Furthermore, an experimental study using the qPCR assay showed that adult schistosomes at 6 weeks post-infection were susceptible to praziquantel, while juvenile schistosomes at 4 weeks post-infection were not (51).
In conclusion, the diagnostic techniques presented different performance in the populations of the two districts (Tabuas and Estreito de Miralta). In Tabuas, the positivity rate was higher, with participants presenting low, moderate, and high parasite burdens and a considerable percentage of participants remaining positive for S. mansoni by the qPCR assay after undergoing treatment with the recommended dose of praziquantel. In contrast, all participants in Estreito de Miralta were classified as low parasite burden carriers, with less therapeutic failure during the cure assessment performed in this study. Based on these data, we suggest that the transmission force of the parasite in Tabuas is higher than that in Estreito de Miralta.
The qPCR was an acceptable diagnostic tool, with added value over microscopy for the diagnosis of intestinal schistosomiasis in fecal samples, which makes it particularly useful in low transmission areas and, consequently, in post-treatment settings. Moreover, the qPCR can be multiplexed for the diagnosis of other intestinal parasites, making the assay even more useful. On the other hand, because of its relatively high costs, the qPCR assay is not cost-effective for the routine diagnosis of schistosomiasis in endemic countries. However, the qPCR assay has become less expensive over time, with an increasing number of research centers equipped with qPCR infrastructure. Thus, the qPCR assay may come to be used in the routine diagnosis of helminth infections in low-income countries.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
LMVS aided with the field work and parasitological techniques, and performed the molecular techniques and analyzed the results. CS assisted with the qPCR assay and analyzed the results. ÁAO aided with the enrollment of the participants, assisted with the parasitological techniques, and assisted with the field work. NFFC assisted with the enrollment of the participants and field work.
LG assisted with the qPCR assay and critically reviewed the manuscript for intellectual content. AR supported with the study design and critically reviewed the manuscript for intellectual content. PMZC supported with the study design and critically reviewed the manuscript for intellectual content. EO assisted with the study design, data analysis, drafted the manuscript, and critically reviewed the manuscript for intellectual content. All authors contributed to the article and approved the submitted version.
Self-Compassion and Bedtime Procrastination: an Emotion Regulation Perspective
The current study extended previous research on self-compassion and health behaviours by examining the associations of self-compassion to bedtime procrastination, an important sleep-related behaviour. We hypothesized that lower negative affect and adaptive emotion regulation would explain the proposed links between self-compassion and less bedtime procrastination. Two cross-sectional online studies were conducted. Study 1 included 134 healthy individuals from the community (mean age 30.22, 77.4% female). Study 2 included 646 individuals from the community (mean age 30.74, 68.9% female) who were screened for the absence of clinical insomnia. Participants in both studies completed measures of self-compassion, positive and negative affect and bedtime procrastination. Participants in study 2 also completed a measure of cognitive reappraisal. Multiple mediation analysis in study 1 revealed the expected indirect effects of self-compassion on less bedtime procrastination through lower negative affect [b = −.09, 95% CI = (−.20, −.02)], but not higher positive affect. Path analysis in study 2 replicated these findings and further demonstrated that cognitive reappraisal explained the lower negative affect linked to self-compassion [b = −.011; 95% CI = (−.025; −.003)]. The direct effect of self-compassion on less bedtime procrastination remained significant. Our novel findings provide preliminary evidence that self-compassionate people are less likely to engage in bedtime procrastination, due in part to their use of healthy emotion regulation strategies that downregulate negative mood.
Introduction
Whether conceived of as a momentary mindset or as an enduring dispositional tendency, self-compassion involves taking a kind, accepting and non-judgmental stance towards oneself in times of failure or difficulty, responding mindfully to the negative emotions arising from difficulties, and acknowledging that one is not alone in suffering (Neff 2003b). A growing evidence base indicates that being self-compassionate when facing one's flaws and failures is linked to an array of beneficial health-related outcomes. Self-compassion is associated with fewer physical symptoms (Dunne et al. 2016;Terry et al. 2013), lower perceived stress (Allen and Leary 2010;Homan and Sirois 2017;Sirois et al. 2015b) and attenuated physiological responses to stress (Arch et al. 2014;Breines et al. 2014). Self-compassion has also been identified as an important predictor of a range of different health behaviours, including healthy eating (Adams and Leary 2007;Schoenefeld and Webb 2013), exercise (Magnus et al. 2010), smoking cessation (Kelly et al. 2010), medical adherence (Dowd and Jung 2017;Sirois and Hirsch 2018), seeking medical care (Terry et al. 2013) and general health-promoting behaviours (Dunne et al. 2016;Sirois et al. 2015a). Importantly, there is emerging evidence that the link between self-compassion and better physical health is explained in part by better practice of health behaviours (Dunne et al. 2016;Homan and Sirois 2017). Although self-compassion is often examined as a dispositional quality in relation to health, several studies have now demonstrated that self-compassion levels can be increased and maintained with simple interventions (e.g. Neff and Germer 2013). Understanding the ways in which self-compassion can be beneficial for promoting key health behaviours, as well as the underlying processes involved, is an important aim for promoting health.
Terry and Leary (2011) proposed that self-compassion enhances the self-regulation of health behaviours through its links to adaptive self-regulatory processes, such as setting goals, taking action, monitoring ongoing behaviour and regulating emotions. However, evidence supporting the links between self-compassion and global measures of self-regulation is scant and inconclusive. In one of the few studies that tested self-compassion in relation to self-regulation, self-compassion was positively correlated with six separate self-regulatory processes, but the link between self-compassion and intentions to seek prompt medical care was not explained by the overall self-regulation score (Terry et al. 2013).
Theory and evidence suggest that self-compassion has strong links to one specific aspect of self-regulation, emotion regulation, and that this may be an important explanatory pathway linking self-compassion and health behaviours. The three main components of self-compassion (Neff 2003b), self-kindness (vs. self-judgment), common humanity (vs. isolation) and mindfulness (vs. over-identification), each have consequences for emotion regulation, that is, the upregulation of positive affect and the downregulation of negative affect (Gross and John 2003). For example, responding self-compassionately to the inevitable failures that occur while trying to stay on track with health behaviour goals means being kind and accepting of one's failings (self-kindness), acknowledging that making mistakes while trying to get healthier is part of the human condition (common humanity), and not becoming over-identified with feelings of guilt or frustration (mindfulness). In short, having compassion for oneself while struggling to maintain healthy behaviours translates into healthy emotion regulation. Results from a narrative review support this proposition, and specifically suggest that self-compassion is associated with an emotion regulation style that is typified by higher levels of positive affect, faster recovery from and lower reactivity to stress, and the use of adaptive emotion regulation capacities and strategies (Finlay-Jones 2017). For example, research has demonstrated that self-compassion is associated with emotion-focused coping strategies that increase positive affect, such as acceptance and positive reappraisal, and is negatively related to coping strategies that increase negative affect, such as self-blame (Allen and Leary 2010;Sirois et al. 2015a, b). There is also evidence that trait self-compassion is associated with physiological measures of emotion regulation, such as heart rate variability (Svendsen et al. 2016).
Cognitive reappraisal, reframing a situation to change the way it is emotionally responded to (Gross 1998), is one emotion regulation strategy that is effective for downregulating negative emotions (Gross and John 2003). Self-compassion has been linked to cognitive reappraisal for decreasing negative mood (Diedrich et al. 2016), suggesting that cognitive reappraisal is an emotion regulation strategy with particular relevance for understanding how self-compassion may be beneficial for regulating emotions.
With respect to health behaviours, a nascent body of research indicates that the adaptive emotional responses associated with self-compassion may account for why self-compassionate people engage in better health behaviours. In one study of emerging adults, the link between self-compassion and intentions to engage in health behaviours was explained in part by low levels of negative affect (Sirois 2015d). There is also evidence that the associations between self-compassion and the practice of a range of health behaviours, including healthy eating, exercise, stress reduction and getting proper sleep, can be explained in part by the healthy affective balance associated with self-compassion. Across eight samples, the indirect effects of self-compassion on health behaviours were significant for both positive and negative affect (Sirois et al. 2015a, b), suggesting that emotion regulation may play a key role in facilitating positive health behaviours for self-compassionate individuals.
This research is consistent with the temporal-affective self-regulation resource (TASRR) model of health behaviours (Sirois 2015b, d, 2017), which highlights the central role of affective and temporal internal resources, and the qualities which deplete these resources, for regulating health behaviour. The TASRR model builds on theory and research indicating that positive affect can be viewed as a self-regulatory resource that bolsters the practice of health behaviours (Pressman and Cohen 2005;Sirois 2015d;Tice et al. 2007), and posits that individual differences characterized by positive affect will therefore be linked to successful self-regulation of health behaviours. In contrast, negative affect can interfere with effective self-regulation (Sirois 2015b;Wagner and Heatherton 2015), and individual differences that are characterized by negative affect will therefore be prone to misregulation of health behaviours. The TASRR model has been previously validated across several individual differences, including self-compassion (Sirois 2015d), perfectionism (Sirois 2015b), procrastination (Sirois 2015c) and the big five traits (Sirois and Hirsch 2015). It is therefore reasonable to expect, as Terry and Leary (2011) proposed, that self-compassion may equip individuals with the necessary (affective) self-regulatory resources to engage in important health behaviours.
One important health behaviour that has received little empirical study with respect to self-compassion is sleep behaviour. Sleep deprivation increases people's risk of getting seriously ill through infectious diseases, cancer, cardiovascular problems or depression (Irwin et al. 2016;Strine and Chapman 2005). Behavioural factors strongly affect people's sleep quality and quantity (Barber et al. 2013;Brown et al. 2002;Nauts and Kroese 2017), for example, because people unnecessarily go to bed too late, despite expecting to be worse off as a result of doing so (bedtime procrastination; Kroese et al. 2016). Bedtime procrastination is highly prevalent, with 74% of people in a representative Dutch sample reporting that they unnecessarily delay going to bed at least once a week (Kroese et al. 2014b), and moderate levels of bedtime procrastination reported among a sample from the USA (Kroese et al. 2014a). As a result of delaying their bedtime, people feel more fatigued and report shorter sleep times (Kroese et al. 2014b). Not surprisingly, bedtime procrastination is related to general procrastination.
Like other types of procrastination, bedtime procrastination is related to trait self-control (Kroese et al. 2014b), suggesting that people need to exert effort to hit the pillow at a reasonable hour. In line with this view, in a qualitative study, bedtime procrastinators indicated that it often requires effort to quit leisurely, rewarding activities to go to sleep (Nauts et al. Forthcoming). Moreover, many bedtime procrastinators in this study indicated that they delayed their bedtimes after tiring or stressful days to allow themselves time to watch TV or play video games to help them unwind. Thus, bedtime procrastination may serve as a form of short-term mood repair: after a long and stressful day, people want to watch a movie, play video games or engage in other leisure activities because doing so makes them feel good. In this respect, bedtime procrastination may be similar to procrastination in general: rather than working on a potentially stressful task, procrastinators often "give in to feel good", because doing so provides short-term mood repair (Sirois and Pychyl 2013).
Research into the activities that bedtime procrastinators engage in rather than sleeping reveals a range of different activities. A large majority of participants indicates that they engage in media use: they watch "just one more" episode of their favourite TV show, surf the web for funny videos of cats or play video games until the break of dawn (Kroese et al. 2014a, b;Kroese et al. 2016). Although media use can be replenishing with respect to self-regulation (Reinecke 2009), it can also make people feel worse when used as a means to avoid important tasks (Myrick 2015). When people have few self-regulatory resources at their disposal, they are more likely to feel guilty as a result of using media (Reinecke et al. 2014), which may further increase the need for short-term mood repair.
Accordingly, we propose that bedtime procrastination can function as an inadequate emotion regulation strategy to regulate immediate mood. This view is consistent with theory and research on general procrastination, which posits that affect regulation is a key factor in the maintenance of procrastination (Pychyl and Sirois 2016;Sirois and Pychyl 2013), and that bolstering emotion regulation skills reduces procrastination (Eckert et al. 2016). Procrastination can instigate a vicious cycle in which people procrastinate to regulate mood, but feel bad as a result of doing so, which increases their propensity to procrastinate (Sirois and Giguère 2018;Sirois and Kitner 2015). In the present research, we propose that these findings may extend to bedtime procrastination. Specifically, bedtime procrastinators delay going to bed to help themselves cope with their negative mood. As a result of doing so, they are more likely to become sleep deprived, which further limits their capacity to cope with stress, making them increasingly likely to engage in bedtime procrastination.
Current evidence suggests that self-compassion is associated with the practice of health behaviours, and that adaptive emotional responses to personal flaws and failures that upregulate positive affect and downregulate negative affect might explain this association. Yet to date, there is little (if any) research directly testing whether emotion regulation, and cognitive reappraisal in particular, accounts for the adaptive levels of positive and negative affect associated with self-compassion that are proposed to facilitate health behaviours. In addition, self-compassion has not been examined in relation to bedtime procrastination, an important sleep-related behaviour that has links to poor self-regulation (Kroese et al. 2014b), and possibly poor emotion regulation (Sirois and Pychyl 2013). Consistent with research demonstrating that self-compassion is associated with lower levels of general procrastination (Sirois 2014), we expected that self-compassionate people would engage in less bedtime procrastination. We also hypothesized that the high positive affect and low negative affect associated with being self-compassionate would account for less bedtime procrastination. These hypotheses were tested in two samples that were non-clinical with respect to sleep disorders. In people with sleep disorders, it is difficult to ascertain whether late bedtimes are a result of a sleep disorder (e.g. because people are unable to sleep at an earlier time) or bedtime procrastination (e.g. because people could have gone to bed earlier, but choose to watch television instead).
Study 1
In study 1, we tested the linkages of dispositional self-compassion to sleep quality and behaviours, with the expectation that self-compassion would be associated with better sleep quality, less difficulty falling asleep and less bedtime procrastination. We also tested the mediating roles of positive and negative affect for linking self-compassion to bedtime procrastination.
Method
Participants A sample of 134 people (mean age = 30.22, SD = 13.5; 77.4% female) participated in the study. Just over half (52.2%) of the participants were not students, and among those who were students, the majority were undergraduate students (41.0% of the sample).
Procedure
Ethical clearance for the data collection was obtained through the Institutional Review Board prior to recruitment and data collection. A convenience sample of participants was recruited via emails sent to a university student volunteer list, through notices placed on social media, and via online psychology web site ads. Participants completed an online survey after clicking "I agree" to provide consent. Data were collected between December and January.
Measures
Participants completed basic demographic questions (age, gender and student status) and the following measures.
Self-Compassion
Participants completed the 26-item Self-Compassion Scale (SCS; Neff 2003a), which assesses the three main components of self-compassion and their negative counterparts: self-kindness (self-judgment), common humanity (isolation) and mindfulness (over-identification). The SCS includes equal numbers of items worded positively ("I try to be loving towards myself when I'm feeling emotional pain") and negatively ("I'm disapproving and judgmental about my own flaws and inadequacies"). The subscales are best explained by a single higher order factor of self-compassion (Neff 2003a). Items, prefaced with the statement "How I typically act towards myself during difficult times", are rated on a five-point scale from 1 (almost never) to 5 (almost always), with the average of the items (after reverse coding) yielding a score in which higher values reflect higher levels of self-compassion. The SCS demonstrates good convergent and discriminant validity and excellent test-retest reliability in previous research (α = .93) (Neff 2003a;Neff and Pommier 2013). In the current study, the SCS demonstrated very good reliability (see Table 1).
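The scoring rule described above (reverse-code the negatively worded items on the 1-5 scale, then average all items) can be sketched as follows. The item values and which items are negatively worded are hypothetical illustration data, not the actual SCS key.

```python
import numpy as np

def score_scale(responses, negative_items, scale_max=5):
    """Score a questionnaire by reverse-coding negatively worded
    items (on a 1..scale_max response scale) and averaging all items.

    responses      -- 1-D sequence of item ratings (1..scale_max)
    negative_items -- 0-based indices of negatively worded items
    """
    r = np.asarray(responses, dtype=float).copy()
    r[negative_items] = (scale_max + 1) - r[negative_items]  # reverse-code
    return r.mean()

# Hypothetical 6-item example with items 3-5 negatively worded:
score = score_scale([4, 5, 3, 2, 1, 2], negative_items=[3, 4, 5])
```

The same reverse-code-then-average rule applies to the Bedtime Procrastination Scale described below, with its own set of negatively worded items.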
Bedtime Procrastination Bedtime procrastination was measured using the nine-item Bedtime Procrastination Scale (BPS; Kroese et al. 2014a, b), which assesses participants' propensity to unnecessarily delay their bedtimes. The BPS contains four positively worded items ("I go to bed later than I intended") and five negatively worded items ("I can easily stop with my activities when it is time to go to bed"), which are rated on a five-point scale (almost never to almost always). The average of these items (after the negatively worded items have been reverse coded) reflects the extent to which people engage in bedtime procrastination, with higher scores reflecting more procrastination. The BPS has good internal and external validity and acceptable test-retest reliability (r = .79; Kroese et al. 2014a, b). In the current study, reliability for the BPS was very good (see Table 1).
Sleep Behaviour Sleep behaviour was assessed with two questions adapted from the Pittsburgh Sleep Quality Index (PSQI; Buysse et al. 1989), a widely used and well-validated measure of seven different aspects of sleep quality. For the purposes of this study, and as a descriptive assessment of participants' sleep behaviours, only the PSQI global sleep quality and sleep difficulty items were used. Participants rated their sleep quality by responding to the question "During the past week, how would you rate your sleep quality overall?" on a scale from 1 (very good) to 4 (very bad), with higher scores reflecting poorer sleep quality. How often they had difficulty getting to sleep within 30 min during the past week was rated on a scale from 1 (not during the past week) to 4 (three or more times a week), with higher scores reflecting more frequent sleep difficulties.
Positive and Negative Affect The positive and negative affect subscales of the Positive and Negative Affect Schedule (PANAS; Watson et al. 1988) together comprise 20 words describing different affective states (e.g. happy, upset), with 10 items for each of the positive and negative affect scales. Respondents rated the extent to which they were currently experiencing each of these affective states on a five-point Likert scale ranging from 1 (very slightly or not at all) to 5 (extremely). The PANAS has demonstrated good psychometric properties, including good discriminant validity compared to measures of anxiety and depression, and good internal reliability (α = .88) (Crawford and Henry 2004). In the current study, reliabilities for the PANAS subscales were very good (see Table 1).
Data Availability Statement All data are available upon request from the authors.
Results
The results of the correlation analysis among the study variables are presented in Table 1. Consistent with our hypotheses, self-compassion was negatively associated with bedtime procrastination, trouble falling asleep within 30 min, poor sleep quality and negative affect, and positively associated with positive affect. However, only negative affect (but not positive affect) was significantly associated with bedtime procrastination. Given that positive affect was not related to bedtime procrastination, the planned multiple mediation model was abandoned and a simple mediation model with negative affect was tested. The significance of the indirect effects (mediation) of self-compassion on bedtime procrastination through negative affect was evaluated using the SPSS macro PROCESS (Hayes 2013), which employs a bootstrapping resampling procedure that draws k bootstrapped samples from the data to estimate the indirect effect and its confidence interval (CI). The current analyses used 5000 bootstrap resamples and bias-corrected 95% confidence intervals. The extent to which negative affect explained the variance in the relationship between self-compassion and bedtime procrastination was estimated with the kappa-squared statistic (κ²). This statistic is independent of sample size and estimates the observed indirect effect as a proportion of the maximum possible indirect effect that could have been obtained (Preacher and Kelley 2011).
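The bootstrapped indirect-effect test described above was run in SPSS with the PROCESS macro. As a rough illustration of the underlying procedure (not the authors' code), a percentile-bootstrap version of the a×b product-of-coefficients estimate can be sketched on simulated data; the variable names, effect sizes and plain-percentile interval are assumptions made for brevity (the paper used bias-corrected intervals).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the study variables (hypothetical data):
# X = self-compassion, M = negative affect, Y = bedtime procrastination
n = 134
x = rng.normal(size=n)
m = -0.5 * x + rng.normal(size=n)            # a path: X -> M
y = 0.4 * m - 0.1 * x + rng.normal(size=n)   # b path plus direct effect

def indirect_effect(x, m, y):
    """Product-of-coefficients (a*b) estimate of the indirect effect."""
    a = np.polyfit(x, m, 1)[0]                          # slope of M ~ X
    design = np.column_stack([m, x, np.ones_like(x)])   # Y ~ M + X
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]    # slope on M
    return a * b

point = indirect_effect(x, m, y)

# Percentile bootstrap with 5000 resamples of the cases
boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(5000))])
ci = np.percentile(boot, [2.5, 97.5])
```

Resampling whole cases (rows) rather than residuals preserves the joint distribution of X, M and Y, which is what makes the bootstrap appropriate for the non-normal sampling distribution of a×b.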
The analyses revealed the expected significant indirect effects of self-compassion on bedtime procrastination through negative affect, with the overall model explaining 8% of the variance in bedtime procrastination (see Table 2). Of the variance in bedtime procrastination that was explainable by self-compassion, 6% was explained by negative affect (κ² = .06).
Study 2
In study 2, we expanded this model to include cognitive reappraisal as a precursor of positive and negative affect, to directly test the proposition that healthy emotion regulation explains why self-compassionate people engage in less bedtime procrastination (see Fig. 1). We chose cognitive reappraisal as the emotion regulation strategy given evidence that cognitive reappraisal effectively downregulates negative affect and, to a lesser extent, upregulates positive affect (Engen and Singer 2015), and that self-compassion is linked to cognitive reappraisal for decreasing negative mood (Diedrich et al. 2016).
Method
Participants A sample of 810 people completed an online survey. Of this sample, 103 participants who met clinical criteria for insomnia and 61 participants with missing responses on the measure of insomnia were removed, leaving a final sample of 646 (mean age = 30.74, SD = 12.2; 68.9% female) that was analysed for this study.
Procedure
Ethical clearance for the data collection was obtained from the Institutional Review Boards prior to recruitment and data collection. Participants were recruited via a notice sent out to a university volunteer list, through notices placed on social media, and via online psychology web site ads. All participants were offered a chance to win a gift voucher worth £25. Data were collected between January and March.
Measures
Participants completed demographic questions (age and gender), and the Bedtime Procrastination Scale (Kroese et al. 2014a, b) that was completed in study 1, along with the following measures.
Self-Compassion and Affect
Participants completed the short 12-item version of the Self-Compassion Scale (SCS-12; Raes et al. 2011) and a short 10-item version of the PANAS-X (Watson et al. 1988), presented as a visual analogue scale with six negative affect (distressed, upset, guilty, angry, dissatisfied with self, ashamed) and four positive affect (inspired, proud, thankful, hopeful) adjectives scored on an eight-point rating scale (1 = not at all to 8 = extremely). For negative affect, items were chosen from the PANAS-X to reflect distress and guilt, two affective states known to be linked to procrastination (Blunt and Pychyl 2005;Flett et al. 2012). For positive affect, items were chosen to reflect positive self-conscious (pride) and future-oriented (inspired, hopeful) affective states, as these have been linked to lower procrastination behaviour (Blouin-Hudon and Pychyl 2015; Giguère et al. 2016). An additional item was chosen to reflect gratitude, a positive affective state associated with healthy sleep behaviour (Wood et al. 2009). Participants additionally completed the following new measures for this study. Scale descriptives are presented in Table 3.
Insomnia The Insomnia Severity Index (ISI; Bastien et al. 2001) is a seven-item measure for screening insomnia and evaluating sleep difficulties. Items are rated on a five-point scale ranging from 0 to 4, in two sets. The first set of three questions asks about the severity of insomnia symptoms (none to very severe), and the remaining four questions focus on perceptions of sleep problems, including dissatisfaction with sleep and the noticeability of, distress caused by, and interference from sleep problems. The ISI has demonstrated excellent psychometric properties in previous research (Morin et al. 2011). In the current study, the ISI was used to screen out participants meeting clinical criteria for insomnia, as indicated by scores above the recommended subclinical cut-off score of 14 (Morin et al. 2011).
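A minimal sketch of the screening rule above: participants scoring above the subclinical cut-off of 14, or with missing ISI responses, are excluded. The scores in the example are hypothetical.

```python
ISI_CUTOFF = 14  # recommended subclinical cut-off (Morin et al. 2011)

def screen_participants(isi_scores, cutoff=ISI_CUTOFF):
    """Retain participants at or below the cut-off; drop missing scores."""
    return [s for s in isi_scores if s is not None and s <= cutoff]

# Hypothetical total scores (each of 7 items is rated 0-4, so 0-28 overall)
retained = screen_participants([3, 15, None, 9, 22, 14])  # -> [3, 9, 14]
```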
Emotion Regulation Individual differences in the use of emotion regulation strategies were assessed with the Emotion Regulation Questionnaire (ERQ; Gross and John 2003), a 10-item measure of two strategies: cognitive reappraisal (6 items) and expressive suppression (4 items). Items are rated on a scale ranging from 1 (strongly disagree) to 7 (strongly agree), and subscale scores are averaged, with higher scores indicating greater use of the emotion regulation strategy. The subscales of the ERQ have demonstrated good psychometric properties in previous research, including good predictive validity and internal consistency (Gross and John 2003). For the current study, only the cognitive reappraisal subscale was analysed, as previous research indicates that it has associations with self-compassion and with both positive and negative affect (Diedrich et al. 2016;Engen and Singer 2015).
Data Analyses
Correlation analyses tested the proposed links between self-compassion, reappraisal, positive and negative affect and bedtime procrastination. In multivariate analyses, we tested the hypothesis that self-compassion was associated with greater reappraisal, in turn with greater positive and less negative affect, and then with less bedtime procrastination (see Fig. 1), using path analysis via Mplus 7.4 (Muthen and Muthen 2013). Age and respondents' sex were statistically accounted for in the model (see Fig. 1). Full-information maximum likelihood (FIML; Arbuckle 1996) estimation procedures were employed. The total, direct and indirect effects of self-compassion on bedtime procrastination were estimated, along with direct effects of self-compassion on reappraisal and on positive and negative affect, and direct effects of each of these latter variables on bedtime procrastination. Since the path model was fully saturated (i.e. df = 0), fit indices were uninformative. Consequently, our primary interest was the decomposition of the total predictive effects of self-compassion on bedtime procrastination into direct and indirect effects (see MacKinnon et al. 2004). Indirect effects were tested using the bias-corrected bootstrap method with 10,000 resamples and 95% bias-corrected confidence intervals (CIs). This method provides a more accurate balance between type 1 and type 2 errors compared to other methods used to test indirect effects (MacKinnon et al. 2004).
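The bias-corrected bootstrap interval used here shifts the percentile end-points by a correction factor z0, estimated from the share of bootstrap estimates falling below the full-sample point estimate. A minimal sketch of that adjustment follows; it is illustrative (with synthetic data), not the Mplus implementation.

```python
import numpy as np
from statistics import NormalDist

_N = NormalDist()  # standard normal, used for the BC adjustment

def bc_bootstrap_ci(boot_estimates, point_estimate, alpha=0.05):
    """Bias-corrected (BC) bootstrap confidence interval.

    z0 measures the median bias of the bootstrap distribution; the
    percentile end-points are shifted by 2*z0 before being read off.
    """
    boot = np.asarray(boot_estimates, dtype=float)
    prop_below = min(max(float(np.mean(boot < point_estimate)), 1e-6), 1 - 1e-6)
    z0 = _N.inv_cdf(prop_below)
    lo = _N.cdf(2 * z0 + _N.inv_cdf(alpha / 2))
    hi = _N.cdf(2 * z0 + _N.inv_cdf(1 - alpha / 2))
    return np.quantile(boot, [lo, hi])

# With a symmetric bootstrap distribution z0 is ~0 and the BC interval
# reduces to the ordinary percentile interval (synthetic example):
rng = np.random.default_rng(1)
boot = rng.normal(loc=-0.011, scale=0.005, size=10_000)
ci = bc_bootstrap_ci(boot, point_estimate=np.median(boot))
```

When the bootstrap distribution is skewed (as product-of-coefficients estimates typically are), z0 is non-zero and the BC interval is asymmetric around the point estimate, which is the property MacKinnon et al. (2004) credit for its better type 1/type 2 balance.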
Results
All variables were normally distributed (i.e. skewness and kurtosis values all between − 1 and + 1). Four outliers were identified for negative affect, and four outliers were identified for reappraisal. To minimize the influence of these cases on analyses, we replaced any values larger than 3 SDs above the group mean with the value equal to the group mean plus 3 SDs. Descriptive information along with correlations between all study variables is presented in Table 3. Results indicated that self-compassion was positively associated with positive affect, reappraisal and age; self-compassion was negatively associated with negative affect and bedtime procrastination. Reappraisal was positively associated with positive affect and negatively associated with negative affect and bedtime procrastination. Positive affect was related to less negative affect and bedtime procrastination and greater age, whereas negative affect was associated with greater bedtime procrastination along with being younger. Finally, bedtime procrastination was negatively associated with age. As can be seen in Table 4, the overall model accounted for 17% of the variance in bedtime procrastination. Examination of the correlations revealed that self-compassion accounted for 9.61% of the variance in bedtime procrastination, which was consistent with the findings from study 1. Cognitive reappraisal and negative affect accounted for 24.01 and 23.04% of the variance in bedtime procrastination, respectively, whereas positive affect accounted for 19.36% of the variance in bedtime procrastination.
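The outlier rule above (replace any value more than 3 SDs above the group mean with the value equal to the mean plus 3 SDs) can be sketched as follows, on hypothetical data; the variable names and simulated scores are illustrative only.

```python
import numpy as np

def truncate_high_outliers(values, n_sd=3.0):
    """Cap values more than n_sd standard deviations above the mean
    at mean + n_sd * SD, using the statistics of the original data."""
    v = np.asarray(values, dtype=float).copy()
    cap = v.mean() + n_sd * v.std(ddof=1)
    v[v > cap] = cap
    return v

# Hypothetical example: one extreme negative-affect score gets capped
rng = np.random.default_rng(2)
scores = rng.normal(loc=0.0, scale=1.0, size=500)
scores[0] = 10.0                       # artificial extreme case
cleaned = truncate_high_outliers(scores)
```

This truncation keeps the case in the analysis (unlike listwise deletion) while limiting its leverage on the correlation and path estimates.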
The results from the path analyses are presented in Table 4 (path model results of the effects of self-compassion on bedtime procrastination through cognitive reappraisal and negative affect in study 2). As expected, after accounting for the effects of age and respondents' sex, self-compassion shared a direct and positive relationship with reappraisal and was also linked with higher levels of positive affect and lower levels of negative affect. Self-compassion was also directly and negatively associated with bedtime procrastination. Reappraisal was associated with higher levels of positive affect and with lower levels of negative affect. Positive affect was not associated with bedtime procrastination, whereas negative affect was associated with greater bedtime procrastination. With respect to indirect effects, self-compassion shared significant and negative indirect associations with bedtime procrastination via reappraisal and negative affect [b = −.011; 95% CI = (−.025; −.003)] and through negative affect [b = −.062; 95% CI = (−.112; −.020)]. Following recommendations made by Simmons et al. (2011), we also conducted the path analyses without any covariates in the model and there were no meaningful differences in the results.
Discussion
Across two studies, individuals high in self-compassion reported high levels of positive affect and low levels of negative affect, and engaged in less bedtime procrastination. Overall, the findings were somewhat consistent with our proposed emotion regulation perspective: low negative affect, but not high positive affect, explained why self-compassionate people reported less bedtime procrastination. Importantly, the path analysis from study 2 indicated that the effects of self-compassion on bedtime procrastination operate partly through the use of cognitive reappraisal, an adaptive emotion regulation strategy (Gross and John 2003), which was in turn linked to low negative affect. However, the indirect effects solely through negative affect, and the direct effect on bedtime procrastination, were also significant, suggesting that there are other reasons, beyond having an effective emotion regulation style (Finlay-Jones 2017), why self-compassionate people tend not to stay up later than they intended. Our findings confirm and extend theory and research on self-compassion and health behaviours in several important ways. This research tested and found that self-compassion is related to less bedtime procrastination, an important sleep-related behaviour. In doing so, the current research adds to a growing body of literature indicating that being self-compassionate can not only have benefits for the practice of important health behaviours (Dunne et al. 2016;Sirois et al. 2015a;Terry et al. 2013) but can also be beneficial for reducing health-compromising behaviours (e.g. Kelly et al. 2010). If we consider the consequences of bedtime procrastination for sleep duration and daily fatigue (Kroese et al. 2014a, b), and the consequences of poor sleep for health (e.g. Irwin et al. 2016), then bedtime procrastination can be viewed as a sleep behaviour that may compromise health.
The current study suggests that low negative affect rather than high positive affect is implicated in the association between self-compassion and bedtime procrastination. Recent controversy regarding the factor structure of the self-compassion construct suggests one possible explanation for this finding. Some researchers have contended that self-compassion as measured by the Self-Compassion Scale (Neff 2003a) reflects positive (self-compassion) and negative (self-criticism) poles that should be assessed separately rather than as a whole (Costa et al. 2015;Montero-Marín et al. 2016). For example, a meta-analysis of 18 studies found that the negatively scored items on the self-compassion scale (self-criticism, isolation and over-identification) were more strongly related to psychopathology than the positive items (self-kindness, common humanity and mindfulness), prompting the researchers to conclude that using a total scale score will inflate the links of self-compassion to negative affective states (Muris and Petrocchi 2017). However, a recent test of the factor structure of the self-compassion scale across 20 diverse samples comparing six-factor, one-factor and bifactor models found that a bifactor model reflecting compassionate and reduced uncompassionate self-responding had poor fit in all samples, whereas a single-factor model had the best fit, explaining 95% of the item variance (Neff et al. 2018). Accordingly, conceptual reasons related to emotion regulation rather than methodological reasons related to scale validity may explain why negative but not positive affect explains the link between self-compassion and bedtime procrastination.
The current research extends previous findings suggesting that self-compassion may promote healthy behaviours due in part to its links to adaptive emotions, which can serve as a self-regulation resource that facilitates the practice of health behaviours (Sirois 2015d; Sirois et al. 2015a). Whereas previous research has examined the levels of positive and negative affect associated with self-compassion in relation to health behaviours, which are suggestive of healthy emotion regulation, this is an initial study to test whether emotion regulation, and specifically cognitive reappraisal, explains the healthy affective states linking self-compassion to health behaviours. Cognitive reappraisal involves "construing a potentially emotion eliciting situation in a way that changes its emotional impact" (Gross and John 2003, p. 349). When viewed from an emotion regulation perspective, the current findings are consistent with the notion that self-compassionate people cognitively reappraise challenges and potential stressors as less stressful and upsetting, and in this respect downregulate their negative emotions (Finlay-Jones 2017). This in turn may reduce the need to engage in bedtime procrastination as a means to repair negative mood.
The current research also extends our understanding of bedtime procrastination by situating it within the context of emotion regulation, and in relation to one particular emotion regulation strategy, cognitive reappraisal. Viewing our findings from the reverse (people low in self-compassion tend to use less cognitive reappraisal, experience higher levels of negative mood and engage in more bedtime procrastination) provides some support for the proposition that bedtime procrastination can serve a mood-regulating function. Research has found that engaging in pleasurable activities, such as watching funny kitten videos, can serve a hedonic function by replenishing positive emotions and providing stress relief (e.g. Myrick 2015). But when the motivation for doing so is to avoid important tasks, such as going to bed on time, then feelings of guilt are likely to replace those of enjoyment (Myrick 2015). In this respect, bedtime procrastination can be viewed as a short-term mood regulation strategy that, similar to general procrastination, comes with a cost to health (Sirois 2015a; Sirois et al. 2003; Sirois and Pychyl 2013).
Given that the direct effects of self-compassion remained significant, our findings also indicate that self-compassionate people are less likely to procrastinate on their bedtime for reasons outside of affective states and emotion regulation. For example, it is possible that going to bed on time could simply be a demonstration of self-kindness, making sure that one gets enough sleep at night. Research on mindfulness, a component of self-compassion, further suggests that effective sleep regulation may account for the direct effects. In a study of undergraduate students, mindfulness was associated with a range of healthy sleep-regulating behaviours and indicators, including less pre-sleep arousal (Howell et al. 2010). Similarly, self-compassionate people may simply be better at preparing themselves to get a good night's sleep, which would include avoiding interacting with potential distractors such as technology, engaging books and other tasks that might promote bedtime procrastination. Further research into these and other possible alternative explanations for the direct effects noted in the current study is needed to gain a more complete understanding of the reasons why self-compassionate people tend to not engage in bedtime procrastination.
Limitations and Future Directions
Though novel, our current findings need to be considered in light of several limitations. In the current study, only one emotion regulation strategy, cognitive reappraisal, was tested. Given the cross-sectional data in both studies, it is not possible to confirm the proposed directionality of the relations among self-compassion, cognitive reappraisal, affect and bedtime procrastination. However, our model follows the recommendations of Kline (2010) and is thus predicated on previous theory on self-compassion and the role of affect in the self-regulation of health behaviours (Sirois 2015d; Sirois et al. 2015a). The results are consistent with our model and with a systematic review which found that self-compassion interventions were as effective as other health behaviour change approaches (Biber and Ellis 2017). Nonetheless, given the limitations of our study design, it is also possible that people who are sleep deprived may have difficulty being self-compassionate, and may default to being self-critical instead (Campion and Glover 2016). In addition, the links between emotion regulation and sleep-related behaviours can be reciprocal and mutually reinforcing (Gruber and Cassoff 2014; Palmer and Alfano 2017), suggesting the possibility of more complex relations between emotion regulation and bedtime procrastination than those proposed in the current study. Given the limitations of our research design, we encourage future research to rigorously test our model with longitudinal and/or experimental data to establish the temporal precedence suggested by our model.
Other issues arising from the cross-sectional design of our studies involve the retrospective reporting of bedtime procrastination and the assessment of affect at a single static time point rather than directly prior to bedtime procrastination behaviour. This approach could be viewed as less than ideal for assessing the proposed roles of affect and emotion regulation in the linkages of self-compassion to bedtime procrastination. However, an underlying assumption of both our analyses and the TASRR model (Sirois 2015b, d) is that affective states associated with personality traits and dispositional qualities, such as self-compassion, are a relatively stable resource that creates risk or resilience for the self-regulation of health behaviours. Accordingly, it could be reasoned that the one-time measure of affect in each study was generally reflective of more enduring affective states associated with self-compassion. Recent evidence demonstrating that daily affect is surprisingly stable over time, and that up to half the variance in daily affect can be attributed to trait-level qualities (Hudson et al. 2016), provides support for this proposition. Further research using a daily diary approach would nonetheless provide a more fine-grained perspective on the role of self-compassion in response to daily challenges and the role of affective responses for attenuating a propensity to engage in bedtime procrastination.
Although further research is needed, our findings provide preliminary evidence that self-compassion may be protective against bedtime procrastination. A growing evidence base indicates that self-compassion can be enhanced through training and interventions (e.g. Neff and Germer 2013), and that such methods can be effective for reducing health risk behaviours such as overeating (Adams and Leary 2007), smoking (Kelly et al. 2010), alcohol misuse (Brooks et al. 2012) and other unhealthy behaviours (Biber and Ellis 2017). Although the amount of variance in bedtime procrastination explained by self-compassion in study 1 (6%) and study 2 (9%) may seem negligible, it is nonetheless comparable to other research that found that self-compassion explained on average 6% of the variance in a measure of multiple health behaviours, which included sleep behaviour, across 15 samples. In terms of clinical significance, previous research has found that even small changes in self-compassion were effective for reducing other harmful health behaviours such as alcohol use (Brooks et al. 2012) and smoking (Kelly et al. 2010). Given this, the current findings provide preliminary evidence to suggest that cultivating self-compassion in response to the daily stresses and failures that can compromise mood and contribute to bedtime procrastination could be one way to reduce this unhealthy sleep-related behaviour.
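To put these effect sizes in familiar units, a proportion of variance explained (R²) converts to a correlation magnitude via |r| = √R². A quick check of the figures above:

```python
import math

def variance_to_r(r_squared: float) -> float:
    """Convert a proportion of variance explained (R^2) into a correlation magnitude |r|."""
    return math.sqrt(r_squared)

# Variance in bedtime procrastination explained by self-compassion (from the text):
print(round(variance_to_r(0.06), 2))  # study 1: 6% of variance -> |r| ~ 0.24
print(round(variance_to_r(0.09), 2))  # study 2: 9% of variance -> |r| ~ 0.30
```

So the "negligible-seeming" 6–9% of variance corresponds to small-to-moderate correlations of roughly .24–.30, which is consistent with the comparison to other self-compassion/health-behaviour research made above.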
Our findings provide novel evidence suggesting that self-compassionate people are less likely to engage in bedtime procrastination, due in part to their use of cognitive reappraisal, a healthy emotion regulation strategy that downregulates negative mood. This mood regulation perspective of bedtime procrastination also provides new possibilities for understanding and addressing the harmful health effects of bedtime procrastination, as research to date on the causes of bedtime procrastination has been mainly descriptive rather than explanatory. Further longitudinal and intervention-based research is necessary to provide additional insights into how responding with self-kindness and mindful acceptance to daily challenges can make it easier for people to resist engaging in pleasurable activities, such as watching funny kitten videos and playing video games into the night, and get to bed on time.
Author Contributions FS: designed and executed the studies, ran the data analyses for study 1 and wrote the majority of the paper. SN: collaborated with the design, analysis and writing of the study. DM: analysed the data and wrote the results for study 2 and collaborated in the editing of the final manuscript.
Compliance with Ethical Standards
Conflict of Interest The authors declare that they have no conflict of interest.
Informed Consent Informed consent was obtained from all individual participants included in the study.
Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the Research Ethics Board of the University of Sheffield and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Chimeric Antigen Receptor Design and Efficacy in Ovarian Cancer Treatment
Our increased understanding of tumour biology gained over the last few years has led to the development of targeted molecular therapies, e.g., vascular endothelial growth factor A (VEGF-A) antagonists and poly[ADP-ribose] polymerase 1 (PARP1) inhibitors in hereditary breast and ovarian cancer syndrome (BRCA1 and BRCA2 mutants), increasing survival and improving the quality of life. However, the majority of ovarian cancer (OC) patients still do not have access to targeted molecular therapies that would be capable of controlling their disease, especially when it is resistant or relapsed. Chimeric antigen receptors (CARs) are recombinant receptor constructs expressed on T lymphocytes or other immune cells that change their specificity and functions. Therefore, in the search for a successful solid-tumour therapy using CARs, identification of specific cell-surface antigens is crucial. Numerous in vitro and in vivo studies, as well as studies in humans, prove that targeting overexpressed molecules, such as mucin 16 (MUC16), annexin 2 (ANXA2) and receptor tyrosine-protein kinase erbB-2 (HER2/neu), causes high tumour-cell toxicity and decreased tumour burden. CARs are well tolerated, their side effects are minimal and they inhibit disease progression. However, as OC is heterogeneous in nature, with high mutation diversity and overexpression of different receptors, an individual approach must be considered to treat this type of cancer. In this publication, we present the history and status of therapies involving CAR T cells in the treatment of OC tumours, suggest potential T-cell-intrinsic determinants of response and resistance, and present extrinsic factors impacting the success of this approach.
Introduction
Ovarian cancer (OC) is the 8th most common form of cancer in women worldwide, with an estimated 295,414 new cases and 184,799 deaths annually. It has the worst prognosis and the highest mortality rate among gynaecological cancers [1]. Moreover, it is predicted that by the year 2040 the mortality rate of this specific type of cancer will rise significantly [2,3]. OC develops asymptomatically, and there is no proper screening program that would facilitate early-stage diagnosis [4,5]. OC metastasis occurs remarkably early in the disease development process. Tumour cells extrude from the primary tumour, resist the apoptosis normally induced by loss of anchorage as free-floating cells or form spheroids, then spread across the peritoneal cavity, where they proliferate and interact with mesothelial cells and adipocytes of the omentum. Due to the insidious nature of this disease, most patients with OC are diagnosed at advanced stages, mainly due to intraperitoneal spread and often the presence of distant metastases (International Federation of Gynaecology and Obstetrics, FIGO stage III/IV disease) [6]. The early detection of OC remains challenging because clinically apparent symptoms only manifest during the disease's later stages. Metastases are associated with a poor prognosis, where the typical overall survival ranges from weeks to months if untreated [7]. Only 15% of patients are diagnosed at an early stage, whereas the majority of women are diagnosed with metastatic cancers (92% vs. 29% 5-year survival rate) [8]. Patients diagnosed with stage III or IV OC have a 5-year survival rate of less than 25%, even with aggressive surgical resection and administration of first-line chemotherapy drugs [9]. Therefore, conventional treatments such as debulking surgery and combination chemotherapy are rarely able to control the progression of the tumour, and relapses are frequent.
Although up to 75% of patients achieve a good clinical response following initial therapy, almost all will ultimately relapse and eventually develop chemotherapy-refractory disease. Consequently, the OC survival rate has not changed significantly despite decades of research [10]. Therefore, we need novel and effective therapeutic methods that would ensure beneficial long-term clinical outcomes for patients with OC. Metastasis from OC can occur via the transcoelomic, haematogenous or lymphatic route. Transcoelomic metastasis, being the most common, is responsible for the highest morbidity and mortality rates among women with OC [11,12]. Malignant epithelial tumours account for 90% of all OC cases. Histopathology, immunohistochemistry and molecular genetic analysis are used to perform classification [13][14][15]. In samples obtained from patients, it is possible to distinguish high-grade serous carcinomas (HGSC), endometrioid carcinomas (EC), clear cell carcinomas (CCC), mucinous carcinomas (MC) and low-grade serous carcinomas (LGSC) [16,17]. HGSC accounts for over two-thirds of OC cases. Immune signatures define a subgroup of HGSCs with a high percentage of infiltrating lymphocytes that have better survival outcomes. On the other hand, reactive stromal signatures with high levels of desmoplasia, activated myofibroblasts, vascular endothelial cells and extracellular matrix remodelling are an indicator of the poorest prognosis [18][19][20].
OC is responsible for the dysregulation of the immune system in a multistep cooperative process. Stimulating the host to initiate an immune response against tumours requires the following: (1) a sufficient number of effector T cells must be produced in the body to recognise tumour antigens effectively; (2) these cells must identify, present and infiltrate tumour tissue; (3) they must overcome the inhibition exerted by the tumour microenvironment (TME) on the immune network; (4) they must directly identify tumour antigens and kill tumour cells; and (5) they must maintain the activity of anti-tumour T cells for a long time [21,22]. While the tumour-associated immune cells of the TME may initially be involved in restricting tumour growth, these cells are also immunosuppressive and contribute to tumour progression due to their ability to block the host anti-tumour responses and drive tumour angiogenesis [23]. Myeloid leukocytes are the main components of the immune system supporting tumour expansion, through secretion of growth factors, inhibition of anti-tumour T cells via the production of arginase, and vascularisation [24]. Tumour fibroblasts are responsible for tumour growth and cancer dissemination, while regulatory T cells cause immunosuppression of the host's system [25]. Tumour-associated macrophages (TAMs) adopt an alternative M2 phenotype, characterised by enhanced tissue-regenerative responses and local immune suppression [26]. OC cells secrete large amounts of IL-10, promoting differentiation of dendritic cells (DCs) to CD14+CD1a macrophage-like cells with reduced T-cell activation properties [27]. Although studies regarding immune cell profiles by histologic subtype are limited, researchers found that HGSC had the highest number of tissue cores stained with the pan-leukocyte marker CD45, and also more frequently FoxP3, CD25 or CD20, compared to other subtypes.
Tumours with endometrioid histology (EC) had the second-highest percentages, while clear cell (CCC) and mucinous (MC) tumours had the lowest percentages of infiltrates overall [28]. One mechanism by which several different types of immune cells are suppressed in the TME is through the production of indoleamine 2,3-dioxygenase (IDO) [29].
Currently, the first-line treatment regimen for OC patients is complete debulking surgery. Although this type of surgery constitutes the basis of OC treatment, it is rarely sufficient alone for patients with advanced disease and must be combined with chemotherapy [30]. Increased understanding of OC biology and chemoresistance gained over the last few years has led to the development of targeted molecular therapies improving survival and increasing the quality of life in OC patients (VEGF-A antagonists, PARP inhibitors in BRCA1 and BRCA2 mutants). On the other hand, the majority of OC patients still do not have access to targeted molecular therapies that would be capable of controlling their disease [31]. One of the promising strategies overcoming non-specific activity and disease relapse is immune engineering. Cell-based cancer immunotherapy represents a promising option for patients without access to treatment alternatives. This approach focuses on the use of the patient's immune system to destroy the OC cells and, ideally, on triggering an immunological memory response.
What Is CAR?
Chimeric antigen receptors (CARs) are recombinant antigen receptors expressed on T lymphocytes or other immune cells that redirect their specificity and functions [32]. The moieties used to bind antigen fall into three general categories: (a) single-chain variable fragments (scFv) derived from antibodies; (b) antigen-binding fragments (Fab) selected from libraries; or (c) natural ligands that engage their cognate receptor. The main rationale behind the use of CAR receptors in cancer immunotherapy is the rapid production of tumour-targeting T cells, bypassing the barriers and incremental kinetics of active immunisation [33]. The CAR-modified T cells acquire unique properties and act as 'living drugs' that may result in short-term, as well as long-term, effects [34]. There are four generations of CARs used in clinical practice. The core structure of all four generations is an extracellular antigen recognition region with a scFv, which is responsible for immunogenicity, affinity and specificity [35]. With scFvs, CARs can target specific cells and trigger downstream signals. The scFv fragments derive from an antigen-specific monoclonal antibody (mAb) [36]. The receptor's extracellular domain originates from the clusters of differentiation CD4 and CD8. The transmembrane domain is usually derived from CD8, CD3ζ (zeta) or CD28, and intracellular tails including members of the tumour necrosis factor (TNF) receptor family, 4-1BB (CD137), OX-40 and CD27, have been incorporated into the second and third generations [37]. The fourth generation of CARs, also called TRUCK T cells, was engineered to induce cytokine production, for example IL-2, IL-12, IL-15 or granulocyte-macrophage colony-stimulating factor (GM-CSF) [38]. The green fluorescent protein (GFP) is a protein that exhibits bright green fluorescence when exposed to light in the blue to ultraviolet range. It can be added to every generation of CAR in order to assess its specificity of binding to the target antigen via fluorescence microscopy.
Figure 1 represents the structure of CARs.
Figure 1 legend: scFv, single-chain variable fragment; spacer, fused protein fragments; CD8, transmembrane protein; OX-40 (CD134), glycoprotein receptor of the tumour necrosis factor receptor superfamily; 4-1BB, glycoprotein receptor of the tumour necrosis factor receptor superfamily; CD3ζ, protein complex and T-cell co-receptor involved in activating both cytotoxic T cells and T helper cells; FcRγ, receptor for inducing phagocytosis; CD28, a protein that provides costimulatory signals; eGFP, enhanced green fluorescent protein; IL-2, interleukin 2 (cytokine); GM-CSF, granulocyte-macrophage colony-stimulating factor.

Eshhar et al. designed structures that specifically recognise and respond to the antigen without signalling through the major histocompatibility complex (MHC) [39]. Unfortunately, first-generation CARs proved to be of limited clinical benefit because of failure in directing T-cell expansion upon repeated exposure to the antigen [40]. The 4-1BB ligand, CD137L, is found on antigen-presenting cells (APCs) and binds to 4-1BB, a TNF receptor superfamily member expressed on activated T lymphocytes [41]. Savoldo et al. proposed incorporation of one costimulatory domain, CD28 or 4-1BB, into second-generation CARs [42]. Third-generation CARs were formed by the incorporation of two or more costimulatory domains. On the other hand, their clinical effect in comparison to the second generation remains controversial [43,44]. The fourth generation was developed to redirect T cells for universal cytokine-mediated killing via the addition of an IL-12 expression cassette. IL-12 can accumulate in the target tissue and recruit a second wave of immune cells, e.g., NK cells and macrophages [45,46].
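The generational anatomy described above can be summarised in a small data model. This is an illustrative sketch only: the domain names come from the text, and the particular domain combinations shown are example configurations of each generation, not definitive constructs.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CARDesign:
    generation: int
    antigen_binding: str                        # extracellular recognition region
    signaling_domain: str                       # primary activation domain
    costimulatory_domains: List[str] = field(default_factory=list)
    cytokine_cassette: Optional[str] = None     # fourth generation ("TRUCK") only

generations = [
    CARDesign(1, "scFv", "CD3-zeta"),                        # no costimulation
    CARDesign(2, "scFv", "CD3-zeta", ["CD28"]),              # one domain: CD28 or 4-1BB
    CARDesign(3, "scFv", "CD3-zeta", ["CD28", "4-1BB"]),     # two or more domains
    CARDesign(4, "scFv", "CD3-zeta", ["CD28"], cytokine_cassette="IL-12"),
]

for g in generations:
    print(g.generation, len(g.costimulatory_domains), g.cytokine_cassette)
```

Reading the model top to bottom reproduces the progression in the text: first-generation constructs signal through CD3ζ alone, the second and third generations add one and then multiple costimulatory domains, and the fourth adds an inducible cytokine cassette such as IL-12.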
How Are CARs Engineered?
Scientists use several gene transfer methods to insert a specific gene into murine or human T lymphocytes. These methods differ in the expression levels and stability of the resulting CAR-T cells. In general, there are two main approaches in immune engineering: viral and non-viral [47]. Viral vectors have high infection rates; however, their production is costly and laborious. Moreover, there are also other challenges related to immunogenicity, carcinogenicity, low target-cell specificity and inability to transfer large genes. On the other hand, non-viral vectors can be produced relatively easily and cost-effectively. They are safe, can transfer large genes and are less toxic. Their main disadvantages are low transfection efficiency and poor transgene expression [48]. Having considered the above, in this group only the Sleeping Beauty (SB) transposon/transposase system combined with clustered regularly interspaced short palindromic repeats (CRISPR/Cas9) has great potential [49]. Table 1 below lists the characteristics of different engineering methods of CARs.
A CAR intervention example of a mechanism in patients is shown in Figure 2. The SB transposon system requires only two components: transposon DNA and a transposase enzyme [57]. The most efficient way to deliver these components into the target cell is the classical two-plasmid configuration: one plasmid for the SB transposase and the other for an artificial transposon flanked by terminal inverted repeats (TIR) on both sides of the vector [58]. The system also incorporates an origin of replication and an antibiotic resistance gene of choice. To introduce the transposon into the target cell, transfection or electroporation can be used [57].
To eliminate toxic effects and decrease the immunogenic reaction to DNA transfection, it is best to use current state-of-the-art delivery formats: messenger RNA (mRNA) or minicircle DNA (MC) [59]. The disadvantages of SB production are poor protein stability, low solubility and aggregation; however, incorporation of two mutations, I212S and C176S, into the SB100X transposase improves these features [60]. A recent study indicates that using a catalytically inactive Cas9 (dCas9) with a single-guide RNA approach may facilitate insertion of genetic material into a genome [61]. Cas9 is a dual-RNA-guided DNA endonuclease enzyme that uses base pairing to recognise and cleave target DNA with complementarity to the guide RNA, such as invading bacteriophage DNA or plasmid DNA [62]. Tethering the transposase toward a target sequence that is abundant in the human genome dramatically increases the number of possible attachment points and thus improves the chances of targeted transposition with a flexible and easy-to-use RNA-guided system [63]. Pilot studies have indicated that SB is a safe and effective tool to manufacture therapeutic CAR-T cells in cancers [64][65][66][67].
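The two-plasmid layout described above can be sketched as a toy model. This is purely illustrative: the element names follow the text, not an actual vector map, and "CAR transgene cargo" is a hypothetical placeholder for whatever payload the transposon carries.

```python
# Toy model of the classical two-plasmid Sleeping Beauty configuration.
# Element names follow the text; this is not a real vector map.
transposase_plasmid = [
    "origin of replication",
    "antibiotic resistance gene",
    "SB transposase ORF",
]
transposon_plasmid = [
    "origin of replication",
    "antibiotic resistance gene",
    "TIR (left)",
    "CAR transgene cargo",        # hypothetical payload name
    "TIR (right)",
]

# The cargo must be flanked by the terminal inverted repeats (TIRs),
# since only the TIR-flanked segment is mobilised into the target genome.
left = transposon_plasmid.index("TIR (left)")
cargo = transposon_plasmid.index("CAR transgene cargo")
right = transposon_plasmid.index("TIR (right)")
print(left < cargo < right)
```

The split into two plasmids mirrors the division of labour in the text: one plasmid supplies the transposase enzyme, while the other supplies the TIR-flanked DNA that the enzyme excises and integrates.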
In Vitro and In Vivo Studies
In recent years, CARs have proved to be particularly effective in patients with haematological cancers [68,69]. Solid tumours, however, remain challenging because of their histopathological structure, aberrant vasculature and extensive vascular leakage [70]. In CAR-T-cell therapy of OC, as of other solid malignancies, the key issues are lack of target-antigen specificity, intrinsic target-antigen heterogeneity, an immunosuppressive TME, expression of immune checkpoint molecules, ineffective intracellular trafficking/infiltration and low persistence.
The critical issue related to CAR-T-cell therapy in solid tumours is the identification of corresponding tumour target antigens absent or expressed at remarkably low levels in healthy tissue, most notably in vital organs. This problem is further amplified because each particular CAR-T cell only needs to recognise a few receptors on the target cell for full activation to occur. Selecting an ideal target antigen (i.e., overexpressed on tumour cells and with minimal or no expression on healthy tissues) will eliminate off-target effects and associated toxicity [71][72][73]. Yet another issue relates to the immunosuppressive TME. TME contains various interacting components, including tumour cells, immune cells, stromal cells, chemokines, cytokines and extracellular matrix. In solid tumours, TME exhibits strong immunosuppressive effects due to the recruitment of tumour-associated macrophages (TAMs), cancer-associated fibroblasts (CAFs), myeloid-derived suppressor cells (MDSCs) and regulatory T cells (Treg), and the production of immunosuppressive cytokines and soluble factors (e.g., IL-10, VEGF, TGFβ, indoleamine 2,3-dioxygenase and adenosine) [74].
A hypoxic, low-pH intrinsic microenvironment and the activated inhibitory pathways appear to be problematic when it comes to T-cell trafficking and T-cell infiltration into tumour sites [75]. These adaptive survival-oriented cancer changes contribute to the induction of selectively enhanced permeability and retention of lipid particles and macromolecular substances. Therefore, identification of specific cell-surface antigens is crucial in the search for a successful solid-tumour therapy using CARs. To reduce 'on-target, off-tumour' toxicity, the idea of introducing a regulated suicide gene, such as HSV-TK (herpes simplex virus I-derived thymidine kinase) or iCasp9 (caspase 9), into CARs has been proposed. Both T cells with the HSV-TK gene and those with the iCasp9 gene prevent alloreactivity and exhibit low potential immunogenicity and no acute toxicity, without compromising their functional and phenotypic characteristics [76].
Mesothelin (MSLN), a cell-surface glycoprotein, is normally expressed in mesothelial cells lining the pleura, peritoneum (and minimally on the epithelial cells of the ovaries and fallopian tubes) and pericardium, but is highly expressed in many tumour cells, including OC; its soluble form can also be found in the bloodstream of OC patients [90]. Studies have identified MSLN as a promising tumour antigen in OC, as it is overexpressed in over 75% of HGSOC tumours [91]. A number of agents, including CAR-T cells targeting MSLN, have been developed and are currently being investigated. Other preclinical studies have focused on the use of mesothelin-based CAR-T cells in subcutaneous or in situ mouse transplantation models of mesothelioma, ovarian cancer and lung cancer [77][78][79].
Recent studies reveal that annexin 2 (ANXA2) has been detected in OC. Overexpression of ANXA2 mediates extracellular matrix degradation and neovascularisation through the production of plasmin and correlates with invasion and metastasis [80]. Lately, it has been suggested that natural killer (NK) cells may be better chimeric antigen receptor drivers than T cells because of their favourable innate features, such as direct recognition and elimination of tumour cells [79]. To overcome 'on-target, off-tumour' cytotoxicity, dual-target CARs may be a better choice [92]. It has been shown that dual CARs are associated with survival times in mice up to twice as long as in single-CAR and control groups (80 vs. 40 days) [83].
The alpha isoform, folate receptor α (FRα), also known as FOLR1 or folate binding protein (FBP), is a glycosylphosphatidylinositol (GPI)-anchored membrane protein that binds folic acid with high affinity and transports folate (vitamin B9) by receptor-mediated endocytosis. FRα has been reported to be overexpressed in solid tumours such as OC (in approximately 90% of cases) but has restricted expression in normal cells. From the perspective of OC, where increasing levels of tissue FR are associated with tumour progression, it is an attractive therapeutic target [93]. Moreover, FRα expression is not affected by earlier treatment attempts using chemotherapy. Having considered that, folate receptor α is an ideal tumour antigen for targeted treatments of OC [81]. The first team that constructed CAR-T cells targeting FRα and used them to treat OC was that of Kershaw et al. The IL-8 receptor CXCR1 is a G protein-coupled receptor that binds IL-8 with high affinity. The proinflammatory cytokine IL-8, produced by tumour tissues to recruit leukocytes, is expressed at substantially higher levels in a wide range of tumour types, as well as in OC [94,95]. Whilding et al. showed that IL-8 is actually produced by many αvβ6-positive cancer cell lines, among them SKOV3, and is present in the circulation of mice engrafted with various tumour xenografts expressing this integrin [96]. It has been reported that circulating IL-8 levels correlate with disease severity and prognosis in a number of solid tumours, where it is involved in a wide range of pathological functions, including angiogenesis, support of tumour stem cell survival and recruitment of immunosuppressive myeloid cells [97]. CXCR1- and CXCR2-containing CAR-T cells showed increased migration towards IL-8 and conditioned media containing this chemokine. Furthermore, T cells that co-expressed CAR A20-28z and CXCR2 increased tumour control in vivo compared to CAR T cells deprived of this chemokine receptor, without accompanying toxicity [96]. Ng et al.
demonstrated that expression of the IL-8 receptor CXCR1, to match CAR-NK cells to a chemokine secreted by the tumour, facilitated increased migration and infiltration into the tumour and improved the anti-tumour responses of the immune effector cells in vivo [82]. Having considered various study results, it seems that PD-1 is another promising target for CAR-T therapy. Programmed cell death-1 (PD-1) is an immune checkpoint immunomodulator; its ligand, programmed death-ligand 1 (PD-L1), is highly expressed on antigen-presenting cells, hepatocytes and tumours. Engagement of PD-1 results in inhibition of antigen-specific responses of T cells, B cells and macrophages. PD-1 belongs to the CD28/cytotoxic T lymphocyte-associated antigen-4 (CTLA-4) family [98]. Antibodies that block the PD-1/PD-L1 interaction reduce signalling between co-inhibitory molecules. It is also known that T cells are able to secrete cytokines, such as IL-10 and IFN-γ, which induce the expression of inhibitory ligands such as PD-L1 on OC cells. PD-L1, in turn, binds to inhibitory receptors on the surface of T cells. This reduces the anti-tumour activity of effector T cells and restricts T-cell movement to sites of inflammation; sometimes, it results in T cells being unable to mount an effective immune response [99]. Yet another experiment (on mice with melanoma) revealed that tumour cells escape immune surveillance after upregulation of PD-L1 expression in the tumour microenvironment (TME). T-cell infiltration could, however, be made considerably greater; in order to achieve that, intraperitoneal injection of a PD-1 antibody is required, as this procedure blocks the PD-1 pathway [100]. Another study, in which patients with low PD-L1 expression were compared to patients with high PD-L1 expression, revealed that the five-year survival rate is considerably greater in the former group [101].
From a clinical perspective, the most crucial peripheral checkpoint inhibitor pathway exploited by tumour cells within the TME identified to date is the interaction between the PD-1 receptor on T cells with its programmed death-ligand 1 (PD-L1) and programmed death-ligand 2 (PD-L2) on tumour cells. Increased expression of PD-L1 on T4 CAR T cells occurred when these cells were in culture with OC cells. By contrast, EOC cell lines exhibited increased PD-L1 expression after chemotherapy treatment [102].
A particular class of targets that has had limited exploration in CARs against solid tumours is glycoepitopes. Aberrant glycosylation in cancer can be initiated via dysregulation of glycosyltransferases, altering both the function and the molecular profile of tumour cells. It is generally agreed that abnormal glycosylation of tumour cells leads to the creation of new interactions with immune cells that actively suppress anti-tumour immunity. Therefore, as tumour-specific glycosylation patterns determine the immune-suppressive nature of tumours, their interactions with endogenous carbohydrate-binding proteins (lectins) could be considered new immune checkpoints to be targeted by immunotherapy [103].
Mucin 16 (MUC16, cancer antigen 125, CA125) is mainly overexpressed in ovarian cancer (above 80% of cases), with the shedding of antigens in a soluble or membrane-bound form that can suppress humoral immunity, especially antibody-dependent cytotoxicity (ADCC). CA125, encoded by MUC16, is a well-known circulating marker of early stage disease that is monitored in the clinical course of OC patients. MUC16 is a macromolecular transmembrane mucin consisting of a single membrane-spanning domain, a cytoplasmic tail, an extensive N-terminal domain and a tandem repeat sequence, with the CA125 antigen in the MUC16 tandem repeat. The interaction between MUC16 and MSLN contributes to peritoneal adhesion and spheroid formation, thus providing a targeting strategy that is being developed to reduce peritoneal metastasis and facilitate other therapies [104]. MUC16-CAR-T cells injected intravenously or intraperitoneally are able to delay OC progression or altogether remove tumours in tumour-bearing mouse models. Therefore, again, it seems that MUC16 is an ideal antigenic target for CAR molecules [83].
L1 cell adhesion molecule (L1-CAM) is a 200-220 kDa transmembrane glycoprotein of the immunoglobulin (Ig) superfamily. It plays a vital role in neuronal cell adhesion and migration, such as neurite outgrowth guidance, axon binding, myelination, synaptogenesis and long-term potentiation. The abnormal expression of L1-CAM protein is strongly correlated with the aggressive behaviour of many human malignancies. Mechanistic studies showed that forcibly altered L1-CAM expression significantly alters cell properties, including invasion, migration, proliferation and chemoresistance [105]. Hong et al. have shown that the L1-CAM is highly over-expressed in ovarian cancer, while absent in normal ovaries [84], and that its expression on tumours is also associated with poor clinical outcome [106]. The same team demonstrated that L1-CAM-specific CAR T cells allow considerable control of solid tumour growth in an in vivo ovarian cancer xenograft model that exhibited clinically significant manifestations of widespread tumour metastasis in the peritoneal cavity and massive ascites.
Human epidermal growth factor receptor 2 (HER2; also called Her-2/neu or ErbB2) is a member of the transmembrane epidermal growth factor receptor family and is one of the most studied TAAs for cancer immunotherapy. HER2 is a proto-oncogene and plays a vital role in the pathogenesis and clinical process of various tumours. In vitro and animal experiments have clearly shown that gene amplification and protein overexpression of HER2/neu play a key role in tumorigenic transformation and development of tumours [107]. Subsequent studies have shown that HER2/neu gene amplification and overexpression are associated with OC while protein expression in normal tissues is negative or very low. Overexpressed HER2/neu proteins make tumours more aggressive and are independent risk factors for poor prognosis in these cancer patients [108]. Sun et al. constructed and evaluated a novel anti-HER2 chA21 scFv-based CAR. The results of this study show that novel chA21 scFv-based, HER2-specific CAR T cells not only recognised and killed HER2+ breast and ovarian cancer cells ex vivo but also induced regression of experimental breast cancer in vivo. The data support further exploration of the HER2 CAR T-cell therapy for HER2-expressing cancers [85]. At present, HER2-specific CAR-T-cell therapy has shown good therapeutic potential in the preclinical stage. However, HER2-CAR-T-cell treatment in OC is still in the clinical experimental stage.
The follicle-stimulating hormone receptor (FSHR) is thought to be selectively expressed in women in ovarian granulosa cells and at low levels in the ovarian endothelium. This surface antigen is expressed in 50-70% of serous OCs, although its expression in other histological types of OC remains unknown. Perales-Puchalt et al. revealed that in immunocompetent mice growing syngeneic, orthotopic and aggressive ovarian tumours, fully murine FSHR-targeted T cells increased the survival without any measurable toxicity. In that study, chimeric receptors enhanced endogenous tumour-reactive T cells' ability to abrogate malignant progression upon adoptive transfer into naive recipients subsequently challenged with the same tumour [87].
There is a promising solution to prevent systemic toxicity: combining a tumour-targeting protein, e.g., NKG2D, with IL-2. Interleukin-2 is a cytokine from the common γ-chain cytokine-receptor family with many potentially useful functions, including stimulation of T cells, NK cells and immunoglobulin production [109]. NKG2D is a transmembrane protein belonging to the NKG2 family of C-type lectin-like receptors. The ligands of NKG2D are stress-induced self-proteins, entirely absent or present only at low levels on the surface of normal cells; still, they become overexpressed by infected, transformed, senescent and stressed cells [110]. Kang et al. showed that TC-1 tumour-bearing mice treated with a therapeutic HPV type 16 E7 DNA vaccine and then given the DNA construct encoding the chimeric NKG2D-Fc-IL2 protein demonstrated reduced tumour mass growth and prolonged survival. Specific delivery of IL-2 with the NKG2D-Fc system led to the expansion of tumour antigen-specific CD8+ T cells at the tumour loci and an improved therapeutic anti-tumour effect generated by the therapeutic HPV DNA vaccine [111]. Other molecules that could be combined with the NKG2D-Fc system are IL-12, IL-15 or GM-CSF [112].
Wang et al. designed a novel anti-uPAR CAR whose antigen-recognition domain uses a natural amino-terminal fragment, a part of the A chain of uPA, instead of an scFv, to construct third-generation CAR (ATF-CAR) T cells against OC cells in vitro [113]. uPAR (urokinase plasminogen activator receptor) is the receptor for uPA involved in the conversion of plasminogen to plasmin, which degrades the extracellular matrix (ECM) during tumour migration and metastasis. uPAR also affects other signals which induce tumorigenesis, tumour proliferation and adhesion, and tumour dormancy and reactivation in OC [114]. It is worth mentioning that uPAR expression in healthy cells is relatively rare and is largely confined to wound healing, tissue remodelling and the inflammatory response in some macrophages, endothelial cells and respiratory cells, which makes this receptor an excellent choice for CAR development [115]. At an effector-to-target ratio of 10:1, ATF-CAR T cells exhibited significant cytolytic activity against the uPAR-positive cell lines SKOV3, HO8910, C13K and ES-2. Moreover, they were shown to produce higher levels of Th1 cytokines [113].
The 5T4 oncofoetal antigen was first identified during a search for surface molecules shared between human trophoblasts and cancer cells, with the rationale that they may function to allow survival of the foetus as a semiallograft in the mother, or of a tumour in its host. The 5T4 is a 72-kDa transmembrane protein expressed on the placenta and a wide range of human carcinomas [116]. The 5T4 is known to be highly expressed in OC, and its expression correlates with more advanced stages of disease (FIGO stages III and IV) and with poorly differentiated tumours. Patients whose tumours express 5T4 seem to have worse progression-free and overall survival [117]. Owens et al. have shown that polyclonal lymphocytes isolated from the peripheral blood of patients with OC can be effectively redirected to target tumour cells expressing 5T4. Co-culture of CAR T cells with matched autologous tumour disaggregates resulted in antigen-specific secretion of IFN-γ. Assessment of anti-5T4 CAR T-cell efficacy in a mouse model revealed a therapeutic benefit against established ovarian tumours [88].
Several researchers have attempted to combine CAR intervention with other therapies. Wahba et al. showed that in vivo paclitaxel synergises with ErbB-targeted CAR T cells (T4) [102]. The ErbB family of proteins contains four receptor tyrosine kinases structurally related to the epidermal growth factor receptor (EGFR); signalling via the PI3-K/AKT pathway leads to increased cell proliferation and inhibition of apoptosis. Paclitaxel binds to the β-tubulin subunit and stabilises the microtubules, resulting in disruption of normal microtubule dynamics during cell division. Failure of microtubule separation during the G2/M phase blocks cell mitosis and results in apoptosis. DNA damage caused by chemotherapy leads to cleavage and activation of intracellular caspases, initiating a proteolytic cascade and eventually cell death. Reversal of apoptosis can be achieved using the pan-caspase inhibitor carbobenzoxy-valyl-alanyl-aspartyl-[O-methyl]-fluoromethylketone (Z-VAD), which binds to caspase proteases irreversibly, preventing the initiation of the proteolytic cascade. In their study, treating ovarian tumour cells with chemotherapy and Z-VAD resulted in a reversal of the anti-tumour activity observed following chemotherapy treatment. When Z-VAD was used with chemotherapy and T4 cells, there was a partial, yet significant, reversal of the reduction seen in tumour cell viability. The reversal was not complete, suggesting that caspase induction, or indeed apoptosis, was not the sole mechanism but was definitely contributing to the combination therapy's synergistic effect. Mannose-6-phosphate receptor-mediated autophagy and arrest of the cell cycle in G2/M have also been shown to be induced by chemotherapy and to contribute significantly to the synergy [102].
Clinical Trials
Using CARs has resulted in successful outcomes in hematopoietic malignancies and has inspired the introduction of similar strategies to treat solid tumours [118][119][120]. Despite encouraging results of in vitro and in vivo studies, treatment methods for solid cancers are not yet developed enough to achieve the desired results. There are a few studies that describe the potential application of CARs in the treatment strategy of patients with OC. In early clinical trials of first-generation CAR T cells for OC, safety and therapeutic efficacy were difficult to determine because of the aforementioned poor in vivo expansion and persistence of the transferred lymphocytes. For example, Wright et al. investigated whether mucin 1 variable number tandem repeat (VNTR)-stimulated mononuclear cells (M1SMC) can be given safely intraperitoneally to subjects with recurrent OC after resection and chemotherapy [121]. In the study, 7 participants underwent up to 4 cycles of treatment. Each time, patients were subjected to leukapheresis (separation of white blood cells from a blood sample) before intraperitoneal infusion of tumour-specific cytotoxic T-lymphocytes. There was no other intervention performed on the subjects. The therapy was well tolerated; the only clinical side effect was abdominal pain in one patient. Median survival was 11.5 months; one subject was free of disease at the end of the study. The reduction in the tumour marker CA-125 after the first month of immunotherapy was not statistically significant, but it became significant thereafter. Killer cells, cytokine production and memory T-lymphocytes increased after the first cycle of stimulation but plateaued or decreased after that. The percentage of NK cells inversely correlated with other immune parameters [121].
Unlike other tumours which do not typically possess physical barriers that would prevent their interactions with CAR T cells, many OC tumours have formidable barriers that render these masses inaccessible to invasion by immune cells.
The next clinical trial involved 15 patients, and the treatment was based on lentiviral-transduced chimeric antigen receptor (CAR)-modified autologous T cells redirected against mesothelin (CART-meso) cells (single infusion of 1-3 × 10⁷-10⁸ cells/m² i.v.). The most common adverse events were low-grade fatigue and nausea, observed in 47% and 40% of the group, respectively. Lymphodepletion improved the initial expansion of CART-meso cells but did not impact CART-meso cell persistence. However, researchers detected CAR DNA in tumour biopsies and ascites from several patients, suggesting CART-meso cells have infiltrating abilities. A single infusion of CART-meso cells was safe in this human study but produced minimal anti-tumour activity. The best overall response was stable disease (11/15 patients) [122]. Studies evaluating a fully human anti-mesothelin and other CARs in OC treatment are ongoing (Table 3). Table 3. Currently running studies including the application of CAR-T chimeric antigen receptors in solid cancer treatment, according to ClinicalTrials.gov.
[Table 3 columns: Study Title | Summary | Intervention | Phase | Locations; the listed trials include 'The Fourth Generation CAR-T-cell Therapy for Refractory-Relapsed OC'.]

Challenges such as the immunosuppressive character of the TME, CAR-T-cell persistence and trafficking to the tumour seem to limit CAR-T-cell efficacy in solid cancers [123]. Over the past decade, significant efforts have been made to develop CARs targeting OC. Comprehensive descriptions of promising CAR candidates have recently been published [33,[79][80][81]97,124]. Because of the immunosuppressive cells in the TME, including tumour-associated macrophages (TAMs), myeloid-derived suppressor cells (MDSCs) and regulatory T cells (Tregs), the anti-tumour immune function of OC patients is significantly attenuated. Thus, these patients have the poorest outcomes after receiving immunotherapy. Numerous CAR strategies to affect the TME have been proposed; these either directly target cell surface components supporting the tumour or combine the tumour-targeted CAR with anti-immuno-inhibitory drugs such as checkpoint inhibitors [124].
Cancer stem cells as a new target for CARs

As results of various studies suggest, a notable phenomenon in the typical clinical course of OC is stem cell-driven repopulation. From the perspective of our research and the role of cancer stem cells (CSCs) in developing and advancing solid tumour malignancies, we can suggest that this disease is particularly well-suited for CSC-directed approaches. Three theories describe the origin of CSCs: a normal stem cell, a transit-amplifying cell, or a normal progenitor cell [125]. CSCs are precursors and protoplasts of heterogeneous mature cancer cells with a huge capacity for self-renewal and are believed to be a key factor in tumour development and recurrence. It is well known that CSCs are resistant to chemotherapy and radiotherapy. In view of this fact, it seems worthwhile to focus on immunotherapy. According to much confirmed clinical data, it has been advocated as a promising strategy to control problematic CSCs [30,126].
CSCs, similarly to tissue stem cells (TSCs), have an enhanced ability to resist harmful internal and external factors. Due to an excellent ability to repair DNA damage, low proliferation rates, and upregulation of detoxification enzymes and efflux pumps, CSCs are much less sensitive to radiotherapy, cytotoxic drugs or targeted therapies. Because CSCs seek to mimic the function of their healthy counterparts, in preliminary studies researchers used the same techniques, such as ALDH1's detoxifying enzymatic activity, that were used to define stem cells in putative tissues of origin, particularly the fallopian tube and ovaries. In contrast, other studies have focused on surface proteins whose expression has previously been demonstrated on cells with stem cell properties in other types of cancer, such as CD24, CD44 and CD133, to identify CSCs in OC [127].
Ovarian cancer stem cells (OCSCs) were identified only a few years ago. Parte et al. have investigated 34 samples of OC human tissue and revealed the presence and different distribution of various CSC surface markers (CD133, CD24 and CD44), functional CSC markers (ALDH1/2) and cell proliferation (KI67) specific markers [127]. Klapdor et al. developed a novel anti-CD24 CAR targeting OC. This new third-generation anti-CD24 CAR has been shown to be highly specific and exhibits powerful cytotoxic activity against CD24-positive OC cell lines and primary cells [78]. Also, specific elimination of ovarian CSCs by anti-CD133-CAR-expressing NK92 cells offers quite an optimistic strategy that, if confirmed in vivo, should form the foundation of future clinical research aimed at preventing relapses [128].
Collected data suggest that many markers present on the CSC surface (CD133, CD44, CD47) could be used as targets for chimeric antigen receptors. Note that markers must be selected with exceptional care and be tumour-specific, not expressed by healthy cells, as otherwise severe side effects could result. Embellishing or arming CARs to be switchable (toxicity controlled by antigen dosage) could be a method of choice. Irrespective of this, OCSCs exhibit significant phenotypic and functional heterogeneity, which is vital in designing and developing targeted therapies. For this reason, it is necessary to conduct research studies on a regular basis to explain all the heterogeneity features of CSCs in OC. The same applies to the challenge of determining their association with histopathological subtypes, clinical parameters and molecular aberrations. Given the recent advances in the analysis of single cells at the genetic level, transcriptome and proteome profiling, it seems we have finally amassed enough tools and knowledge to address this issue.

Acknowledgments: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest. The funders have not participated in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
The models of cosmological inflation in the context of kinetic approximation
In this work the building of models of cosmological inflation with approximate linear dependence of the scalar field kinetic energy on the state parameter is considered. The key parameters of cosmological perturbations are also calculated.
the equation of state p = wρ is usually used. Modern experiments [4] testify that the Universe is spatially flat and that the dark energy state parameter is now w = −1 ± 0.1. The standard way of obtaining a time-dependent state parameter is the inclusion of scalar fields in the cosmological model. Under sufficiently general assumptions, within a four-dimensional model with one scalar field, quintessence corresponds to −1 < w < −1/3 and phantom models to w < −1.
Models of inflation are set by the form of the effective potential V(φ).
In this case, the potential determines the behavior of the scalar field, which rolls down to the minimum of V. The end of inflation leads to violation of the slow roll conditions; the field oscillates about the minimum and the process of reheating begins. This process includes several different stages, such as the decay of the inflaton condensate, the birth of standard model particles and their thermalization. The equations of scalar field dynamics in a flat Friedmann-Robertson-Walker Universe, in units 8πG = c = 1, are given in [3]; only two of the three equations of scalar field dynamics are independent.
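The equations themselves did not survive extraction. As a point of reference (these are the standard flat FRW scalar-field equations in the stated units, not a verbatim quotation of the paper's numbered equations), the system in question takes the form:

```latex
% Standard flat FRW dynamics for a single scalar field \varphi
% in units 8\pi G = c = 1; any two of the three imply the third.
\begin{align}
  H^2 &= \frac{1}{3}\left(\frac{\dot{\varphi}^2}{2} + V(\varphi)\right),\\
  \dot{H} &= -\frac{\dot{\varphi}^2}{2},\\
  \ddot{\varphi} &+ 3H\dot{\varphi} + \frac{dV}{d\varphi} = 0.
\end{align}
```

The redundancy of the system (only two equations independent) matches the statement in the text.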
The building of a consistent model of cosmological inflation requires fulfilment of the following conditions: the existence of a stage of accelerated expansion, which means w < −1/3; the stage of accelerated expansion ends with reheating and the subsequent birth of photons, that is, a transition to the stage of radiation domination, to which corresponds w = 1/3; and the correspondence of the obtained cosmological parameters to the observations.
The usual method for studying inflationary dynamics is the slow roll approximation, which simplifies the equations of scalar field dynamics by neglecting the kinetic energy [3].
Thus, in the case of the slow roll approximation, φ̇²/2 ≈ 0, which limits the form of the potential and provides the existence of an inflationary stage.
The energy density and the pressure for a scalar field are ρ = X + V and p = X − V, where X = φ̇²/2 is the kinetic energy of the scalar field φ. In the slow roll approximation the state parameter w ≈ −1 and the Hubble parameter H ≈ const. It is also possible to obtain exact solutions of the equations (1)-(3) by means of various methods for arbitrary potentials [5][6][7][8][9][10][11][12][13][14][15][16][17]. In the case of exact solutions, for which restrictions on the form of the potential are absent [9], a proof of the existence of a stage of accelerated expansion and of an exit from inflation, proceeding from the form of the scale factor or the state parameter, is required. In this work the possibility of solving the scalar field dynamical equations without restrictions on the form of the potential and kinetic energy, with an exit from the inflationary stage, is considered.
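The standard expressions implied here can be written out explicitly (these follow directly from the definitions of ρ and p for a canonical scalar field):

```latex
\rho = X + V(\varphi), \qquad p = X - V(\varphi), \qquad
w \equiv \frac{p}{\rho} = \frac{X - V}{X + V}, \qquad
X \equiv \frac{\dot{\varphi}^2}{2}.
```

For X ≪ V this gives w ≈ −1 and hence H ≈ const, recovering the slow-roll statement in the text.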
We will consider the kinetic energy X of the scalar field as a linear function of the state parameter, so that at w = −1 the kinetic energy X = 0, which corresponds to the accelerated expansion caused by the cosmological constant. Substituting (8) into equation (7), taking into account equation (1), one obtains the corresponding solution; for the phantom fields X < 0. Therefore, defining the kinetic energy of the scalar field in this form, which we will call the kinetic approximation, provides the accelerated expansion H ≈ const and gives a method of solving the scalar field dynamical equations by the choice of the state parameter w. One can rewrite the equations (1) and (3) in another form. Now we will consider various cases of the state parameter as a function of the Hubble parameter and of time.
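The linear dependence described in words can be made explicit. Using ρ = 3H² and ρ + p = 2X, one natural form of this relation (an inference from the surrounding text, not a verbatim equation of the paper) is:

```latex
w = \frac{p}{\rho} = -1 + \frac{2X}{3H^2}
\quad\Longrightarrow\quad
X = \frac{3}{2}\,H^2\,(1 + w).
```

This reproduces exactly the limiting cases quoted: X = 0 at w = −1 (cosmological-constant-like expansion) and X < 0 for phantom fields with w < −1.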
The state parameter as a function of the Hubble parameter
where A, B and C are arbitrary constants and the constant n takes integer values. The solutions (15)-(18) correspond to the solutions obtained in the paper [8] with another state parameter.
The stage of inflation is provided by the approximate linear dependence of the kinetic energy on the state parameter, and the exit from inflation is provided by a special choice of the parameters A, B, C and n.
The state parameter as a function of time
Now we consider solutions for the state parameter as a function of time; the relevant scale here is the wave number (in Mpc⁻¹) corresponding to the Hubble radius at the time of the matter-radiation equality.
Thus, the model parameters must be set taking into account the observational data.
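As a numerical illustration (not part of the paper), once w(t) is prescribed, the evolution of the Hubble parameter follows from the standard flat-FRW relation Ḣ = −(3/2)(1+w)H², obtained by combining Ḣ = −(ρ+p)/2 with ρ = 3H² and p = wρ. A minimal integrator, with a hypothetical choice of w(t), might look like this:

```python
# Numerical sketch (not from the paper): integrate the Hubble parameter
# for a prescribed state parameter w(t), using the standard flat-FRW
# relation dH/dt = -(3/2) * (1 + w(t)) * H^2 in units 8*pi*G = c = 1.
# The choices of w(t) below are hypothetical examples.

def dH_dt(H, w):
    """Right-hand side of dH/dt = -(3/2)(1+w)H^2."""
    return -1.5 * (1.0 + w) * H * H

def integrate_H(H0, w_of_t, t_end, n_steps=10000):
    """Classic fourth-order Runge-Kutta integration of H(t) from t=0."""
    dt = t_end / n_steps
    t, H = 0.0, H0
    for _ in range(n_steps):
        k1 = dH_dt(H, w_of_t(t))
        k2 = dH_dt(H + 0.5 * dt * k1, w_of_t(t + 0.5 * dt))
        k3 = dH_dt(H + 0.5 * dt * k2, w_of_t(t + 0.5 * dt))
        k4 = dH_dt(H + dt * k3, w_of_t(t + dt))
        H += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return H

# Cross-check against the exact solution for constant w = 1/3
# (radiation domination): H(t) = H0 / (1 + 2*H0*t).
H0 = 1.0
H_num = integrate_H(H0, lambda t: 1.0 / 3.0, t_end=5.0)
H_exact = H0 / (1.0 + 2.0 * H0 * 5.0)

# Constant w = -1 (cosmological constant) must give H = const.
H_de_sitter = integrate_H(H0, lambda t: -1.0, t_end=5.0)
```

The same integrator accepts any smooth w(t), so a model with w(t) interpolating from near −1 (inflation) to 1/3 (radiation domination) can be explored numerically before fitting parameters to the observational data mentioned above.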
Conclusion
In this work, a method of solving the scalar field dynamical equations at the inflationary stage, based on representing the kinetic energy as a linear function of the state parameter, was considered. Such a representation of the kinetic energy is introduced to provide a stage of accelerated expansion. Within the offered approach, a model of cosmological inflation for a given state parameter, with a graceful exit to the stage of radiation domination, has been constructed.

The advantage of the kinetic approximation approach over the slow roll approximation is that it accounts for the scalar field kinetic energy and admits an inflationary stage for arbitrary potentials.
Double trouble: A case of synchronous presentation of acute myeloid leukaemia and multiple myeloma
An 80-year-old patient was referred to our division for a bone marrow examination with a clinical history of pancytopenia. He had multiple co-morbidities (hypertension, benign prostatic hyperplasia, previous myocardial infarction and gastro-esophageal reflux disease) and was HIV negative. Of significance was the history of a monoclonal gammopathy of undetermined significance (MGUS), which was diagnosed 8 years prior and confirmed as still an MGUS 4 years later.
The bone marrow aspirate showed 27% blasts ( Figure 1) with some granulocytic maturation demonstrated by 13% segmented neutrophils; 8% plasma cells were counted on average, the majority of which were flame cells ( Figure 1). The good-quality trephine biopsy revealed a hypercellular bone marrow for age, with appreciable foci of blasts in areas and an interstitial increase in plasma cells. CD34 and CD138 immunohistochemistry demonstrated two abnormal populations, 40% myeloblasts and 30% plasma cells, respectively.
IgA lambda clonality was demonstrated on serum immunofixation and urine protein electrophoresis. No lytic lesions were reported on radiological examination; however, the more sensitive magnetic resonance imaging was not carried out. The presence of 30% clonal bone marrow plasma cells and a myeloma-defining event met the International Myeloma Working Group diagnostic criteria for multiple myeloma (MM) (Box 1). 1 Because of the concomitant AML, the anaemia cannot be definitively attributed to the proliferative clonal plasma cells, in which case smouldering MM would be more accurate. However, there is no accurate way to discount that the anaemia may be due to the clonal plasmacytosis, and it is likely attributable to both conditions. A diagnosis of AML with concurrent MM that had progressed from MGUS was established based on the above findings and the detailed history of documented MGUS. Unfortunately, the patient demised soon after the diagnosis was made.
Malignancies following successful chemotherapy for various malignant diseases, including multiple myeloma (MM), are well known and documented. The occurrence of acute myeloid leukaemia (AML) after chemotherapy for MM with alkylating agents is well described; however, simultaneous occurrence of the two entities with no prior therapy is extremely rare. We describe a case of an elderly patient with no previous exposure to chemotherapy, who was diagnosed with both MM and AML concurrently. Very few similar cases have been described in the literature.
Double trouble: A case of synchronous presentation of acute myeloid leukaemia and multiple myeloma
Discussion
Reactive plasmacytosis may occur as a paraneoplastic syndrome in certain malignancies, including Hodgkin and non-Hodgkin lymphomas, some carcinomas and AML after chemotherapy. It is a rare finding in cases of AML at diagnosis but has been documented, 2,3,4 and these cases usually have less than 10% plasma cells; however, rare cases with > 20% reactive plasma cells have been reported. 2 As such, a finding of bone marrow plasmacytosis in a newly diagnosed AML presents a diagnostic challenge and warrants further investigations to exclude a concomitant clonal proliferation.
A few cases of dual presentation of MM and AML have been reported in the literature. Acute myeloid leukaemia occurring after MM treatment with chemotherapy containing alkylating agents is better described, with melphalan use considered the main cause of the secondary malignancy 5 and lenalidomide also implicated. 6,7 Acute myeloid leukaemia diagnosed simultaneously with MM is much rarer, with only a few cases of AML arising in chemotherapy-naïve MM reported in the literature.
No clear mechanism underlying the development of AML and MM without prior exposure to chemotherapy has been elucidated; however, a few possibilities have been suggested. Patient- and disease-related factors likely play a significant role in the development of the dual haematological malignancies. Postulated mechanisms include a disorder of multipotent stem cells, exposure to common environmental or radiation risk factors, and repeated infections, particularly in patients with myeloma, with eventual development of a leukaemic clone. 8,9,10 In addition, MM is a slowly progressive disorder, and disease evolution from MGUS to MM is associated with an immunosuppressive milieu characterised by dysfunction of immune effector cells, loss of effective antigen expression and an increase in immunosuppressive cell types. 11 This decreased immune surveillance fosters immune escape of neoplastic cells and tumour growth, and it may result in failure to eliminate incipient leukaemic clones. 9
BOX 1: International Myeloma Working Group diagnostic criteria for multiple myeloma and smouldering multiple myeloma.
Multiple myeloma: clonal bone marrow plasma cells > 10% or plasmacytoma, PLUS one or more myeloma-defining events.
Lu-Qun et al. have speculated that multiple gene mutations may be involved in the simultaneous occurrence of the two malignancies, particularly deletions of RB-1, TP53 and 1p32, which were demonstrated in the plasma cell population of their patient. 12 The trisomy 8 and deletion 20q demonstrated by both FISH and cytogenetics in our patient likely represent the leukaemic clone, as the aspirate had fewer plasma cells than the trephine, and these abnormalities are associated with myeloid malignancies rather than plasma cell neoplasms. 13 Because of the rarity of the concurrent presentation of AML and MM, there is no established treatment modality. Since AML is the more aggressive of the two malignancies, patients are often treated with AML regimens. These typically include anthracyclines, which have some efficacy against myeloma cells in addition to their anti-leukaemic effects. Despite treatment with these regimens, these patients have a poor outcome, with a reported median survival of < 5 months. 14 A single patient has been reported to survive after successfully undergoing an allogeneic stem cell transplant. 15
Conclusion
Multiple myeloma is a disease of the elderly, with a median age at diagnosis of ~70 years, whilst the incidence of MGUS also increases with age. 13 The risk of progression to MM is 1% per year, with IgA MGUS carrying a slightly higher progression risk. Given the patient's history of MGUS, it is possible that the AML developed coincidentally in the background of the MGUS as it progressed to MM. Similar reported cases, however, highlight the possibility of yet-to-be-defined mechanisms at play, with host factors probably playing a key role.
An altered bone marrow microenvironment that provides a supportive niche for emerging neoplastic clones, together with suppression of the host immune system that allows the escape and progression of incipient clones, is beginning to emerge as a possible mechanism for the development of dual malignant neoplasms.
Synchronization of local knowledge with formal regulations supporting natural resource conservation in Bajo communities in West Muna District
The purpose of this article is to inform the reader about local knowledge supporting the conservation of marine natural resources in the Bajo community of West Muna District. The theory used for reading the data is Geertz's notion of 'from the native's point of view', applied through ethnographic methods. Data were collected by in-depth interviews and participant observation, and analyzed descriptively and qualitatively. The results show that Pamali, Karanu, and MboJanggo constitute the Bajo community's local knowledge for the conservation of marine natural resources; it is still actualized mainly by the elderly, while the younger generation has largely abandoned it because of exposure to globalization and the absence of local government appeals supporting it. Three ways to maintain this local knowledge are identified: (a) through education; (b) through the Bajo community's social networks; and (c) by synchronizing local knowledge with formal rules. In conclusion, the novelty of this study is that the transmission of local knowledge has been interrupted as the culture disappears, so the local wisdom of the Bajo community is threatened with extinction. Under the Regional Autonomy system, the local government should act as the driving force for the survival of the community's local wisdom against exposure to globalization.
Introduction
Progress in transportation, information technology, and communication media accelerates change in all aspects of human life, at the individual, community, and societal levels [1]. The international community has anticipated this through the Millennium Development Goals (MDGs) [2], and Indonesian society has anticipated it in various ways [3]. Government activities need to be supported by the conservation of natural resources [4,5]. Several studies of local-wisdom-based natural resource management practices have documented conservation principles on land, on the coast, and at sea [6,7]. The problem is that much local knowledge belonging to communities has not yet been identified, including in West Muna Regency, Southeast Sulawesi, where the Bajo tribe has long-standing, intergenerational patterns of control and utilization of coastal waters [8]. The local wisdom of the Bajo community in West Muna regarding the conservation of marine resources, known as Pamali, Karanu, and MboJanggo, can support Indonesia's vision and strategy of becoming the World Maritime Axis [9]. However, this wisdom is no longer passed down between generations, owing to the advance of contemporary technology, even though in other marine and coastal areas such knowledge remains a subject of active interest [10]. The novelty of this research lies in the theory used to read the research data, namely Geertz's 'from the native's point of view' [11].
Methods
The study was carried out in the three most disadvantaged villages in West Muna District, which have very few public facilities: Mandike Village (662 inhabitants), Pasipambangan Village (588 inhabitants), and Katela Village (660 inhabitants).
The research process was as follows: (1) open interviews with informants, guided by a list of questions compiled in the interview guidelines; (2) participant observation, in line with the research topic, of the perceptions, events, and actions of informants relating to sea management in the three villages; and (3) qualitative data analysis to answer the research problems.
As is standard in ethnography, the principal research instrument was the researchers themselves, supported by interview guidelines, observation guidelines, and documentation, and by tools and materials such as notebooks, writing instruments, maps, tape recorders, and laptops. The data were analyzed using qualitative descriptive analysis.
Data analysis was carried out in a qualitative descriptive manner to reveal the local wisdom used in determining conservation areas, conservation area boundaries, and marine conservation area maps according to the community.
Analysis of Bajo community knowledge about the sea
The Bajo community, which once lived on boats and moved nomadically across the sea, holds the following knowledge of the sea: (a) the sea as a source of life; (b) the sea as a connector rather than a separator between islands; (c) the sea as holding invaluable natural resources that need to be maintained properly; (d) the sea as sehe (friend), which should not be damaged or disturbed; (e) the sea as tabar (medicine), containing various kinds of remedies; (f) the sea as anudinta (food); (g) the sea as a weed (means of transportation); (h) the sea as patambangang (residence); (i) the sea as pamunangala'bakaraha' (source of good and bad); (j) the sea as the site of umbo ma'dilao (the ancestor of the Bajo people who controls the sea); (k) the sea as an open area (open access), freely managed by everyone.
Analysis of local knowledge potentially supporting natural resource conservation efforts
Knowledge of Pamali, Karanu, and MboJanggo includes permanent prohibitions on carrying out activities in certain areas of the sea because they are sacred places. The practice of these noble values has been degraded by several factors: first, the locations of the Pamali, Karanu, and MboJanggo areas are increasingly unclear, owing to stakeholders' ignorance; second, this local wisdom has not been supported by Village Regulations (Perdes); third, customary institutions are rarely involved in government policymaking, especially in the current implementation of village administration. The leadership of the Sandro in the Bajo community is strategic for sustaining this customary system, through public channels and through education. The public channel can run through the social networks called Kalaki and Sabe/Bela.
Analysis of the potential for synchronizing local knowledge with formal regulations
Managing marine resources through local wisdom does not mean negating the formal system; rather, it is about respecting traditional systems that are still recognized in an area by (a) not eliminating the existence of the traditional system, and (b) aligning the formal and traditional systems in area management, since both share the same goal, namely the preservation of biodiversity for the survival of life. The two systems (the formal system and the customary system based on Pamali, Karanu, and MboJanggo) can be elaborated in a collaborative scheme that is more than mere management collaboration treating the community as participant stakeholders at government events. A collaborative system is a reconstruction from a centralized management system into a community-based decentralized system in which the community's way of living is aligned with the formal system. This alignment between government institutions and the Kalaki and Sabe/Bela social networks can proceed in two ways. Vertical synchronization checks whether the statutory regulations applying in a particular field contradict one another, paying attention both to the hierarchy of regulations and to the chronology (year and number) of the statutory regulations concerned. Horizontal synchronization harmonizes draft regulations of equal rank; the harmonization of draft regulations thus covers two aspects, vertical harmonization and horizontal harmonization.
Conclusion
The inculturation of local knowledge supporting the conservation of marine natural resources in the Bajo community of West Muna District, such as Pamali, Karanu, and MboJanggo, has weakened to the point that these practices are disappearing. Re-inculturation is therefore needed through education, through synchronization with formal rules, and through the Bajo community's social networks. The local government has not cared for the conservation of marine natural resources even though the United Nations and the central government require it. The impact is that the community concepts supporting marine conservation are threatened with extinction. Under the Regional Autonomy system, the local government should act as the driving force for the revival of local community wisdom.
Semisimplicity of étale cohomology of certain Shimura varieties
Building on work of Fayad and Nekovář, we show that a certain part of the étale cohomology of some abelian-type Shimura varieties is semisimple, assuming the associated automorphic Galois representations exist and satisfy some good properties. The proof combines an abstract semisimplicity criterion of Fayad-Nekovář with the Eichler-Shimura relations.
Introduction
Let G be a reductive group over Q, and X be a conjugacy class of homomorphisms h : Res C/R G m → G R such that (G, X) is a Shimura datum. Given a compact open subgroup K ⊂ G(A f ), we can form the associated Shimura variety Sh K (G, X) = G(Q)\X × G(A f )/K, which is a complex manifold for K small enough, and has a canonical model over E, for some number field E.
Consider a representation ξ : G → GL(V ξ ) valued in complex vector spaces such that ξ(Z(Q) ∩ K) = 1, where Z is the center of G. This gives rise to a locally constant sheaf of complex vector spaces L ξ = G(Q)\V ξ × X × G(A f )/K. We now assume that G der is anisotropic, so that the Shimura variety Sh K (G, X) is compact. Similar results should hold with étale cohomology replaced with intersection cohomology of the Baily-Borel compactification Sh BB K (G, X) of the Shimura variety. We now consider the complex analytic cohomology of the tower of the Shimura variety, i.e. H i (Sh(G, X) an , L ξ ) = lim K H i (Sh K (G, X) an , L ξ ). Choose h ∈ X, and let K ∞ be the stabilizer of h in G(R). By Matsushima's formula, we have a decomposition of H i (Sh(G, X) an , L ξ ) in terms of Lie algebra cohomology, H i (Sh(G, X) an , L ξ ) = ⊕ π m(π) π f ⊗ H i (g, K ∞ ; π ∞ ⊗ ξ), where π runs through unitary automorphic representations of G(A), and m(π) is the multiplicity of π appearing in L 2 (G(Q)\G(A), ω), the space of measurable functions on G(Q)\G(A) which are square-integrable modulo center, where ω is the central character of π. Recall that π ∞ is cohomological in degree i for ξ (i.e. H i (g, K ∞ ; π ∞ ⊗ ξ) ≠ 0) if and only if the central character ω ξ of ξ satisfies ω ξ | Z(R) = ω −1 π∞ . Fix a prime l, and an isomorphism ι : Q l ∼ − → C. The representation ξ gives rise to an l-adic automorphic sheaf, in the following way. ξ gives rise to a representation ξ l : G Q l → GL(V ξ,l ) valued in Q l -vector spaces, and similarly gives rise to a sheaf L ξ,l = G(Q)\V ξ,l × X × G(A f )/K.
If we consider the étale cohomology of the tower, H i ét (Sh(G, X) Q̄ , L ξ,l ), then as a G(A f ) × Gal(Ē/E)-module it decomposes as ⊕ π f π f ⊗ V i (π f ), where V i (π f ) = Hom G(A f ) (π f , H i ét (Sh(G, X) Q̄ , L ξ,l )). We define the Galois representation ρ = ρ π f to be the action of Gal(Ē/E) on V i (π f ). The global Langlands correspondence conjectures that to the automorphic representation π we have an associated semisimple Galois representation Gal(Ē/E) → L G, and we denote by ρ̃ the composition ρ̃ : Gal(Ē/E) → L G → GL(V µ ), where V µ is the representation of highest weight µ, the minuscule cocharacter associated to the Shimura datum. When the group G is of the form Res F/Q G ′ for some connected reductive group G ′ , and F is a totally real field of degree d, one has a decomposition (perhaps after passing to a finite extension of E) ρ̃ ≃ ⊗ v ρ̃ v . Moreover, ρ̃ v should have the following form. Observe that over C, we have a decomposition G C ≃ ∏ v G ′ C indexed by the real embeddings v of F, and the cocharacter µ also decomposes as µ = ∏ v µ v . Then we should have ρ̃ v : Gal(Ē/E) → L G ′ → GL(V µv ), where here we view π as an automorphic representation of G ′ over F . The main theorem of this paper is the following: Theorem 1.0.1. Let (G, X) be a Shimura datum of abelian type such that G = Res F/Q G ′ for some connected reductive group G ′ , and totally real number field F . Let π be an automorphic representation of G(A). For all v, suppose that the Galois representation associated to π exists, and is given by ρ̃ v : Gal(Ē/E) → L G ′ → GL(V µv ). Suppose that moreover we also know that (1) ρ̃ v is strongly irreducible; (2) For all primes v ′ of E such that v ′ |l, the Hodge-Tate weights of ρ̃, viewed as a Then ρ is a semisimple representation.
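As an orienting example (our illustration, not taken from the source), the prototypical case of the conjectural picture above is G = GL 2 over Q, where Sh K (G, X) is a modular curve and the associated Galois representation is a theorem of Deligne:

```latex
% Prototype (illustration, not from the source): G = GL_2/\mathbb{Q},
% X the union of upper and lower half planes, Sh_K(G,X) a modular curve.
% To a cuspidal eigenform f of weight 2 and level N one attaches
\[
  \rho_f \colon \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
  \longrightarrow \operatorname{GL}_2(\overline{\mathbb{Q}}_\ell),
\]
% unramified at p \nmid N\ell, with characteristic polynomial of
% (geometric) Frobenius
\[
  \det\bigl(t - \rho_f(\operatorname{Frob}_p)\bigr)
  = t^2 - a_p(f)\, t + p\,\chi(p),
\]
% where a_p(f) is the T_p-eigenvalue and \chi the nebentypus. Here
% \mu(t) = \operatorname{diag}(t,1) is the minuscule cocharacter, and
% V_\mu is the standard representation of \widehat{G} = GL_2.
```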
To show this result, we first define partial Frobenius isogenies at a positive density of primes p, and then show the Eichler-Shimura congruence relations for the partial Frobenius for split groups, using results from [Lee20]. More precisely, we show the following: Theorem 1.0.2. Let (G, X) be a Shimura datum of abelian type, such that G = Res F/Q G ′ for some connected reductive group G ′ , and totally real number field F of degree d. Let p be a prime satisfying the conditions in Proposition 2.9.4. Then for all i = 1, . . . , d we have a partial Frobenius correspondence Frob p i such that H i (Frob p i ) = 0, where H i is the renormalized characteristic polynomial of the irreducible representation of Ĝ ′ i with highest weight µ i .
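For comparison (an illustration we add, not part of the source's argument), the d = 1 prototype of Theorem 1.0.2 is the classical Eichler-Shimura congruence relation on the modular curve X 0 (N):

```latex
% Classical Eichler-Shimura relation: for p \nmid N\ell, the geometric
% Frobenius on étale cohomology is annihilated by the Hecke polynomial
\[
  \operatorname{Frob}_p^2 \;-\; T_p\,\operatorname{Frob}_p \;+\; p
  \;=\; 0
  \quad \text{on } H^1_{\mathrm{\acute{e}t}}
  \bigl( X_0(N)_{\overline{\mathbb{F}}_p},
         \overline{\mathbb{Q}}_\ell \bigr),
\]
% i.e. H(\operatorname{Frob}_p) = 0 for H(t) = t^2 - T_p t + p, the
% renormalized characteristic polynomial of the standard representation
% of \widehat{G} = GL_2.
```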
This, combined with a semisimplicity criterion for Lie algebras shown in [FN19], allows us to deduce the main result. In the final section, we apply this result to some abelian-type Shimura varieties attached to similitude groups, following the construction of the automorphic Galois representations in [KS20].
Acknowledgements. Many thanks to my advisor, Mark Kisin, for introducing this problem to me, and for various helpful conversations. Thanks also to Sug Woo Shin, for suggestions about applications in Section 4.
2. Eichler-Shimura Relations
In this section, we review some key results shown in [Lee20] to prove the Eichler-Shimura relations.
2.1. p-divisible groups. For the entirety of this subsection, we fix a prime p > 2, and let G be a connected reductive group over Q p . Let k be a perfect field of characteristic p. We denote by L = W (F̄ p )[1/p] the completion of the maximal unramified extension of Q p .
Definition 2.1.1. A p-divisible group with G-structure over k consists of a p-divisible group G /k and a collection of ϕ-invariant tensors (s α,0 ) which define a reductive subgroup of GL(D(G )), such that there exists a finite free Z p -module U and an isomorphism D(G )(W (k)) ≃ U ⊗ Zp W (k) (2.1.2) such that under this isomorphism (s α,0 ) correspond to tensors (s α ) ⊂ U ⊗ . Moreover, these s α define the reductive subgroup G Zp ⊂ GL(U ).
Associated to any p-divisible group G with G-structure over F̄ p , we have a G(W (F̄ p ))-σ-conjugacy class of elements b ∈ G(L), such that under the isomorphism (2.1.2) the Frobenius on D(G )(W (F̄ p )) is given by bσ.
Let µ be the minuscule cocharacter of G such that b lies in G(W (F̄ p )) p µ G(W (F̄ p )) under the Cartan decomposition.
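To make the Cartan condition concrete, here is a standard example (ours, not the paper's), the ordinary elliptic curve case for G = GL 2 :

```latex
% Illustration (not from the source): for G = GL_2 and an ordinary
% p-divisible group of height 2 and dimension 1 (e.g. E[p^\infty] for
% an ordinary elliptic curve E over \overline{\mathbb{F}}_p), the
% Frobenius b\sigma on the Dieudonné module can be normalized so that
\[
  b \;\sim_\sigma\; \begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix}
  \;=\; p^{\mu}, \qquad \mu(t) = \operatorname{diag}(t, 1),
\]
% which indeed lies in GL_2(W)\, p^\mu\, GL_2(W) with \mu minuscule.
```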
Definition 2.1.3. Let RZ(G, b, µ) be the functor which assigns to any p-locally nilpotent smooth W-algebra R the set of isomorphism classes (X, ρ, (t α )) such that (1) (X, (t α )) is a p-divisible group over R with tensors t α ⊂ D(X) ⊗ , where (t α ) consists of morphisms of crystals t α : 1 → D(X) ⊗ over Spec(R), each of which is Frobenius equivariant; (2) ρ : G R/p → X R/p is a quasi-isogeny; (3) for some nilpotent ideal J ⊂ R containing (p), the pull-back of t α over Spec(R/J) is identified with s α under the isomorphism of isocrystals induced by ρ; (4) for some (any) formally smooth p-adic W-lift R̃ of R, endowed with the standard PD-structure on ker(R̃ → R) = p^m R̃ for some m, let (t α (R̃)) denote the R̃-section of (t α ); then the R̃-scheme classifying isomorphisms matching (t α (R̃)) and (1 ⊗ s α ) is a G W -torsor. If the group G admits a decomposition over Q p as G = G 1 × G 2 , then the associated Rapoport-Zink spaces also decompose, as the following proposition [Kim18, Thm 4.9.1] shows: there is an isomorphism RZ(G, b, µ) ≃ RZ(G 1 , b 1 , µ 1 ) × RZ(G 2 , b 2 , µ 2 ), induced by taking products of p-divisible groups and isogenies.
2.2. Partial Frobenius for Hodge type. Consider the situation where G ′ is a connected reductive group over Q, and G = Res F/Q G ′ , where F is a totally real field of degree r over Q.
Additionally, we suppose that the Shimura datum of interest is of Hodge type. Suppose that p is a prime which satisfies the following criteria: (1) p splits in F; (2) the group G has good reduction at p.
Observe that since G is a reductive group, there is some finite extension K of Q over which the group G splits. Thus, we see that there is a positive density of primes p which satisfy the above two criteria.
We can define the partial Frobenius as follows. We remark here that contrary to previous definitions of the partial Frobenius, we do not define the partial Frobenius over the universal abelian variety A over Sh K (G, X). Instead, we will define these elements only over isogeny classes, in a group theoretic way.
Observe that since p splits in F , we see that G Qp ≃ ∏ r i=1 G ′ i , and since the group G Qp is split, so too are each of the factors G ′ i . The decomposition of the group G induces a decomposition of the Rapoport-Zink space: we thus have an isomorphism of Rapoport-Zink spaces RZ(G, b, µ) ≃ ∏ r i=1 RZ(G ′ i , b i , µ i ), and given an isogeny f over a characteristic p ring R, we can decompose the isogeny as f = (f 1 , . . . , f r ). In particular, we can apply the above construction to the Frobenius isogeny, to get quasi-isogenies Frob p i for i = 1, . . . , r, and by construction we have the following relationship between the partial Frobenius and the actual Frobenius: Frob p = ∏ r i=1 Frob p i . Remark 2.2.1. Note, moreover, that if we consider Frob p as a p-power quasi-isogeny between p-divisible groups over F̄ p , represented by an element f in G(L) ≃ ∏ r i=1 G ′ i (L), then we can write f = (f 1 , . . . , f r ) with f i ∈ G ′ i (L).

2.3. The moduli space p − Isog. We again suppose that the Shimura variety is of Hodge type, and recall the constructions in [Lee20] of the associated moduli space p − Isog.
Let T be a scheme over O E,(v) , and consider any two points x, y lying in S K (G, X)(T ). For any geometric point t of T , let x t , y t be the pullback of x, y to t. From the main construction in [Kis10], we have l-adic étale and de Rham tensors (s xt,α,l ) for l ≠ p, and (s xt,α,dR ) (respectively (s yt,α,l ), (s yt,α,dR ) for y t ). Observe that k(t), the residue field at t, could be of either characteristic 0 or characteristic p. Suppose k(t) is a field of characteristic 0, i.e. it is an extension of E. Then, we also have p-adic étale tensors (s xt,α,p ), (s yt,α,p ). Otherwise, if k(t) is of characteristic p, it is an extension of κ. Similarly, we have crystalline tensors (s xt,α,0 ), (s yt,α,0 ).
We define a quasi-isogeny between x, y to be a quasi-isogeny f : A x → A y of abelian schemes over T , such that for any geometric point t, the induced quasi-isogeny f t : A xt → A yt of abelian varieties over k(t) preserves all the tensors described above.
We define a p-quasi-isogeny between x, y to be a quasi-isogeny as defined above, such that the induced isomorphism f on the rational prime-to-p Tate modules is compatible with the level structures. In particular, we see that the weak polarizations on A x , A y differ by some power of p.
Let p − Isog be the fppf -sheaf of groupoids of p-quasi-isogenies between points on S Kp (G, X).
Given a compact open subgroup K p ⊂ G(A p f ), we can define p − Isog K p in a similar way, by setting K = K p K p and considering p-quasi-isogenies between points on S K (G, X) instead. For small enough K p such that S K (G, X) is a scheme, p − Isog K p is in fact also a scheme over O E,(v) . In the following, we always assume sufficient level structure K p such that p − Isog K p is a scheme, and for notational simplicity we will simply denote this by p − Isog.
We can define projection maps s, t back to S Kp (G, X), sending a p-quasi-isogeny (x, f ) to x (respectively y). These maps s, t are proper, and surjective. Consider the closure J of the generic fiber p − Isog ⊗ E in p − Isog. We abuse notation and still denote the special fiber of J by p − Isog ⊗ κ, and consider the Q-vector space spanned by its irreducible components. We now consider the µ-ordinary locus p − Isog ord ⊗ κ. This is the subspace of p − Isog ⊗ κ which maps to the µ-ordinary locus under the map s (equivalently, t). The following argument can be extracted from [Lee20]: Proposition 2.3.1. Suppose that G is split over Q p . Then the µ-ordinary locus p − Isog ord ⊗ κ is dense in p − Isog ⊗ κ. Proof. The discussion in [Lee20, §6] shows that any irreducible component of p − Isog ⊗ κ has a dense open subset which corresponds to the Newton stratum of some unramified [b] ∈ B(G, υ). If G is split over Q p , then the only unramified element in B(G, υ) is the µ-ordinary σ-conjugacy class, since we have an isomorphism B(G, υ) ≃ B(G ad , υ ad ), which maps unramified elements to each other. From [XZ17, 4.2.11], we see that the identity element represents the basic element in B(M ad , µ ad ) if and only if [1] ∈ B(M ad , µ ad ), i.e. µ ad = 0 in π 1 (M ad ) Γ . If M ad is split, then µ ad = 0 in π 1 (M ad ) Γ implies that µ ad is a sum of coroots of M ad , and hence µ ad is either trivial or it cannot be minuscule.
2.4. Abstract Eichler-Shimura Relations. Since G Qp admits a decomposition G Qp ≃ ∏ i G ′ i , we have a corresponding decomposition of Hecke algebras H(G(Q p )//K p ) ≃ ⊗ i H(G ′ i (Q p )//K p,i ). For a quasi-split reductive group G with standard parabolic subgroup P and Levi subgroup M , we can define an algebra homomorphism Ṡ G M : H(G(Q p )//K p ) → H(M (Q p )//K M ), known as the twisted Satake homomorphism. The twisted Satake homomorphism also factors through the corresponding homomorphisms for the factors G ′ i . Consider now the representation ρ i : Ĝ → GL(V µ i ) of Ĝ with highest weight cocharacter (1, . . . , µ i , . . . , 1), where µ i is in the i-th position. Observe that ρ µ = ⊗ i ρ µ i . Define the polynomial H i (t) to be the renormalized characteristic polynomial of ρ i , with coefficients regarded as elements of the Hecke algebra via the Satake isomorphism. Note that since µ is central in M , µ i is also central in M , hence we can consider the corresponding element of the Hecke algebra of M .
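As a sanity check on these definitions, here is the familiar GL 2 computation (our illustration; the notation T p , S p for the standard unramified Hecke operators is an assumption, not from the source):

```latex
% G = GL_2/\mathbb{Q}_p, K_p = GL_2(\mathbb{Z}_p), \mu(t) = diag(t,1).
% The Satake isomorphism matches an unramified representation with a
% semisimple class diag(\alpha,\beta) in \widehat{G}(\mathbb{C}), via
\[
  \alpha + \beta \;\longleftrightarrow\; p^{-1/2}\, T_p, \qquad
  \alpha\beta \;\longleftrightarrow\; p^{-1}\, S_p .
\]
% The renormalized characteristic polynomial of the standard
% representation V_\mu is then the classical Hecke polynomial
\[
  H_{G,X}(t) \;=\; t^2 \;-\; T_p\, t \;+\; p\, S_p ,
\]
% whose coefficients lie in the subring of the Hecke algebra generated
% by the double coset of \mu(p), as in the discussion below.
```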
Similarly, since G Qp admits a decomposition, we can write For a quasi-split reductive group G with standard parabolic subgroup P and Levi subgroup M , we can define following algebra homomorphism, known as the twisted Satake homomorphisṁ The twisted Satake isomorphism also factors: we have an isomorphisṁ Consider now the representation ρ i :Ĝ → GL(V µ i ) ofĜ with highest weight cocharacter (1, . . . , µ i , . . . , 1), where µ i is in the i-th position. Observe that ρ µ = ⊗ i ρ µ i . Define the polynomial as the polynomial given by Note that since µ is central in M , µ i is also central in M , hence we can consider the element 2.5. Newton stratification. From now on, we will drop the assumption that the Shimura datumn is of Hodge type, and consider general abelian type Shimura datum. For any abelian type Shimura datumn, we will let (G 1 , X 1 ) denote the Hodge type Shimura datumn such that there exists a central isogeny f : G der 1 → G der which induces an isomorphism (G ad 1 , X ad 1 ) ≃ (G ad , X ad ). We now recall the construction of Newton strata for Shimura varieties of abelian type, as constructed in [SZ17]. Observe that for any connected reductive group G, and minuscule cocharacter υ, we have an isomorphism B(G, υ) = B(G ad , υ ad ). In [SZ17], the Newton strata is first constructed for adjoint groups, and thus we have a stratification on S K ad p (G ad , X ad ). The Newton strata for S Kp (G, X) is then defined to be the pullback of the Newton strata for S K ad p (G ad , X ad ) via the natural map S Kp (G, X) → S K ad p (G ad , X ad ). Fix a connected component X + of X, and a connected component X + 1 of X 1 , such that their images in X ad are equal to some connected component X ad,+ . Let Sh Kp (G, X) + denote the connected component of Sh Kp (G, X) containing {1} × X + , and similarly let Sh K 1,p (G 1 , X 1 ) + denote the connected component of Sh K 1,p (G 1 , X 1 ) containing {1} × X + 1 .
We observe that the Newton strata of S Kp (G, X) + and S K 1,p (G 1 , X 1 ) + are exactly those pulled back along the maps S K 1,p (G 1 , X 1 ) + → S Kp (G, X) + → S K ad p (G ad , X ad ) + ; in particular, we see that the µ 1 -ordinary locus of S K 1,p (G 1 , X 1 ) + is exactly the preimage of the µ-ordinary locus of S Kp (G, X) + .
2.6. Model for the Hecke correspondences. For general abelian type Shimura varieties, we do not have a moduli interpretation in terms of abelian varieties, and thus we do not have the general formalism of p−Isog. However, we can still define models for the Hecke correspondences, defined as follows.
Consider the Hecke correspondence C ⊂ Sh K (G, X) × Sh K (G, X) given by 1 KpgKp on the generic fiber, and let C be the closure of C in S K (G, X) × S K (G, X). For any correspondence C over S K (G, X), we will let C 0 denote the special fiber, which is a correspondence over S K (G, X) κ . We have a similar construction for Hecke correspondences for the groups G 1 , G ad . Observe that the Hecke operators for G, G ad are related as follows. Let g ad denote the image of g ∈ G(Q p ) in G ad , and consider C ad ⊂ Sh K ad (G ad , X ad ) × Sh K ad (G ad , X ad ) given by 1 K ad p g ad K ad p on the generic fiber. Let C ad also denote, by abuse of notation, the closure of C ad in S K ad (G ad , X ad ) × S K ad (G ad , X ad ), and let C ad 0 denote the special fiber.
Our key observation is the following: C ad is the image of C under the (finite) projection maps Sh K (G, X) × Sh K (G, X) → Sh K ad (G ad , X ad ) × Sh K ad (G ad , X ad ). Thus, we see that C ad 0 is the image of the projection of C 0 to a correspondence on S K ad (G ad , X ad ) κ . We can also define the µ-ordinary locus C ord 0 of C 0 to be the subspace of C 0 which maps to the µ-ordinary locus under the natural projection maps to S K (G, X). The discussion of Newton strata above shows that C ad,ord 0 is the image of the projection to S K ad (G ad , X ad ) ord κ × S K ad (G ad , X ad ) ord κ of C ord 0 , and also of C ord 1,0 .
Proposition 2.6.1. Let G be split over Q p , and suppose g ∈ G(Q p ) is such that there exists g 1 ∈ G 1 (Q p ) such that g ad = g ad 1 ∈ G ad (Q p ). Consider the correspondence C associated with 1 KpgKp as above. Then C 0 has a dense µ-ordinary locus.
Proof. Note that since we have an isomorphism B(G, µ) ≃ B(G ad , µ ad ), the condition that G is split implies that we also have exactly one unramified σ-conjugacy class in B(G 1 , µ 1 ), and thus p − Isog(G 1 , X 1 ) ⊗ κ has a dense µ-ordinary locus, by Proposition 2.3.1.
By the above arguments, we know that the Hecke correspondence C ad 0 has a dense µ-ordinary locus, since it is the image under a finite map of the Hecke correspondence C 1,0 on Sh K 1 (G 1 , X 1 ), which will have a dense µ-ordinary locus since p − Isog(G 1 , X 1 ) ⊗ κ has a dense µ-ordinary locus.
Thus, since the image of C 0 under a finite map to S K ad (G ad , X ad ) κ × S K ad (G ad , X ad ) κ has a dense µ-ordinary locus, the original Hecke correspondence C 0 must have a dense µ-ordinary locus as well.
Observe from the definition of H G,X (t) that the Hecke correspondences which appear as coefficients in H G,X (t) are closed subschemes of Hecke correspondences lying in the subring R of H(G(Q p )//K p ) generated by 1 Kpµ(p)Kp . Let µ 1 be the cocharacter of G 1,Qp associated to X 1 . Observe that we have µ ad = µ ad 1 after projecting to cocharacters of G ad . Thus, if the group G is split, then the proposition holds for the coefficients of the Hecke polynomial.

2.7. Canonical liftings of µ-ordinary points. Consider now any µ-ordinary point x ∈ S Kp (G, X)(F̄ p ). We suppose now that the point x lies in S Kp (G, X) + . Note that this is always possible up to the action of some g p ∈ G(A p f ), since by [Kis10, 2.2.5] G(A p f ) acts transitively on the set of connected components of S Kp (G, X).
Note that since the map X → X ad is injective, and takes a special point x ∈ X to a special point in X ad , if we consider the image x̃ of x̃ 1 in S Kp (G, X) + , then x̃ is a special point whose reduction mod p is the point x. Moreover, note that the cocharacter µ x̃ is determined by the map to the adjoint group µ ad x̃ , since any map G m → G is determined by the induced maps to G/G der and G ad , and since G/G der is commutative, the map G m → G/G der is constant for all elements x ∈ X. Thus, we see that the associated cocharacter µ x̃ satisfies µ x̃,Qp = µ, since µ ad = µ ad 1 . Thus, we have the following corollary: Corollary 2.7.1. Let x be a µ-ordinary point in S Kp (G, X)(F̄ p ). Then x admits a lifting to a special point x̃, and the p n -Frobenius map on x̃ is given by

We now want to show the following proposition, which is a generalization of [B02, Lemma 4.5].
Proposition 2.7.2. Let x be a µ-ordinary point in S Kp (G, X)(F p ), and letx = [gK p × h] be the lifting constructed in Corollary 2.7.1. Let U denote the unipotent radical of the parabolic subgroup of G associated to µ. Then for any u ∈ U (Q p ), we have that the mod p reductions of the points are equal.
Proof. Up to the action of some g p ∈ G(A p f ), we may assume that x ∈ S Kp (G, X) + (F p ). Observe that we have an isomorphism of root systems Φ(G, T ) = Φ(G ad , T ad ) = Φ(G 1 , T 1 ). Hence if we consider U 1 , the unipotent radical of the standard parabolic subgroup of G 1 corresponding to µ 1 , then we can identify U 1 with U . In particular, since this result is true for the lift x̃ 1 for any u ∈ U 1 (Q p ), the same is true for the action of u ∈ U (Q p ) on x̃.
2.8. We now let G̃ be the simply connected cover of G der 1 . Let Z G denote the center of the group G. Recall that we have a central isogeny Z G × G̃ → G, and thus, for a maximal torus T of G defined over Q, we have an injective map with finite cokernel, where T̃ in G̃ is a maximal torus of G̃.
In particular, observe that for any cocharacter λ ∈ X * (T ), there exists some positive integer m such that λ m lifts (up to some cocharacter in X * (Z G )) to a cocharacter ofG.
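The two observations above can be restated in display form (this is a paraphrase of the surrounding prose, not text recovered from the source; the elided maps are written out in standard notation):

```latex
% Central isogeny and the induced inclusion of cocharacter lattices:
Z_G \times \tilde{G} \longrightarrow G
\qquad\leadsto\qquad
X_*(Z_G) \oplus X_*(\tilde{T}) \hookrightarrow X_*(T)
\quad\text{(finite cokernel)}.
% Hence, for every cocharacter $\lambda \in X_*(T)$ there is an integer $m \ge 1$ with
\lambda^m = \lambda_0 \cdot \tilde{\lambda},
\qquad \lambda_0 \in X_*(Z_G),\ \tilde{\lambda} \in X_*(\tilde{T}).
```

The integer m exists precisely because the cokernel of the inclusion of cocharacter lattices is finite.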
By [MS82, 3.4], there exists a Shimura variety Sh(G ′ , X ′ ) and a map with central kernel G ′ → G 1 such that G ′der = G̃, and via the composition map G ′ → G der 1 → G der there is an isomorphism of Shimura data (G ′ad , X ′ad ) ≃ (G ad 1 , X ad 1 ) ≃ (G ad , X ad ). Now, we consider the Shimura variety Sh(G ′ , X ′ ). Since G̃ is simply connected, observe that the action of any g ′ ∈ G̃(Q p ) preserves connected components. We now choose a connected component X ′+ of X ′ which maps to X ad,+ , and let Sh(G ′ , X ′ ) + be the connected component which contains X ′+ × {1}. In particular, we see that g ′ maps Sh(G ′ , X ′ ) + back to itself.
Let g ∈ G der (Q p ) be the image of g ′ under the central isogenyG → G der . Moreover, note that on geometrically connected components we have §3.3]. Thus, we see that since the action of g ′ preserves Sh(G ′ , X ′ ) + , so too does the action of g preserve connected components of Sh K (G, X), and moreover the action of g on Sh K (G, X) + is exactly the quotient by ∆ of the action of g ′ on Sh(G ′ , X ′ ) + . If we let g 1 ∈ G der 1 (Q p ) be the image of g ′ under the central isogenyG → G der 1 , a similar result holds for the action of g 1 .
Moreover, we see from [Kis17, 3.7.10] that we have a surjective map.

2.9. Partial Frobenius for abelian type. Similar to the situation for the partial Frobenius for Shimura varieties of Hodge type, we would like to define the partial Frobenius to be the p-power quasi-isogeny represented by µ i (p). Since we are in the abelian type case, we cannot work directly with p-divisible groups. Instead, here we will define the partial Frobenius correspondence, at least over the ordinary locus. Suppose that p is a prime which satisfies the following criteria:
(1) p splits in F;
(2) the group G has good reduction at p.

2.9.1. Firstly we will assume the group G is adjoint. By [Kis17, 4.6.6], there is a Hodge type Shimura datum (G 1 , X 1 ) such that
(1) (G ad 1 , X ad 1 ) ∼ − → (G, X) and Z G 1 is a torus;
(2) if (G, X) has good reduction at p, then (G 1 , X 1 ) in (1) can be chosen to have good reduction at p, and such that E(G, X) p = E(G 1 , X 1 ) p .
Since Z G 1 is an unramified torus, by [Ama69, Corollary 2], we know that H 1 (Q p , Z G 1 ) is trivial, and thus we have a surjective map. More precisely, this tells us that the element µ i (p) ∈ G(Q p ) lifts to an element µ̃ i (p) ∈ G 1 (Q p ) for some cocharacter µ̃ i of G 1 . Now, observe that µ̃ i (p) lies in the center of M 1 , the centralizer of µ 1 . Thus, we may consider the section of p−Isog G 1 given by the image of 1 µ̃ i (p)M 1 (Zp) . This gives us a correspondence on S (G 1 , X 1 ) κ , which we project to get a correspondence on S (G, X) κ . This is the partial Frobenius correspondence Frob p i . Observe that, by construction, the product of the images of 1 µ̃ i (p)M 1 (Zp) is the image of 1 µ̃(p)M 1 (Zp) , where µ̃ is a cocharacter whose image in G(Q p ) is µ(p), which corresponds to the Frobenius over the ordinary locus.
2.9.2. More generally, if G is not adjoint, then we will consider the Hecke correspondence on S (G, X) κ given by h(1 Kpµ i (p)Kp ), and its image in S (G ad , X ad ) κ . The image is given by the Hecke correspondence h(1 K ad ). Let the Hecke polynomial be written in terms of elements A j ∈ H(G(Q p )//K p ). Let h(A j ) denote the mod p algebraic cycle in Corr(S κ , S κ ) corresponding to A j . Thus, we want to show that, where Frob p i is the correspondence defined above. The proof then follows as in [B02, Thm 4.7]. As constructed above, we let x̃ be the special point lift of x. Write x̃ = [gK × h], and write the coefficients of the Hecke polynomial A j in terms of left K p -cosets of G(Q p ). It remains for us to show that

Theorem 3.1.2. Let Γ be a profinite group, and let V, W 1 , ..., W r be non-zero vector spaces of finite dimension over Q. Let ρ : Γ → Aut Q (V ) and ρ i : Γ → Aut Q (W i ) be representations of Γ with Lie algebras g i = Lie(ρ i (Γ)), g = Lie(ρ(Γ)). We denote ḡ i = g i ⊗ Q̄, ḡ = g ⊗ Q̄. If the following three conditions hold, then the representation ρ = ρ ss is semisimple.
(1) Each ρ i is strongly irreducible (which implies that each g i is a reductiveQ-Lie algebra and each element of its centre acts on W i by a scalar).
(2) For each i = 1, . . . , r, every (equivalently, some) Cartan subalgebra h i of g i acts on W i without multiplicities (i.e., all weight spaces of h i on W i are one-dimensional).
Under these conditions on π, Kret and Shin [KS20, Theorem A] construct the Galois representation ρ π : Gal(F̄ /F ) → GSpin 2n+1 (Q̄ l ) associated to π. If we moreover assume that the Zariski closure of the image of ρ π maps onto SO 2n+1 (which should hold generically), then at all places v|∞ where the group is not compact modulo center, the associated representation ρ̃ π,v : Gal(F̄ /F ) → GSpin 2n+1 (Q̄ l ) −spin→ GL(V ) will be strongly irreducible, since it is irreducible and has connected image; hence the Zariski closure of the image of any finite index open subgroup is also SO 2n+1 , and hence irreducible.
If we moreover assume that (1) the representation π v is spin-regular at every infinite place v of F , then [KS20, Theorem C] implies π is potentially automorphic. We will further assume that π is automorphic. Moreover, spin-regularity implies that the Hodge-Tate weights of ρ̃ π,v are distinct. Under all these conditions, we may apply the Main Theorem, and we can deduce the following:

Theorem 4.0.1. Let π be a cuspidal L-algebraic automorphic representation of G(A F ), satisfying: (1) there is a finite F -place v St such that π v St is the Steinberg representation of GSp 2n (F v St ) twisted by a character.
Then the Galois module
Hom G(A f ) (π ∞ , H * et (Sh(G, X), Q l )) (which is finite-dimensional over Q l ) is semisimple. | 2021-12-14T09:03:11.066Z | 2022-06-15T00:00:00.000 | {
"year": 2022,
"sha1": "4bf8da1bf760e0aede19da52ea9280e293352d9c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d0efe631d5433e30a07af07b12a80e276845484c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
236286267 | pes2o/s2orc | v3-fos-license | Different Survival Benefit of Osimertinib in Different Sequences: A Real-World Outcome of Osimertinib Treatment in Pretreated T790M-Positive Advanced NSCLC in Taiwan
Background: To investigate the relationships among the clinical characteristics, different EGFR-TKIs, and osimertinib treatment in different treatment lines. Methods: We retrospectively screened a total of 3807 patients diagnosed between 2013 and 2019 at Kaohsiung Chang Gung Memorial Hospital. Furthermore, 98 patients with an EGFR T790M mutation on re-biopsy or liquid biopsy who received osimertinib were enrolled for analysis. Results: Among all 98 patients, the median PFS on osimertinib therapy was 10.48 months, and the median OS was 42.21 months. The OS of those who received osimertinib therapy after previous gefitinib, afatinib, or erlotinib therapy was 87.93, 49.00, and 42.00 months, respectively (P=0.006). There was a significant difference in disease control rate between those who received osimertinib treatment after previous chemotherapy (Group A) and those who received it immediately following EGFR-TKI therapy (Group B) (93.3% vs. 77.4%, P=0.029). There was also a significant difference in PFS between those who received osimertinib as a second-line treatment and those who received it as a third-line treatment (10.83 vs. 17.33 months, P=0.044). In addition, COPD tended to be a poor prognostic factor for PFS and OS. Conclusion: In this retrospective real-world analysis, it was determined that pretreatment with gefitinib and previous chemotherapy could affect the treatment outcomes of NSCLC patients treated with osimertinib. Furthermore, COPD tended to be a poor prognostic factor for PFS and OS in such patients.
Background
Lung cancer has among the highest prevalence and mortality rates of all cancers worldwide. Non-small cell lung cancer (NSCLC) accounts for about 80%-85% of all cases of lung cancer. Treatment of lung cancer is personalized according to the results of histology and molecular biology tests. Among the various target oncogenes, epidermal growth factor receptor (EGFR) mutations were among the earliest identified and remain key genetic drivers of NSCLC. EGFR mutations are present in 10% of the Caucasian population, but in 40%-50% of the Asian population, including the population of Taiwan. [1][2][3] Previous clinical trials and studies have shown that, compared with platinum-based chemotherapy regimens, EGFR-tyrosine kinase inhibitors (TKIs) produce better response rates and fewer adverse reactions. The objective response rate of first- and second-generation EGFR-TKIs is between 60% and 80%, and the median progression-free survival (PFS) duration is around 10 to 13 months. [4][5][6][7][8][9][10][11][12] When these patients experience disease progression (PD), an acquired resistance mutation, the EGFR p.Thr790Met (T790M) point mutation, develops in about 50%-70% of patients. [13][14][15] This acquired resistance mutation enhances the binding affinity of adenosine triphosphate to the EGFR kinase domain, thereby reducing the efficacy of first- and second-generation EGFR-TKIs.
Osimertinib, a third-generation EGFR-TKI, was designed to be active in non-small cell lung cancers harboring the EGFR T790M mutation. [16][17][18][19] Published clinical trials have shown that osimertinib has better efficacy in patients who undergo disease progression after first- and second-generation EGFR-TKI treatments. [16][17][18][19] AURA 3, a phase 3 clinical trial of osimertinib, also reported a better PFS with osimertinib compared to standard chemotherapy for NSCLC patients with acquired T790M mutations. [19] Therefore, re-biopsy or liquid biopsy is needed to establish the mechanism of acquired drug resistance when patients with EGFR mutations experience PD after EGFR-TKI treatment.
In this study, we evaluated the response rate, progression-free survival (PFS), and overall survival (OS) of patients who received osimertinib treatment after a first-generation EGFR-TKI (gefitinib or erlotinib) or a second-generation EGFR-TKI (afatinib). The main objective of this study was to investigate the relationships among the clinical characteristics, different EGFR-TKIs, and osimertinib treatment in different treatment lines.
Methods
The study retrospectively screened a total of 3807 patients who were diagnosed with pathologically confirmed lung cancer between January 2013 and April 2019 at Kaohsiung Chang Gung Memorial Hospital. Among these patients, there were 879 patients with inoperable EGFR mutation-positive adenocarcinoma who had received a first-generation EGFR-TKI (gefitinib or erlotinib) or a second-generation EGFR-TKI (afatinib) as the first-line therapy. Furthermore, 267 of these 879 patients who were resistant to first- or second-generation EGFR-TKIs had received a re-biopsy (including bronchoscopy, chest computed tomography guided biopsy, or video-assisted thoracoscopic surgery) and/or liquid biopsy (the Department of Pathology of Kaohsiung Chang Gung Memorial Hospital was in charge of the detection of the EGFR T790M mutation in cell-free plasma DNA) between March 2015 and December 2018. Of those patients, there were 98 patients with EGFR T790M mutation-positive adenocarcinomas who had received osimertinib therapy (80 mg per day) for at least 2 weeks since March 2016. Among these 98 patients, 91 patients were provided with treatment through the expanded access programs supported by AstraZeneca until the occurrence of disease progression or unacceptable adverse effects. All of the 98 patients who received osimertinib treatment were enrolled for analysis.
Each of these 98 patients received a chest CT scan at the initial start of osimertinib treatment and every three months thereafter to evaluate tumor responses. Brain MRI imaging and Tc-99m MDP bone scans were also performed if there were related symptoms. Progression-free survival (PFS), overall survival (OS), overall response rate (ORR), and disease control rate (DCR) were calculated to evaluate efficacy. The PFS was calculated from the time of starting osimertinib until the time of radiological progression based on RECIST (Response Evaluation Criteria in Solid Tumors) v1.1 or death, with censoring at the time of the last follow-up in the event that the patient had not progressed. The ORR was defined as the percentage of patients who presented a complete response or partial response in the first follow-up imaging study after starting osimertinib treatment, while the DCR was calculated as the percentage of patients who exhibited a complete response, partial response, or stable disease. Furthermore, overall survival was calculated as the duration from the start of osimertinib treatment until the patient expired.
Results
The demographic and clinical characteristics of the 98 patients with EGFR T790M mutation-positive adenocarcinomas who received osimertinib therapy are described in Table 1. Table 2 shows the responses to osimertinib treatment after previous therapy with a different first-line EGFR-TKI. There was no significant difference in response rate to osimertinib between the patients treated with the different first-line EGFR-TKIs. The median PFS of those who received osimertinib therapy after previous therapy with gefitinib, afatinib, or erlotinib was 12.83, 11.87, and 10.90 months, respectively (P=0.293) (Supplementary Figure 1). The median OS of those who received osimertinib therapy after previous therapy with gefitinib, afatinib, or erlotinib was 87.93, 49.00, and 42.00 months, respectively (Supplementary Figure 1); there was a significant difference in OS between the patients treated with the different first-line EGFR-TKIs (P=0.006). Table 3 shows the response, PFS, and OS results of the patients who received osimertinib treatment after previous chemotherapy (Group A) or immediately following treatment with another EGFR-TKI (Group B); there was no significant difference in OS (NA months, P=0.274) between these two groups.
Furthermore, we compared the response results for the patients treated with osimertinib as the second-line, third-line, or ≥ fourth-line therapy (Table 4). There was a borderline significant difference in median PFS among the patients treated with osimertinib as the second-line, third-line, or ≥ fourth-line therapy (10.83, 17.33, and 9.33 months, respectively, P=0.077) (Supplementary Figure 2), but there was a significant difference in median PFS between the patients treated with osimertinib as the second-line or third-line therapy (10.83 vs. 17.33 months, hazard ratio=0.51, 95% CI=0.26-0.99, P=0.044) (Supplementary Figure 3). There was no significant difference in OS among these patients (Figure 5). A significant difference was also found in the patients without brain metastasis before osimertinib treatment (i.e., patients without brain metastasis at the time osimertinib was initiated) (P=0.029, hazard ratio=0.56, 95% CI=0.33-0.95) (Supplementary Figure 6). In terms of OS, there was a significant difference only in the patients without COPD (P=0.031, hazard ratio=0.45, 95% CI=0.21-0.95). Using Cox proportional hazards regression, we determined that brain metastasis before osimertinib treatment was a poor prognostic factor for PFS, and that gefitinib as a first-line therapy and inclusion in Group A (osimertinib treatment after previous chemotherapy) were better prognostic factors for OS (Table 6). Furthermore, COPD tended to be a poor prognostic factor for PFS and OS (Table 6).
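For reference, the Cox proportional hazards model used to produce the hazard ratios above has the standard form (a textbook reminder, not a formula taken from the paper):

```latex
h(t \mid x) = h_0(t)\,\exp\!\big(\beta_1 x_1 + \cdots + \beta_k x_k\big),
\qquad
\mathrm{HR}_j = e^{\beta_j},
```

so that, for example, an HR of 0.51 for third-line versus second-line osimertinib corresponds to a 49% lower instantaneous risk of progression, all other covariates held equal.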
Discussion
In this study, we evaluated the response to osimertinib among NSCLC patients with the T790M EGFR resistance mutation following treatment with first- or second-generation EGFR-TKIs. We found that the gefitinib group had better OS (Table 2), that osimertinib treatment after previous chemotherapy (Group A) had a better response rate (Table 3), that osimertinib as the third-line treatment had better PFS than osimertinib as the second-line treatment (Table 4), that brain metastasis noted during osimertinib treatment was a poor prognostic factor for PFS, that gefitinib as a first-line therapy and inclusion in Group A (osimertinib treatment after previous chemotherapy) were better prognostic factors for OS, and that COPD tended to be a poor prognostic factor for PFS and OS (Table 6).
As shown in Table 7, in group A, 12 patients (26.67%) had brain metastasis before osimertinib; the PFS was 11.07 months in patients with brain metastasis before osimertinib versus 21.13 months in patients without. In group B, 20 patients (37.7%) had brain metastasis before osimertinib; the PFS was 10.27 months in patients with brain metastasis before osimertinib versus 11.87 months in patients without. This could explain why osimertinib as the third-line treatment had better PFS than osimertinib as the second-line treatment.
In the LUX-Lung 3 and LUX-Lung 6 trials, OS was significantly longer for patients with EGFR Del19-positive tumors in the afatinib group than in the chemotherapy group in both trials. Kuo's study measured the allele frequencies of the sensitizing EGFR mutation (AF mEGFR ) and of T790M (AF T790M ) after acquired resistance in patients treated with first- and second-generation EGFR-TKIs. In that study, the AF T790M /AF mEGFR ratio of the first-generation EGFR-TKI treatment group was significantly higher than that of the second-generation EGFR-TKI treatment group. In addition, there was a highly significant correlation between AF T790M and AF mEGFR. This could explain why osimertinib tended to have a better PFS following pretreatment with gefitinib than with afatinib in this study. In our study, data regarding the AF T790M /AF mEGFR ratio were not available due to the retrospective design. Kuo's data therefore cannot explain the better PFS following pretreatment with gefitinib than with erlotinib.
To detect T790M resistance mutations, in most studies, re-biopsy was performed when the disease progressed [13,37], and the results showed that T790M accounted for 50-60% of the resistance mechanisms. Since the cancer genome is heterogeneous, it can evolve over time and can also interact with different treatments [38]. It is unclear whether the timing of a re-biopsy or liquid biopsy affects the detection rate of T790M. However, one previous study [39] provided evidence that there is no significant association between the timing of re-biopsy and the detection rate of T790M. That study also showed that T790M can persist for a long time after progression on EGFR-TKI treatment and remains an important carcinogenic driver. In our study, the time interval between biopsies was 25.95±16.56 (1.33-99.10) months (Table 1); furthermore, the patients treated with gefitinib had a longer time interval between biopsies than the patients treated with erlotinib and afatinib. In Taiwan, gefitinib (since November 2007) was covered by national reimbursement earlier than erlotinib (since June 2008) and afatinib (since May 2014). This could explain why the gefitinib group had a longer PFS than the erlotinib and afatinib groups. Furthermore, osimertinib was approved for second-line use in 2016 and first-line use in 2019, but has been covered by national reimbursement only since April 2020. These differences in approval and national reimbursement timing could affect the outcomes among the three first-line EGFR-TKIs.
Non-small cell lung cancer is the main cause of brain metastases. [40,41] Among patients with recurrent/advanced NSCLC, brain metastases are a common cause of cancer-related morbidity and mortality. As targeted therapy continues to improve the prognosis of NSCLC patients with targetable oncogenes, [8] the control of brain metastases has become an increasingly relevant treatment problem. The first- and second-generation EGFR-TKIs (i.e., gefitinib, erlotinib, and afatinib) cannot effectively cross an intact blood-brain barrier; their cerebrospinal fluid-to-plasma ratio is as low as 0.01 to 0.003. In the AURA 3 and FLAURA studies [19,42], the PFS benefit of osimertinib was observed in patients with or without known or treated brain metastases at trial entry. Patients with brain metastases tended to have a smaller PFS benefit (PFS=15.2, 95% CI=12.1-21.4 months) than those without brain metastases (PFS=19.1, 95% CI=15.2-23.5 months) among EGFR-mutant NSCLC patients in the FLAURA study [42]. This could explain why initial brain metastasis did not influence the PFS on osimertinib, whereas brain metastasis during osimertinib treatment did, in our study.
This retrospective study has several limitations. First, it was conducted at a single medical center, such that the patient population may be biased by patient selection and referral patterns. Second, as a retrospective survey, it not only yielded incomplete data for some patients but also did not control for laboratory examinations. Third, the multiple lines of treatment before administering osimertinib may have confounded the effects. Another limitation was that genomic alterations beyond EGFR mutations were not measured in this study. Only first-generation EGFR-TKIs were enrolled for analysis in the AURA 3 trial; although both first- and second-generation EGFR-TKIs were included in our analysis, this remains a retrospective analysis. In the future, randomized controlled trials should be conducted to evaluate the PFS and OS benefit of different sequences of EGFR-TKIs.
Conclusion
We found that the gefitinib group had better OS, that osimertinib treatment after previous chemotherapy (Group A) had a better response rate, that osimertinib as the third-line treatment had a better PFS than osimertinib as the second-line treatment, that brain metastasis noted before osimertinib treatment was a poor prognostic factor for PFS, that gefitinib as a first-line treatment and inclusion in Group A (osimertinib treatment after previous chemotherapy) were better prognostic factors for OS, and that COPD tended to be a poor prognostic factor for PFS and OS. However, osimertinib is still neither easily available nor covered by national reimbursement in many countries. In our study, an alternative sequence (using chemotherapy first when osimertinib is not initially available) still showed a better PFS benefit.
Funding
This study was supported by grants from the Chang Gung Memorial Hospital (CMRPG8E1661~1663, CMRPG8F1351, CMRPG8F1491~1493, and CMPRG8H1201 to Chin-Chou Wang. CMRPG8F1441 to Chia-Cheng Tseng). The funding body had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Competing interests statement
The authors state that there are no potential conflicts of interest. * Excluding the data of osimertinib use immediately after previous EGFR-TKI therapy (n=53, Group B). ** All patients (n=98). PFS = median progression-free survival; OS = median overall survival. Group A = osimertinib treatment after previous chemotherapy; Group B = osimertinib treatment immediately following treatment with another EGFR-TKI. Brain Metastasis (OSI) = brain metastasis noted before osimertinib treatment.
"year": 2021,
"sha1": "1d21ebac8dd668eeca60b846876f22f53e7754c7",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-554803/v1.pdf?c=1631898642000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "1a7155cd35c7013e5f7606a76468ad1323e286d3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249293223 | pes2o/s2orc | v3-fos-license | Laparoscopic surgery of urachal remnants in children: 3-center experience and comparison to an open approach
Since the first description by Trondsen in 1993, laparoscopy has become the preferred method of surgery of urachal remnants in children. Some authors call it the “gold standard.” Nonetheless, the comparison with open surgery in the literature is limited to several tens of patients. In this paper, we aim to summarize our experience reporting data of a large group of patients. We conducted a retrospective analysis of anonymized data from patients who underwent surgical interventions at three clinical centers. A total of 78 boys and 33 girls (M:F 2.36:1) were included in our study. Eighty-seven of them underwent mini-invasive surgery (group 1); 24 were operated in a conventional manner (group 2). The predominant form of the urachal anomaly found was the cyst (58.5%), while an umbilical sinus was present in 47 patients (42.3%), a bladder diverticulum in 7 (6.3%), and a patent urachus in 3 cases (2.7%). The average duration of surgery was 60.7 min (20–192 min) in group 1 and 42.7 min (20–90 min) in group 2; excluding the cases with simultaneous interventions, the average duration was found to be 54.5 and 39.7 min, respectively. Twenty-nine simultaneous operations for associated pathologies were performed in 19 cases (21.8%) in our MIS group, in 8 of them (9.19%) for a preoperatively unknown associated pathology, compared to 4 simultaneous operations performed in 4 patients (16.7%) in the open surgery group. We observed intra-operative complications in 2 cases in Group 1; early postoperative complications included hematuria in 14 cases (16%). The duration of postoperative analgesia was significantly shorter in the MIS group. Laparoscopic surgery has better cosmetic results and allows for additional diagnostics and simultaneous operations that in turn lead to a shorter duration of postoperative analgesia, but has a longer duration in comparison to an open technique.
Background
Laparoscopic surgery for children with urachal abnormalities has been performed for a long time. The first case was described by Trondsen in 1993 [1]. Since then, a large number of works on minimally invasive operations have been published. In the case of urachal remnants, most authors are inclined to believe that laparoscopic surgery is the preferred method of surgical treatment in childhood. For this reason, Castanheira de Oliveira M. [2] calls it the "gold standard." In the overwhelming majority of publications, either single cases of laparoscopic operations or small groups of patients are reported [3][4][5][6][7][8]. Unfortunately, such data are not statistically significant enough.
An even smaller number of studies [4,5,8,9] has aimed at comparing laparoscopic and open surgery, and even then, the total number of enrolled patients is limited to a few tens of individuals.
Moreover, only a small number of publications pay attention to the technique of surgical intervention, as well as to the importance of the diagnostic role of laparoscopy and the possibility of performing simultaneous operations it offers.
In this paper, we aim to summarize the experience of three different centers in the surgical treatment of urachal abnormalities in children, analyzing data from a large group of patients.
To our knowledge, this is the largest group of patients operated for urachal remnants to be retrospectively analyzed and reported in the literature.
Methods
In this study, we conducted a retrospective analysis based on our medical documentation. The anonymized data of 111 children, who underwent surgical treatment for urachal remnants from 1995 to 2019 in three different centers, were analyzed. The centers are as follows:

The analysis was carried out according to the following parameters:

• Patients (demographic data)
• Type of UR
• Symptoms
• Comparison between "group 1" and "group 2"

We employed the t test for statistical analysis.
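The t test mentioned above can be sketched as follows. This is an illustrative Welch's two-sample t statistic on hypothetical operative-time data, not the authors' analysis (in practice a library routine such as `scipy.stats.ttest_ind` would also supply the p-value):

```python
import math
from statistics import mean, variance  # variance() is the sample variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical operative times in minutes for the two groups:
group1 = [55, 62, 58, 71, 60]   # e.g. laparoscopic
group2 = [40, 45, 38, 47, 42]   # e.g. open
print(f"t = {welch_t(group1, group2):.2f}")
```

A large |t| (compared against the t distribution with Welch-Satterthwaite degrees of freedom) indicates that the mean operative times of the two groups differ beyond chance.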
Technical aspects
The operations were performed in a supine position, with the operating table usually fixed horizontally. Sometimes when working near the bottom of the bladder, especially in the presence of increased bowel gaseous distention, lowering the head end of the table a little (Trendelenburg position, − 15/− 30°) became necessary. This allowed us to move aside the loops of the small intestine and obtain a better view of the operative area. Bladder catheterization was performed in all cases, in order to allow filling of the bladder during surgery.
In 84 cases, we used 3 trocars: one 10-mm for the camera and two 5-mm operative trocars in triangulation; in children less than 1 year old, a 5-mm trocar for the camera and 3-mm operative trocars were used. In two cases, an additional fourth trocar (5 mm in one case and 12 mm in another) was required. In one case, removal of a urachal cyst was performed as a simultaneous intervention during a laparoscopic nephroureterectomy. For this reason, the trocar positioning differed from those employed in interventions where urachal remnant removal constituted the main procedure.
In the Moscow hospitals, the first trocar was placed at a point located one-third of the distance from the umbilicus to the xiphoid process. The second and third trocars were placed symmetrically on both sides along the midclavicular lines or slightly more laterally. Their positioning was susceptible to variations depending on the localization and the type of UR. If necessary (such as in the presence of a diverticulum of the bladder or a large urachal cyst located in the bottom of the bladder; the latter case occurring in one enrolled patient), an additional fourth trocar for an atraumatic clamp or stapler can be placed (Fig. 1A).
In Perugia, the right midclavicular subcostal position was used by default for the pure laparoscopic technique. Following the achievement of a carboperitoneum, two other operative trocars (3 mm or 5 mm) were placed in the left upper abdominal quadrant and in the right flank (Fig. 1B). When an umbilical fistula was present, a laparoscopic-assisted excision was performed by removing the laparoscopically excised remnant through the incision practiced for fistulectomy [10].
Generally, URs can be easily identified during laparoscopy. In Moscow, all visible urachal structures were resected. Partial resection of the bladder, if technically feasible, was not performed. Clipping of the urachal structure was achieved either using bipolar coagulation and scissors and/or the Roeder loop (in the case of a thin fibrous structure) (Fig. 2) or with a stapler.
On the contrary, minimal resection of the bladder dome following the medial umbilical ligament to avoid recurrences was routinely carried out in Perugia. The bladder dome was then sutured laparoscopically.
In all centers, suturing of the parietal peritoneum was performed selectively in case of a large defect.
In the open group, either a lower median laparotomy or a Pfannenstiel incision was made, depending on the shape of the urachal remnant and the individual experience of the surgeon. Both techniques are well suited for the exposure and resection of URs.
Patients
A total of 78 boys and 33 girls (M:F 2.36:1) were included in our study. In all subgroups, a significant predominance of male patients was noted.
The patients' age distribution for each group and subgroup is shown in Fig. 3.
Type of the UR
The urachal cyst was found to be the predominant form of anomaly, occurring in 65 patients (58.5%) of the total. An umbilical sinus was present in 47 patients (42.3%). A bladder diverticulum was observed in 7 children (6.3%). A patent urachus, the rarest form of anomaly, was diagnosed in only 3 cases (2.7%) (Table 1).
Symptoms
In our sample, children with a symptomatic course outnumbered those with an asymptomatic one. An accurate description of the symptoms and their prevalence is reported in Table 2.
Surgical time
The average duration of laparoscopic surgery (group 1) was 60.7 min (range 20-192 min). The overall operative time in group 2 ranged from 20 to 90 min (mean 42.7 min). In Table 3, average operative times according to group, subgroup, and whether a simultaneous intervention was carried out are reported.
Operations for anomalies of the urachal duct only were performed in 68 children (Table 3: group 1, n). The average operative times for this group of children are reported in Table 3.
Simultaneous operations
In group 1, 29 simultaneous operations for associated pathologies were performed in 19 patients (21.8%) while in group 2, 4 simultaneous operations were performed in 4 patients (16.7%, see Table 4).
Complications
Pre-operative complications
In two cases (both of which in group 1), we observed an acute inflammation of a urachal cyst, with a clinical picture of acute abdomen. These patients required emergency surgery.
In one of the two cases, the cyst was found to be perforated and wrapped within the greater omentum (changes in the greater omentum required its partial resection).
Intra-operative complications
We observed intra-operative complications in 2 cases in group 1. In one of them, this consisted in bleeding from a UR stump, which had a wide implantation base on the urinary bladder wall.
In the second case, due to an important inflammation of the remnants with tenacious attachment to the omentum and bleeding, conversion to open technique was carried out through a Pfannenstiel incision. This clinical situation stemmed from an infected urachal cyst that had initially been drained under laparoscopic vision 30 days before and subsequently treated with antibiotic therapy.
Postoperative complications
No septic complications were observed in the postoperative period in either group.
Hematuria had its onset typically between the end of the first and the beginning of the second postoperative day. In one case, due to the presence of a significant amount of blood in urine, a child underwent cystoscopy to rule out a bladder injury on day 3 and was diagnosed with hemorrhagic cystitis.
Postoperative analgesia
Analgesia in the early postoperative period was achieved for all children using NSAIDs, usually metamizole in weight-adjusted dosage (10-15 mg/kg). In subgroups 1.1 and 1.2, after surgery, a single dose of analgesic was administered, then, during the first 2 days, pain medications were administered only as needed.
In subgroup 1.3, the following analgesic scheme was used: paracetamol 15 mg/kg i.v. intraoperatively, which was then subsequently repeated every 6-8 h for the first two postoperative days. Rescue doses with ibuprofen 5 mg/kg orally were provided.
Since laparotomy is associated with greater surgical trauma, longer use of analgesics was required in group 2, for an average of 2-4 postoperative days. Other details are shown in Table 5.
Cosmetic results
In our opinion, laparoscopy, as expected, has a significant advantage in terms of cosmetic results, as illustrated by the scars in Fig. 4A, B.
Discussion
Transition from open to laparoscopic and minimally invasive surgery has taken place for a number of reasons. First, traditional surgery, especially in older children, required wide access: a lower median laparotomy or a Pfannenstiel incision. Smaller incisions did not always provide sufficient visualization of the urachal structures. Certain technical difficulties were also associated with the intraperitoneal location of the non-obliterated portion of the urachus. Laparoscopic operations lack these disadvantages. The introduction of a laparoscopic camera through a small incision makes it possible to assess the extent of the structures to be removed, as well as to inspect the abdominal organs for combined pathology. Though the simultaneous operations in our experience mostly consisted of elective interventions that are difficult to attribute to the benefits of laparoscopic access, one should take into account the fact that in 12 patients (13.8%), 9 laparoscopic herniorrhaphies were performed and, in 4 cases, the presence of an inguinal hernia was an intraoperative finding. If we further consider 2 appendectomies (in one case with omental resection) and 2 cases of adhesion dissection, we obtain a total of 14 simultaneous interventions in 12 cases (13.8%) belonging to our MIS group 1, in 8 of them (9.2%) for a condition unknown before surgery. Using Student's criterion to analyze mean operative times, we observe a statistically significant difference between the open and the MIS groups (39.7 vs 54.5 min).
Some authors believe that treatment approaches should become more differentiated. In a paper published in 2019 by Tanaka K. et al. [8], the authors presented a comparison of the results of open and laparoscopic surgery in a sample of 30 children. The only significant difference between these approaches was the duration of the operation. In the group of children under 10 years of age, there were no cosmetic advantages of laparoscopic access, and in the group of older children, the differences in the duration of the operation were not statistically significant. In light of this, the authors recommend laparoscopic interventions in children older than 10 years.
While we consider open surgery an acceptable alternative, it is our opinion that the proposed age threshold of 10 years is quite high. In addition, our data show that the duration of required postoperative analgesia was significantly shorter in group 1 (Table 5).
As mentioned above, one of our intraoperative complications was bleeding from a stump of the UR, which had a wide base on the urinary bladder. After two Roeder loops were slipped down, the stump of the urachus was clamped with an atraumatic clamp and sutured with a continuous twisting seam. We believe that in such situations, the use of staplers should be recommended.
The underlying processes leading to hematuria remain unclear. In a few cases, hematuria occurred in situations where a bladder injury was completely excluded (3 children with umbilical sinuses), which was confirmed by reviewing the videos of the operation.
In our study, we noticed some degree of correlation between the presence of inflammatory changes in the removed part of the urachus and the occurrence of hematuria in the postoperative period: vascular turgor, hyperemia, or stiffness of the walls were noted in 7 out of 9 patients.
During laparoscopy, we repeatedly observed inflammatory alterations of varying degrees in the wall of the urachus (such as vascular turgor, Fig. 5). Usually, these children sought medical help for low-intensity episodic abdominal pain, which suggests the presence of a chronic inflammatory process.
The issue of partial omphalectomy in the presence of an umbilical sinus is currently controversial [11,12]. There were also some differences between the Moscow group and the Perugia group in their approach. Surgeons from the Moscow group proceeded from the assumption that surgically performing the highest possible intersection of the structures of the urachus (after its mobilization and traction, Fig. 6) eliminates the cause of the clinical manifestations.
According to the surgeons from the Perugia group, in the presence of an inflamed urachal sinus and detected umbilical fistula, partial central omphalectomy appears to be the best procedure to avoid the recurrence of symptoms. This way, communication with the external environment, and thus the possibility of external contaminants causing sinus infection, is eliminated. In such cases, their group performs "laparoscopically assisted" interventions with partial omphalectomy as described above.
In the Moscow group, clinical recurrence in the form of umbilical discharge was observed only in one case. Sclerotherapy with a 10% iodine solution was performed.
Whether excision of the bladder dome should be routinely performed remains to be assessed. Despite the existence of such recommendations in the literature of previous years, due to an apparent risk of tumor progression, in our recent screening study we found that the real incidence of URs in the general population had been severely underestimated [13]. In light of this, the appropriateness of such an approach should be critically reconsidered.
In conclusion, it is our opinion that laparoscopic surgery should today be considered the gold standard for urachal anomaly surgery in children of all ages. Its most notable advantages include achieving the best cosmetic result, providing the possibility of additional diagnostics and simultaneous operations, and, despite longer average operative times, allowing for a shorter duration of postoperative analgesia.
FORMULATION AND OPTIMIZATION OF VARDENAFIL HYDROCHLORIDE ORAL DISINTEGRATING TABLETS: EFFECT OF SUPERDISINTEGRANTS
One of the fruitful results of technological advancement in oral dosage forms is the orally disintegrating tablet (ODT), as ODTs disintegrate rapidly in the mouth and do not require water for administration. This work employed a mixture design approach for developing and optimizing oral disintegrating tablets of a slightly water soluble drug, vardenafil hydrochloride. A three component mixture design was used to optimize the type and concentration of superdisintegrants, crosscarmellose sodium (X1), crosspovidone (X2) and sodium starch glycolate (X3), using water soluble dextrates (Emdex®) as a filler. Disintegration time, wetting time and t90 values for all formulations ranged from 33.69 to 208.68 s, 40.42 to 209.83 s and 80.04 to 484.63 s, respectively. According to the results, the selected variables have a strong influence on the disintegration time, wetting time and t90 of the ODTs. The lowest disintegration time, wetting time and t90 were shown by the ODT formula composed of 1.72 % crosscarmellose in combination with 4.28 % crosspovidone, so this formula was chosen as the optimized formula. Stability studies also showed that the optimized formula was stable under accelerated conditions. Compared with Prosolv® ODT G2 as a ready ODT system, the selected formula showed a faster disintegration time and a higher dissolution rate. Hence, the best superdisintegrants to be used with water soluble dextrates are crosspovidone in combination with crosscarmellose sodium.
INTRODUCTION
Over the past three decades, orally disintegrating tablets (ODTs) have gained considerable attention as a preferred alternative to conventional tablets and capsules due to better patient compliance (Ganesh and Deshpande 2011). ODTs are solid dosage forms containing medicinal substances which disintegrate rapidly, usually in a matter of seconds, when placed on the tongue (Hirani et al. 2009). These dosage forms are of particular advantage in certain patient groups who have difficulty in swallowing, such as pediatric, geriatric and psychiatric patients (Sastry et al. 2000, Suresh et al. 2008). ODTs are formulated in such a way that they disperse, dissolve, or disintegrate rapidly in the oral cavity, allowing release of the medication from the dosage form without water. This is attributed to rapid water absorption into the core of the tablets, rapid disintegration of the tablets and dissolution of the water soluble tablet components, which leads to rapid dissolution of the tablets (Velmurugan and Vinushitha 2010). The dissolved medication is either swallowed or subjected to pregastric absorption, which may increase the rate and extent of drug absorption and may decrease hepatic metabolism (Van Arnum 2007).
Three main manufacturing methods are used for the different ODT technologies: freeze drying (Ghosh et al. 2011), molding (Fu et al. 2004) and compression (Wagh et al. 2010). To develop a rapidly disintegrating tablet by the direct compression method, it is necessary to find suitable excipients with good compressibility and disintegrating ability. Although superdisintegrants primarily affect the rate of disintegration, when used at high concentrations they can also affect mouth feel, tablet hardness, and friability. Thus, several factors must be considered when selecting a superdisintegrant (Abdelbary et al. 2009). The disintegration and dissolution of direct-compression tablets are based on the single or combined action of disintegrants and water-soluble excipients. In many cases, the disintegrants have a major role in the disintegration/dissolution process of rapidly disintegrating tablets made by direct compression. The choice of a suitable type and an optimal amount of disintegrant is paramount for ensuring a high disintegration rate (Dobetti 2001). The simultaneous presence of a disintegrant with a high swelling (or disintegrating) force, defined as a "disintegrating agent," and a substance with a low swelling force, defined as a "swelling agent," was claimed as the key factor for the rapid disintegration of a tablet, while also offering satisfactory physical resistance (Cousin et al. 1995).
ODT formulations have the advantages of both solid and liquid dosage forms and are useful for immediate release of the drug with improved bioavailability, enabling a reduction of the therapeutic dose while retaining almost the same pharmacological action as higher doses of conventional dosage forms, and decreasing adverse effects (Mostafa et al. 2013).
Vardenafil is an oral therapy for the treatment of erectile dysfunction. It is a selective inhibitor of cyclic guanosine monophosphate (cGMP) specific phosphodiesterase type 5 (PDE5), the most abundant PDE in the human corpus cavernosum (DrugBank 2016).
A mixture design approach was used to optimize the type and concentration of superdisintegrants in developing ODTs with a water soluble filler (dextrates). A simplex centroid mixture design was applied to optimize the concentrations of crosscarmellose sodium (X1), crosspovidone (X2) and sodium starch glycolate (X3). A special cubic model with interaction terms was derived to evaluate the effect of the three components on ODT disintegration time (Y1), wetting time (Y2) and time for 90 % release of the drug, t90 (Y3). The effects of lower and higher superdisintegrant concentrations were also investigated for the optimized formula. Furthermore, the optimized formula was subjected to accelerated stability conditions and was finally compared with Prosolv® ODT G2 as a ready ODT system.
Bulk Density
Apparent bulk density (ρb) was determined by pouring the blend into a graduated cylinder. The bulk volume (Vb) and weight of the powder (M) were determined. The bulk density was calculated using the formula (USP30-NF25 2007):
ρb = M/Vb (1)
Tapped Density
The measuring cylinder containing a known mass of blend was tapped for a fixed time. The minimum volume (Vt) occupied in the cylinder and the weight (M) of the blend were measured. The tapped density (ρt) was calculated using the formula (USP30-NF25 2007):
ρt = M/Vt (2)
Hausner ratio
The Hausner ratio is an indirect index of ease of powder flow. It is calculated by the following formula (USP30-NF25 2007):
Hausner ratio = ρt/ρb (3)
where ρt is the tapped density and ρb is the bulk density. A lower Hausner ratio (< 1.25) indicates better flow properties than a higher one (> 1.25).
Carr's index
The compressibility of the blend can be determined using Carr's compressibility index (USP30-NF25 2007):
Carr's index (%) = 100 × (ρt − ρb)/ρt (4)
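As a quick sketch, the four flow-property calculations of Eqs. (1)-(4) can be reproduced in a few lines; the mass and volume figures below are illustrative values, not measured data from this study.

```python
# Powder flow characterization per Eqs. (1)-(4); input values are illustrative.
def bulk_density(mass_g, bulk_volume_ml):
    return mass_g / bulk_volume_ml          # Eq. (1): rho_b = M / Vb

def tapped_density(mass_g, tapped_volume_ml):
    return mass_g / tapped_volume_ml        # Eq. (2): rho_t = M / Vt

def hausner_ratio(rho_t, rho_b):
    # Eq. (3); values below 1.25 indicate better flow than values above 1.25
    return rho_t / rho_b

def carrs_index(rho_t, rho_b):
    # Eq. (4): compressibility index in percent
    return 100.0 * (rho_t - rho_b) / rho_t

M, Vb, Vt = 25.0, 50.0, 42.0                # g, mL, mL (hypothetical blend)
rho_b = bulk_density(M, Vb)                 # 0.50 g/mL
rho_t = tapped_density(M, Vt)               # ~0.595 g/mL
print(round(hausner_ratio(rho_t, rho_b), 2))   # 1.19 -> acceptable flow
print(round(carrs_index(rho_t, rho_b), 1))     # 16.0 %
```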
Preformulation study by DSC
DSC studies were carried out using a Shimadzu DTA-50 Analyzer (Kyoto, Japan) to check the compatibility of the ingredients. DSC thermograms of the pure drug (vardenafil hydrochloride), crosscarmellose sodium, crosspovidone, sodium starch glycolate, dextrates, mannitol and sodium stearyl fumarate were obtained. Approximately 5 mg samples were weighed, placed in aluminum pans and heated at a rate of 10 °C/min up to 350 °C, with indium in the reference pan, under a nitrogen atmosphere. The DSC studies were performed for vardenafil hydrochloride and for physical mixtures of vardenafil hydrochloride with the investigated excipients.
Experimental design
A three component simplex centroid mixture design was used to optimize the crosscarmellose sodium, CCS (X1), crosspovidone, CP (X2) and sodium starch glycolate, SSG (X3) concentrations using a statistical package (Design-Expert® Version 7.0.0). Binary and ternary interaction terms were included in the statistical models to evaluate the effect of the three components on ODT disintegration time (Y1), wetting time (Y2) and time for 90 % drug release, t90 (Y3).
The three independent variables as well as their proportions and the analyzed responses are shown in Table (1); the matrix of the simplex centroid mixture design is represented in Table (2), while the tablet formulations are represented in Table (3).
A polynomial function is usually used to describe the response in a mixture experiment. This polynomial function represents how the components affect the response. To better study the shape of the response surface, the natural choice for a design would be one whose points are spread evenly over the whole simplex. A simplex centroid design only includes the centroid points. Since a simplex centroid design usually has fewer runs than a simplex lattice design of the same degree, a polynomial model with fewer terms should be used. A simplex centroid design can be used to fit the following model:
Y = β1X1 + β2X2 + β3X3 + β12X1X2 + β13X1X3 + β23X2X3 + β123X1X2X3
The above model is called the special cubic model. The intercept term is not included due to the correlation between the three components (their sum is 100 %).
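The special cubic model described above can be sketched as a small function. The coefficient and proportion values used in the example call are placeholders, not the fitted values (those appear later in the Results).

```python
# Minimal sketch of the three-component special cubic mixture model.
# Proportions must sum to 1, which is why the model has no intercept term.
def special_cubic(x, b):
    """x = (x1, x2, x3) summing to 1; b = (b1, b2, b3, b12, b13, b23, b123)."""
    x1, x2, x3 = x
    assert abs(x1 + x2 + x3 - 1.0) < 1e-9, "mixture proportions must sum to 1"
    b1, b2, b3, b12, b13, b23, b123 = b
    return (b1 * x1 + b2 * x2 + b3 * x3
            + b12 * x1 * x2 + b13 * x1 * x3 + b23 * x2 * x3
            + b123 * x1 * x2 * x3)

# At a pure-component vertex, the response equals that component's coefficient:
print(special_cubic((1.0, 0.0, 0.0), (10, 20, 30, 1, 2, 3, 4)))  # 10.0
```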
Tablet manufacturing
ODTs were manufactured by the direct compression method. The composition of the vardenafil hydrochloride ODT formulations is displayed in Table (3). The corresponding amounts of drug, filler (dextrates) and superdisintegrants (CCS, CP and SSG) were accurately weighed. The weighed powder excipients were transferred into a Cube Mixer KB (Erweka, S2Y, Heusenstamm, Germany) and mixed for 5 min. Thereafter, the corresponding amount of mannitol was accurately weighed and added, and the powder mixture was mixed for a further 10 min. The formula weight of sodium stearyl fumarate was then mixed with the powder in the cube mixer for 2 min. Finally, the powder was compressed into tablets using a single stroke tablet compressing machine (Royal Artist, Mumbai, India) with 6-mm diameter rounded flat punches. Tablets were collected during compression for in-process testing (weight and hardness) and were stored in airtight high-density polyethylene (HDPE) bottles pending further testing (Mostafa, Ibrahim et al. 2013).
Weight variation
Twenty tablets from each batch were individually weighed and the average weight and standard deviation were reported (Rahman et al. 2012).
Hardness
Tablet hardness was determined using hardness tester PTB 311 (Pharma test GmbH, Hainburg, Germany) for 10 tablets of each batch with known weight.The average hardness and standard deviation were reported.
Content Uniformity
Uniformity of dosage units was assessed according to the USP30-NF25 (2007) requirements. Twenty tablets were randomly selected, the average weight was calculated, and the tablets were powdered in a glass mortar. Powder equivalent to 11.85 mg of drug was weighed and dissolved in 100 ml of pH 6.8 phosphate buffer, filtered, and the drug content was analyzed using a UV spectrophotometer (Shimadzu 1800 dual beam, Kyoto, Japan) at a wavelength of 215 nm. The drug concentration was measured using the constructed standard calibration curve.
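The assay step relies on a standard calibration curve relating absorbance at 215 nm to concentration. A hedged sketch of that step is below: the standard concentrations and absorbances are invented for illustration (the actual calibration data are not given in the text), and the curve is fitted by ordinary least squares and inverted to read back a sample concentration.

```python
# Linear calibration curve A = slope * C + intercept, fitted by least squares.
# Standards below are hypothetical, not the study's calibration data.
import numpy as np

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])        # standards, ug/mL (assumed)
absorb = np.array([0.11, 0.21, 0.30, 0.41, 0.50])  # A at 215 nm (assumed)

slope, intercept = np.polyfit(conc, absorb, 1)

def conc_from_absorbance(a):
    # invert the calibration line to get concentration from a sample reading
    return (a - intercept) / slope

print(round(conc_from_absorbance(0.35), 2))  # 6.9 ug/mL for a sample A of 0.35
```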
In vitro disintegration time
The in vitro disintegration test was carried out according to the USP30-NF25 (2007) requirements for immediate release tablets. One dosage unit was put in each of the six tubes of the basket. The apparatus (Pharmatest PTZ 3E, Hainburg, Germany) was operated using distilled water as the immersion fluid, maintained at 37 ± 2 °C. The time for complete disintegration of each tablet and the standard deviation were calculated (Shoukri et al. 2009).
Wetting time and wetting ratio
Ten milliliters of distilled water containing eosin, a water soluble dye, were placed in a Petri dish of 10 cm diameter containing circular tissue paper of 10 cm diameter. One tablet was carefully placed in the center of the Petri dish and the time required for water to reach the upper surface of the tablet was noted as the wetting time. The test results are presented as the mean of three determinations ± SD (Jonwal et al. 2010). The completely wetted tablet was then weighed. The water absorption ratio, R, was determined according to the following equation:
R = 100 × (Wa − Wb)/Wb
where Wb and Wa are the tablet weights before and after water absorption, respectively.
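The water absorption ratio calculation can be sketched directly; the tablet weights below are hypothetical values for illustration only.

```python
# Water absorption ratio R = 100 * (Wa - Wb) / Wb, with hypothetical weights.
def water_absorption_ratio(w_before_g, w_after_g):
    return 100.0 * (w_after_g - w_before_g) / w_before_g

# A 100 mg tablet that weighs 185 mg after complete wetting:
print(round(water_absorption_ratio(0.100, 0.185), 1))  # 85.0 %
```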
Stability study of the optimized ODTs formula
In order to investigate the effect of storage on the optimized formula, an accelerated stability study was carried out at 40 ± 2 ºC in a humidity chamber at 75 ± 5 % RH. Samples were withdrawn after three months and evaluated for changes in drug content, hardness and disintegration time. The kinetic parameters were also calculated.
Comparing the optimized ODTs formula with Prosolv ® ODT as a ready ODT system.
The optimized vardenafil hydrochloride ODT formula was compared to Prosolv® ODT G2, a ready ODT system. Table (4) represents the ODT formula prepared with Prosolv® ODT. Prosolv® ODT G2 is composed of microcrystalline cellulose, colloidal silicon dioxide, mannitol, fructose and crospovidone manufactured using JRS Pharma's coprocessing technology. It is a high functionality excipient for orally disintegrating tablet formulation, development, and manufacture. Disintegration time, wetting time, t90 and dissolution profiles were evaluated for both the optimized formula and Prosolv® ODT.
Data obtained from experimental design
All tablet formulations were prepared according to the matrix of the design (Table 2) and the formulae given in Table (3). The vardenafil hydrochloride ODT properties (weight, hardness, Hausner ratio, Carr's index, wetting time, wetting ratio and drug content) are summarized in Table (5) and the measured responses (disintegration time "Y1", wetting time "Y2" and time for 90 % drug release "Y3") are summarized in Figure (2). All manufactured tablet formulations met the USP30-NF25 requirements for uniformity of dosage units. Acceptance values of all formulations were within the USP30-NF25 limit and drug content ranged from 98.21 ± 0.841 % to 102.82 ± 1.162 %. Furthermore, the friability of all tablets was below 0.577 %, which complies with USP requirements. Disintegration time, wetting time and t90 values for the ten formulations (Figure 2) varied from 33.69 to 208.68 s, 40.42 to 209.83 s and 80.04 to 484.63 s, respectively. These results indicate that the selected variables have a strong influence on the disintegration time, wetting time and t90 of the ODTs. The resulting equations for each response variable were as follows:
Y1 = 205.39X1 + 67.18X2 + 174.66X3 − 359.26X1X2 − 19.57X1X3 − 276.11X2X3 + 685.95X1X2X3 (8)
Y2 = 201.21X1 + 80.92X2 + 209.57X3 − 221.11X1X2 + 69.35X1X3 − 253.74X2X3 − 563.42X1X2X3 (9)
Y3 = 483.49X1 + 178.33X2 + 474.05X3 − 892.80X1X2 − 92.91X1X3 − 870.44X2X3 + 178X1X2X3 (10)
The above equations were derived by the best-fit method to describe the main effects of the process variables (X1, X2 and X3) and their interactions (X1X2, X1X3, X2X3 and X1X2X3) on the responses (Y1, Y2 and Y3). The values of the regression coefficients are associated with the effect of these variables on the response. Coefficients with more than one factor represent an interaction effect of two factors (e.g. X1X2, X1X3 and X2X3), while the interaction of the three
components (e.g. X1X2X3) represents a special cubic relationship. A positive sign reflects a synergistic effect while a negative sign stands for an antagonistic effect. From regression equations 8-10 and Figure (3), it can be concluded that the special cubic interaction (X1X2X3) had a significant (p-value < 0.05) effect on disintegration time (Y1). It also had a large but insignificant antagonistic effect on wetting time (Y2), with only a minor effect on t90 (Y3). The interaction (X1X2) had a significant antagonistic effect on disintegration time (Y1) and t90 (Y3), with little effect on wetting time (Y2), on which the interaction (X2X3) had the largest effect among the binary interactions. Moreover, the interaction (X2X3) had a noticeable significant antagonistic effect on t90 (Y3) and disintegration time (Y1), while (X1X3) had an insignificant effect on all responses. Among the individual variables, X1 had the highest synergistic effect on the three responses and X2 the least.
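As a numerical check on Eq. (8), the fitted Y1 model can be evaluated at the check point reported later in the text (X1 = 1.72 %, X2 = 4.28 %, X3 = 0 %). One assumption made here: the percentages are treated as pseudo-components normalized to the 6 % total superdisintegrant level so that they sum to 1, which reproduces the reported predicted value to within about 0.4 s.

```python
# Evaluating the fitted disintegration-time model Y1 (Eq. 8) at the check
# point X1 = 1.72 %, X2 = 4.28 %, X3 = 0 %. Assumption: the percentages are
# pseudo-components normalized to the 6 % total superdisintegrant level.
x1, x2, x3 = 1.72 / 6.0, 4.28 / 6.0, 0.0

y1 = (205.39 * x1 + 67.18 * x2 + 174.66 * x3
      - 359.26 * x1 * x2 - 19.57 * x1 * x3 - 276.11 * x2 * x3
      + 685.95 * x1 * x2 * x3)

print(round(y1, 1))  # 33.3 s, close to the reported predicted value of 33.69 s
```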
Analysis of fitted data
Response surface plots (Figure 4) graphically represent the regression equations 8-10, showing the effect of CCS (X1), CP (X2) and SSG (X3) on the disintegration time, wetting time and t90 of vardenafil hydrochloride ODTs. By visual observation of the plots, it can be seen that they have nearly the same pattern, indicating a marked decrease in disintegration time, wetting time and t90 with increasing concentration of CP, reaching their minimum values at a concentration around 4.24 % and thereafter increasing slightly up to the 6 % CP level. These results are in accordance with Prajapati and Patel (2010), who studied the different behavior of wetting time, in vitro disintegration time and cumulative % drug released of piroxicam ODTs prepared with different superdisintegrants using a water soluble filler (mannitol). They concluded that crospovidone can be successfully utilized for the preparation of ODTs. This may be because crospovidone swells by 95 % to 120 % upon contact with water; in addition, during tablet compaction the highly compressible crospovidone particles become extremely deformed. As the deformed crospovidone particles come in contact with water that is wicked into the tablet, the particles recover their normal structure and then swell, resulting in rapid volume expansion and high hydrostatic pressures that cause tablet disintegration (Balasubramaniam et al. 2008). Moreover, Zade, Kawtikwar et al. investigated the effect of the type and concentration of disintegrants (croscarmellose sodium, sodium starch glycolate and crospovidone) on tizanidine ODT properties. They concluded that tablets prepared using 5 % crospovidone had the least disintegration time among tablets prepared by the direct compression method (Zade et al.
2009). On the other hand, CCS showed increased disintegration and wetting times because it works mainly by a wicking mechanism; according to Lopez-Solis and Villafuerte-Robles, highly hygroscopic disintegrants are inhibited by other highly water consuming formula components (e.g. dextrates) (Lopez-Solis and Villafuerte-Robles 2001). Besides, water soluble fillers increase the viscosity of the penetrating fluids, which tends to reduce the effectiveness of disintegrating agents, and as they are water soluble, they are likely to dissolve rather than disintegrate (Gopinath et al. 2012).
Figure (5) represents the in vitro dissolution profiles of vardenafil hydrochloride from the different ODT formulations. It can be seen that formulae 4, 6 and 7 showed the fastest dissolution profiles, releasing the highest drug concentration after 3 minutes, followed by formulae 2 and 9, which showed slower dissolution, giving maximum drug dissolution after 4 and 5 minutes, respectively. Formulations 1, 3, 5, 8 and 10 showed the lowest dissolution rates (66.78 %, 65.46 %, 73.34 %, 83.21 % and 86.40 %, respectively) after 5 minutes. These results are in accordance with Battu, Repka et al., who studied the effect of varying concentrations of different superdisintegrants on the disintegration time and in vitro dissolution profiles of rapidly disintegrating fenoverine tablets. They found that drug release was faster from formulations containing 6 % crospovidone compared to other formulations (Battu et al. 2007).
Stability study of the optimized ODTs formula
The optimized formula was selected for studying the effect of storage on tablet physical properties and drug release. Figure (7) shows the disintegration time and hardness of the optimized ODT formula before and after storage under accelerated conditions for three months. Table (7) shows the percent of vardenafil hydrochloride remaining in the selected formulation after storage at 40 ± 2 °C and 75 ± 5 % relative humidity.
It is clear from Figure (7) that the disintegration time of the selected formula was not significantly affected by storage under accelerated conditions. On the other hand, the hardness increased significantly, from 2.25 ± 0.058 Kp to 2.77 ± 0.071 Kp. These results are in accordance with Shukla and Price, who studied the effect of moisture content on the crushing force of dextrates. They found that the crushing force of dextrates increased dramatically with increasing moisture content, because moisture improves bonding between dextrates particles (Shukla and Price 1991). Moreover, Table (7) shows that the percent of vardenafil hydrochloride remaining after 90 days was 98.28 %, and from the calculated kinetic parameters it is evident that the degradation of vardenafil hydrochloride followed a zero order reaction, based on the values of the correlation coefficient (r); the t90 for the optimized formula was 1.52 years (Samy et al. 2001), Table (8).
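A hedged sketch of the zero-order shelf-life arithmetic follows, using only the single 90-day point reported above; the paper's 1.52-year t90 presumably comes from a regression over all sampling points, so this one-point estimate is only indicative.

```python
# Zero-order shelf-life estimate from a single time point (indicative only).
# With 98.28 % remaining after 90 days, the one-point zero-order rate is
# k = (100 - 98.28) / 90 %/day; t90 is the time to lose 10 % of label claim.
k = (100.0 - 98.28) / 90.0         # zero-order rate constant, %/day
t90_days = 10.0 / k                # time for potency to fall to 90 %
print(round(t90_days / 365.0, 2))  # 1.43 years from this single point
```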
Conclusion
Vardenafil hydrochloride ODTs were successfully prepared using the direct compression method. The composition of the ODTs could be optimized using a simplex centroid mixture design so as to obtain rapid disintegration, wettability and drug dissolution along with acceptable tablet hardness and friability. Furthermore, the stability study results for the optimized batch indicate that there is no alteration after storage. Also, compared with Prosolv® ODT, the optimized formula showed a significantly lower disintegration time and a higher dissolution rate. This could enhance drug absorption and bioavailability with a rapid onset of action, resulting in improved patient compliance and convenience. It can be concluded from the results of the optimization study that ODTs containing a water soluble filler and binder such as dextrates and mannitol can be formulated successfully using the superdisintegrants crospovidone in combination with crosscarmellose sodium.
Vardenafil hydrochloride showed a sharp endothermic peak at 227 ºC corresponding to its melting point, which indicates its crystallinity and purity, Figure (1.a) (DrugDetails 2016). The DSC thermograms of croscarmellose sodium, crospovidone and sodium starch glycolate (Jagadish et al. 2010), Figures (1.b, 1.c and 1.d), showed broad endotherms at 64.68 ºC, 60.33 ºC and 63.56 ºC respectively due to melting, whereas during scanning of Emdex ® , endothermic peaks at 86.69 ºC, 148.64 ºC and 220.70 ºC were observed (Figure 1.e). The DSC thermogram of mannitol showed an endothermic peak at 169.11 ºC, while the DSC of sodium stearyl fumarate showed three endothermic peaks at 101.05 ºC, 139.14 ºC and 201 ºC and an exothermic peak at 246 ºC, Figures (1.f and 1.g), respectively. As shown in Figure (1), the physical mixture of each ingredient with the drug showed their characteristic peaks in the defined temperature ranges. The presence of all peaks indicates that all ingredients are compatible with the drug.
Figure (5): Dissolution profile of different vardenafil hydrochloride ODTs formulations.
Model validation and optimization of the formulation parameters
To validate the regression equations or model, a check point of X1 = 1.72%, X2 = 4.28% and X3 = 0% was selected. The predicted and observed values of disintegration time, wetting time and time for 90% release of the tablets for the check point were in close agreement with the values predicted by the model, Figure (6). The concentrations of CCS and CP that showed the lowest disintegration time, wetting time and t90 were chosen as the optimum concentration, as shown in Table (6). Optimization was carried out using multiple response optimization. The optimum concentration was verified from the contour plots of all responses. At this optimum concentration, the oral disintegrating tablets showed a disintegration time of 33.69 sec. (predicted) and 34.27 ± 1.913 sec. (observed), a wetting time of 70.07 sec. (predicted) and 73.12 ± 2.520 sec. (observed) and a t90 of 82.81 sec. (predicted) and 81.33 sec. (observed).
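The closeness of the predicted and observed check-point values can be quantified as a percent prediction error, |observed − predicted|/predicted × 100, using the figures reported above; the short sketch below carries out that arithmetic.

```python
# Percent prediction error for the check-point batch, computed from the
# predicted/observed pairs reported in the text.

checkpoint = {
    "disintegration time (s)": (33.69, 34.27),
    "wetting time (s)":        (70.07, 73.12),
    "t90 (s)":                 (82.81, 81.33),
}

for name, (pred, obs) in checkpoint.items():
    err = abs(obs - pred) / pred * 100.0
    print(f"{name}: predicted {pred}, observed {obs}, error {err:.2f}%")
```

All three errors stay below 5%, consistent with the close agreement claimed for the model.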
Figure (6): Comparison of predicted and observed values of disintegration time, wetting time and t90 for the check point of vardenafil hydrochloride ODTs.
Figure (8) showed the properties of both the optimized and Prosolv ® ODT tablets. It is clear from the figure that the optimized ODTs formula had a significantly lower disintegration time, wetting time and t90. Additionally, it was clear from the dissolution profiles of both formulations that the optimized formula gave a significantly higher dissolution rate than Prosolv ® ODT, Figure (9).
a Tablet weight = 100 mg
Table (5): Results of each experimental design run in simplex centroid mixture design of vardenafil hydrochloride ODTs.
a Tablet weight = 100 mg
Computation of forces arising from the linear Poisson–Boltzmann method in the domain-decomposition paradigm
The Linearized Poisson–Boltzmann (LPB) equation is a popular and widely accepted model for accounting for solvent effects in computational (bio-)chemistry. In the present article we derive the analytical forces of the domain-decomposition-based ddLPB method. We describe an efficient strategy to compute the forces and its implementation, and present numerical tests illustrating the accuracy and efficiency of the computation of the analytical forces.
Introduction
Most chemical processes and virtually all biochemical processes happen in condensed phase, a situation where the reacting part, or in general the studied part, is embedded in an environment which usually consists of a solvent. For this reason, solvation models, which take into account the effect of the environment on the part of interest (the solute), are widely used in computational chemistry and biochemistry. These models can be broadly divided into two classes, explicit solvation models and implicit (continuum) solvation models. Explicit solvation models consider the molecular representation of both the solute and the solvent, making the method more accurate, but computationally expensive and also dependent on a large set of empirical parameters (force field). On the other hand, continuum solvation models treat the solvent as a continuum, described only by a few macroscopic properties. This approach, by its nature, cannot describe specific interactions and anisotropic environments; however, it presents some large advantages: it reduces the computational cost significantly, requires fewer parameters, and implicitly takes into account the sampling over the degrees of freedom of the solvent. For this reason, implicit solvation models are nowadays popular computational approaches to characterize solvent effects in the simulation of properties and processes of molecular systems in condensed phase [TP94,HN95,RS99,CT99,OL00,TMC05].
Some of the widely used continuum solvation models include the conductor-like screening model (COSMO), proposed in [KS93]; the surface and simulation of volume polarization for electrostatics (SS(V)PE) [Chi99,Chi06]; polarizable continuum model (PCM) [TMC05,MST81,CMT97,BC98,CRSB03] to name a few. In this paper, we focus on the continuum solvation model based on the Poisson-Boltzmann (PB) equations [YL90,NH91] which takes into account both the solvent relative dielectric permittivity and the ionic strength of the solvent.
We consider specifically the linearized Poisson-Boltzmann (LPB) equation which describes the electrostatic potential, ψ of the solvation model in the following form where ε(x) is the space-dependent dielectric permittivity function, κ(x) is the modified Debye-Hückel parameter, and ρ M (x) is the solute charge distribution. We denote the solute cavity by Ω and the solvent region by Ω C = R 3 \ Ω. To describe the solute-solvent region we will use the van-der Waals (vdW) surface (see Fig. 1). The solute cavity Ω is defined as a union of overlapping subdomains, Ω j , i.e., where each Ω j is a vdW ball with radius r j and center x j , and M is the total number of atoms. Then ε(x) has the form where ε 1 and ε 2 are the solute and solvent's dielectric permittivity, respectively. Furthermore, κ(x) has the form where κ > 0 is the Debye-Hückel screening constant of the solvent.
The classical PCM and the COSMO model can be considered as the special cases for PB solvation models. In the classical PCM, the solvent is represented as a polarizable continuous medium that is non-ionic (κ = 0) whereas the COSMO is a reduced version of the PCM, where the solvent is represented as a conductor-like continuum.
We would like to mention some of the widely used methods for solving the LPB equation, such as the boundary element method (BEM), the finite difference method (FDM), and the finite element method (FEM), and we refer to [LZHM08] for a review. The main idea of the BEM is to recast the LPB equation as an integral equation defined on a two-dimensional solute-solvent interface [YL90,BFZ02,ABWT09,BCR11]. It is an efficient way to solve the LPB equation, which can be optimized using fast multipole methods [ZPH + 15] and the hierarchical treecode technique [LZHM08]. The PAFMPB solver [LCHM10, ZPH + 15] uses the former optimization technique, whereas TABI-PB [GK13] uses the latter. The PB-SAM solver developed by Head-Gordon et al. [LHG06,YHG10,YHG13] discretizes the solute-solvent interface (such as the vdW surface) with grid points on atomic spheres like a collocation method and solves the associated linear system by use of the fast multipole method. It primarily targets the interaction of disjoint molecular compounds. However, one limitation of all these solvers relying on integral equations and layer potentials is that they cannot be generalized to solve the nonlinear PB (NPB) equation, as opposed to PDE-based methods such as the FDM or FEM.
The finite difference approach is the most popular method to solve linear or nonlinear PB equations. The main idea is to cover the region of interest with a big-box grid and choose different kinds of boundary conditions. Some of the popular software packages using the FDM include UHBD [MBW + 95], Delphi [LLS + 12], MIBPB [CCC + 10], and APBS [BSJ + 01, DCL + 07, JES + 17]. One of the drawbacks of the FDM is that the cost can increase considerably with respect to the grid dimension.
The finite element approach, compared to the FDM, provides more flexible mesh refinement and a proper convergence analysis [CHX07]. The SDPBS and SMPBS solvers offer fast and efficient approximations of the size-modified PB equation [SCLM16,GLS17,NSSL19]. More recently, domain-decomposition (dd) based approaches such as ddCOSMO and ddPCM have been developed. These methods do not require any mesh or grid of the molecular surface, are easy to implement, and are about two orders of magnitude faster than the state of the art [LLS + 14]. In particular, the ddCOSMO solver can perform up to thousands of times faster than equivalent existing algorithms.
Similar to the aforementioned dd approaches, the ddLPB method also does not require any mesh or grid but depends, as ddCOSMO and ddPCM, only on the Lebedev quadrature points [LL99] on a two-dimensional sphere. Hence it is convenient to apply in molecular dynamics without re-meshing the molecular surface as in the BEM. The ddLPB solver adopts a spectral Galerkin method for discretization and benefits from the high sparsity of the involved matrices for the Laplace and screened Poisson equations in Ω, which are coupled by a non-local integral equation on the boundary. The latter takes the majority of the cost but can be further accelerated using, for example, the fast multipole method (FMM). Numerical implementations show that the ddLPB solver is very efficient even without acceleration techniques (see [QSM19] for details); its FMM-acceleration is ongoing work but not within the scope of this article. The focus of this work is to develop the framework for the computation of first derivatives and the forces for the ddLPB method; the FMM-acceleration is a subsequent step. As the spheres are centered around their nuclei, the computation of the forces becomes natural. By the nature of the problem, this is a very technical task, but a necessity in order to make the method accessible to models requiring the gradient of the solvation energy with respect to the nuclear coordinates, such as molecular dynamics or geometry optimization.
The paper is divided as follows: Section 2 introduces the notation and gives a summary of the domain decomposition algorithm for the LPB equation. In Section 3 we derive the forces and compute the analytical derivatives. Lastly, in Section 4 we present a comprehensive numerical study, before we conclude in Section 5.
Linear Poisson-Boltzmann Equations
One notes that the LPB equation (1) can be written as two equations, one defined in the solute cavity Ω, namely the Laplace equation given by which is obtained from transforming the Poisson equation by using the transformation ψ r = ψ − ψ 0 where ψ 0 is the potential generated by ρ M in the vacuum, i.e., and a homogeneous screened Poisson (HSP) equation defined on the solvent region given by where S κ : H −1/2 (Γ) → H 1/2 (Γ) denotes a single-layer operator on Γ and H ±1/2 (Γ) denote the fractional Sobolev spaces [Ada75]. We call ψ r and ψ e the reaction potential and the extended potential, respectively. In this paper, we assume that the solute's charge distribution ρ M is supported in Ω and in particular given by the sum of M point charges, i.e., where q i denotes the (partial) charge carried on the i th atom with center x i , and δ is the Dirac delta distribution, but the framework can easily be generalized to non-classical charges under the usual assumption supp (ρ M ) ⊂ Ω.
Domain Decomposition Algorithm
The domain decomposition algorithm that we will consider in this paper has been derived in [QSM19]. For brevity, we will not be deriving the whole method, but we will only present the main equations required for the derivation of analytical forces. We first introduce certain notations and functions that will be used throughout the paper. We denote the characteristic function on Ω i by χ i , i.e., and then let where N i denotes the set of indices of spheres intersecting Ω i (i not included). We make the convention that if |N i | = 0, we define ω ij (x) = 0 for all j. The boundary Γ i of the sphere Ω i can either be on the solute-solvent boundary, Γ, i.e., on the external part or inside the solute cavity, i.e., the internal part. To distinguish between the two cases we define the characteristic function, χ e i (x) as where Γ e i and Γ i i denote the external and internal part of the boundary Γ i respectively, see Fig. 2. With the definition of ω ij (x) from Eq. (6) we have the relation Ωi We define the radial scaling function of order depending on the i th atom by The angular dependency relative to the i th atom is denoted by where Y m : S 2 → R is the real-valued orthonormal spherical harmonic of degree and order m. Moreover, we define the following radial Bessel function by where i (x) is the modified spherical Bessel's function of the first kind. Finally, we have integrals over the unit sphere S 2 which will be numerically approximated using the Lebedev quadrature rule [LL99] with N leb points. The approximation over the sphere Ω i is given by where x n i = x i + r i s n , s n ∈ S 2 , and ω n is the quadrature weight. The fully discretized domain decomposition algorithm for the LPB equation gives rise to the system of equations given by The matrices A, B, C 1 , and C 2 are of the size M ( max + 1) 2 × M ( max + 1) 2 where max denotes the maximum degree of spherical harmonics. 
The vectors G 0 and F 0 on the right-hand side correspond to ψ 0 and ∂ n ψ 0 , respectively, and X r and X e denote the solution vectors corresponding to the reaction potential and the extended potential, respectively. After calculating X, we can approximate ψ r and ψ e respectively by a linear combination of spherical harmonics as follows and We now show the specific formulas of the matrices. The (i m, j m ) th matrix entry for A is given by, and the (i m, j m ) th matrix entry for B is given by, We note that both the matrices A and B are sparse in nature, as blocks are nonzero only for interlocking vdW balls. Next, we move to the matrices C 1 and C 2 where the (i m, j m ) th entry of C 1 is given by and for C 2 by, where matrix Q is a matrix of size M ( max + 1) 2 × M N leb and the (j m , in) th entry is given by where k j 0 (x) is defined similarly to Eq. (10) given by k 0 (x) is the modified spherical Bessel's function of the second kind, Finally, we have the right-hand side vectors. The (i m) th entry of the vector G 0 is given by where is the solution of Eq.
(3) and the (i m) th entry of F 0 is given by where and
Computation of Forces
The computation of the electrostatic solvation energy, E s in [QSM19], follows the ideas of [FBM02] where the reaction potential was used to compute E s . For the computation of forces, we require the whole electrostatic potential and hence we define E s as where X is given in Eq. (11), Q has the same size as X with and the inner product ·, · j is given by The force with respect to a parameter λ, such as the position of x k of the k th atom, is given by, The ddLPB system is given by LX = g. Taking the derivative with respect to λ: Substituting ∇ λ X in the force computation where L * is the adjoint of the matrix L and L −1 * Q is the solution of the system Using the definition of X adj we get the computation of forces as We note that in Eq. (28) we require the computation of the adjoint system (but only once for any number of different parameters λ) and the derivatives of the g and L matrix. The adjoint matrix of the system is given by where A T stands for the transpose of the matrix A and respectively others.
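The adjoint trick behind Eq. (28) — one extra solve with the adjoint L* instead of one solve per parameter λ — can be illustrated on a toy system. The sketch below uses a hypothetical 2 × 2 matrix and made-up derivative entries, not the actual ddLPB operators; it only demonstrates that ⟨X adj , ∇ λ g − (∇ λ L)X⟩ with L T X adj = Q reproduces the direct derivative ⟨Q, L −1 (∇ λ g − (∇ λ L)X)⟩.

```python
# Sketch of the adjoint method for dE/dlambda where E = <Q, X> and L X = g.
# Toy 2x2 system with hypothetical entries; not the ddLPB matrices.

def solve2(L, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = L[0][0] * L[1][1] - L[0][1] * L[1][0]
    return [(b[0] * L[1][1] - b[1] * L[0][1]) / det,
            (L[0][0] * b[1] - L[1][0] * b[0]) / det]

def transpose2(L):
    return [[L[0][0], L[1][0]], [L[0][1], L[1][1]]]

L = [[4.0, 1.0], [2.0, 3.0]]
g = [1.0, 2.0]
Q = [1.0, -1.0]
# Derivatives of L and g w.r.t. one parameter lambda (hypothetical values):
dL = [[0.1, 0.0], [0.0, 0.2]]
dg = [0.3, 0.0]

X = solve2(L, g)                      # primal solve: L X = g
X_adj = solve2(transpose2(L), Q)      # one adjoint solve: L^T X_adj = Q

# Adjoint route: dE/dlambda = <X_adj, dg - dL X>  (the Eq. (28) pattern)
dLX = [dL[0][0] * X[0] + dL[0][1] * X[1], dL[1][0] * X[0] + dL[1][1] * X[1]]
dE_adjoint = sum(a * (b - c) for a, b, c in zip(X_adj, dg, dLX))

# Direct route for cross-checking: dE = <Q, L^{-1}(dg - dL X)>
dX = solve2(L, [dg[0] - dLX[0], dg[1] - dLX[1]])
dE_direct = Q[0] * dX[0] + Q[1] * dX[1]
```

The adjoint route needs only one extra linear solve regardless of how many parameters λ (e.g., 3M nuclear coordinates) are differentiated, whereas the direct route needs one solve per parameter.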
In the next subsection we present the analytical derivatives that arise in Eq. (28).
Analytical Derivatives
We now restrict ourselves to the case where λ denotes the central coordinate x k of the k th atom. We note that entries of matrix L and vector g have certain functions that are not smooth, namely, χ i (x), χ e i (x), and ω ij (x). To define their differentiable counterparts, we follow the ideas presented in [LSC + 13]. We first introduce a polynomial, p η (t), where η is a smoothness parameter. Then the regularized characteristic function is given by Using Eq. (30), the regularized version ω η ij (x) of ω ij (x) defined in Eq. (6) is given by with where and r j 1 is defined in Eq. (8). Finally, the differentiable counterpart of χ e i (x) is given by One thing to note is that in the definition of d i (x) we have a minimum, which is not a smooth function. On close inspection we note that if f i (x) < 1, then d i (x) = 1, else d i (x) = 1/f i (x).
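As an illustration of this regularization idea only — the exact polynomial p η of [LSC + 13] is not reproduced here — the sketch below uses a generic C 1 smoothstep as an assumed stand-in for p η , switching the characteristic function of a ball smoothly off over a window of relative width η near the sphere boundary.

```python
# Generic C1 smoothstep used as a stand-in for the regularized
# characteristic function chi_eta of a ball of radius r.
# Assumption: the actual ddLPB polynomial p_eta differs in its coefficients.

def smoothstep(t):
    """C1 polynomial rising from 0 at t = 0 to 1 at t = 1."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    return 3.0 * t * t - 2.0 * t ** 3

def chi_eta(dist, r, eta):
    """1 well inside the ball, 0 outside, smooth over a shell of width eta*r."""
    # map the distance to the switching variable: 0 at r, 1 at (1 - eta)*r
    t = (r - dist) / (eta * r)
    return smoothstep(t)
```

A smooth (here C 1 ) switch is what makes the derivatives with respect to the sphere centers well defined in the force expressions.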
Sparse matrices A and B
As noted in the previous sections, the matrices A and B are sparse in nature with constant diagonal entries.
As we are finding derivatives with respect to the position of sphere Ω k , i.e., x k , we have the following cases which gives non-zero contribution 1. j ∈ N i and i = k (see Subfig. 3a); 2. j ∈ N i and j = k (see Subfig. 3b); 3. j ∈ N i and k ∈ N i and k = j (see Subfig. 3c and 3d). Fig. 3 shows the aforementioned cases. Looking at the matrix entries for A and B we note that we have three terms depending on the position, namely ω η ij (x n i ), Y j m (x n i ), and r j (x n i ) for matrix A; and i j (x n i ) for matrix B.
For brevity, we denote ∇ x k by ∇ k in what follows. The derivative of ω η ij (x n i ) is given by where Further, the derivative of Y j m (x n i ) is given by We now show the details of the derivation of Eq. (36). Note that ∀x = (x 1 , x 2 , x 3 ), we have and which yield that Eq. (36) then follows. Lastly, we have the derivatives of the radial scaling r j (x n i ) given by and the Bessel scaling i j (x n i ) which is given by Collecting all the terms, we can compute the derivatives of the (i m, j m ) th element of matrices A and B. In the case of i = j, We now consider the different cases for i ≠ j as follows.
1. Case j ∈ N i and k = i (Subfig. 3a): 2. Case j ∈ N i and k = j (Subfig. 3b): 3c and 3d): 3.1.2 Dense matrices C 1 and C 2 Now, we move our attention towards the computation of derivatives for the matrices C 1 and C 2 . We compute the derivative of C 1 and C 2 together, i.e., we consider where the (i m) th entry of [C 1 X r + C 2 X e ] is given by: We note that we have two terms depending on x k , i.e., χ η i (x n i ) and [Q] in j m . Unlike for matrices A and B we have non-trivial contributions on the diagonal as well. We divide the computation of derivative of Eq. (44) into two parts with help of the product rule as follows Derivative of χ η i (x n i ). The first contribution is the derivative of χ η i (x n i ) when keeping Q as constant. The non zero contribution comes when k = i or k ∈ N i . Combining (34) and (35), we have Here we use the fact that if f i n > 1, then χ η i (x n i ) = 0; if f i n ≤ 1, then d i (x n i ) = 1.
Derivative of [Q] in j m . The second contribution comes from the derivatives of matrix Q. The entries are given by Eq. (18).
In this matrix we note that three terms depend on the position namely, Pχ η j m 0m0 , k j 0 (x n i ), and Y j 0 m0 (x n i ). To be precise, we have The non-zero contribution of the derivative for k j 0 (x n i ) and Y j 0m0 (x n i ) comes when k = i or k = j. The derivative of k j 0 is given by: while the derivative of Y j 0 m0 (x n i ) is already given by Eq. (36) with , m replaced by 0 , m 0 . The final contribution comes from the derivative of Pχ η j m 0m0 . We have the computation of where the derivative of χ η j x n0 j is given by Eq. (45) with i, n replaced by j, n 0 .
Right-hand side G 0 and F 0
The final derivatives we require are those of the right-hand side G 0 and F 0 . In Eq. (21) we have two terms depending on x k ; χ η i (x n i ) and ψ 0 (x n i ). The derivative of χ η i (x n i ) is given by Eq. (45) and the derivative of ψ 0 is given by Next we move towards the computation of the derivative of F 0 . We note that the entries of F 0 are very similar to the entries of [C 1 X r + C 2 X e ], with only the addition of the term ∂ n ψ 0 (x n i ). The computation of the other terms, namely χ η i (x n i ), k j 0 (x n i ), and Y j 0m0 (x n i ), has been shown before. The derivative of ∂ n ψ 0 is given by where I 3×3 is the identity matrix of size 3 × 3 and n = s n at x n i is the unit normal. The computation of forces can be summarized as follows: 1. Solve Eq. (11) to get the reaction potential X r and the extended potential X e .
2. Solve the adjoint system to obtain X adj .
3. Compute the analytical derivatives of the matrix L and the right-hand side g with respect to a parameter λ.
4. Assemble the forces according to Eq. (28).
Numerical Simulations
In this section, we present the numerical studies for the computation of forces. Before presenting the examples, we would like to mention some details on solving the systems (1) and (27). We follow the same ideas as prescribed in [QSM19]. We transform the system (1) into a fixed-point iteration where the ν th iterative step is given by We refer to this as the outer (macro-)iteration, with initial condition X (0) r X (0) e T = 0. Applying the preconditioner also requires solving two linear systems, and we refer to these as micro-iterations as they are also performed in an iterative manner. For the two inner linear systems, we use as initial guess the solution of the previous macro-iteration, or zero if solving them for the first time.
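The nested macro/micro iteration with its relative-increment stopping rule can be sketched abstractly. In the sketch below, the update maps are hypothetical contractions standing in for the ddLPB operators; only the stopping logic (ℓ ∞ relative increment, inner tolerance equal to the outer tolerance divided by 100) mirrors the procedure described here.

```python
# Sketch of nested fixed-point iterations with the relative-increment
# stopping criterion ||x_new - x_old||_inf / ||x_new||_inf < tol.
# The update maps are hypothetical contractions, not the ddLPB operators.

def rel_increment(x_new, x_old):
    num = max(abs(a - b) for a, b in zip(x_new, x_old))
    den = max(abs(a) for a in x_new)
    return num / den

def fixed_point(update, x0, tol, max_iter=1000):
    x = x0
    for it in range(1, max_iter + 1):
        x_new = update(x)
        if rel_increment(x_new, x) < tol:
            return x_new, it
        x = x_new
    return x, max_iter

outer_tol = 1e-6
inner_tol = outer_tol / 100.0   # micro-iterations converged 100x tighter

def inner(y):
    # toy micro-iteration: contraction z -> 0.5 z + 1 (fixed point 2)
    sol, _ = fixed_point(lambda z: [0.5 * zi + 1.0 for zi in z], y, inner_tol)
    return sol

def outer_update(x):
    y = inner(x)   # warm-started from the current macro-iterate
    return [0.25 * xi + 0.1 * yi for xi, yi in zip(x, y)]

x, iters = fixed_point(outer_update, [0.0, 0.0], outer_tol)
```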
For each linear system, the stopping criterion is on the relative increment of the solution, i.e., where · ∞ is the ℓ ∞ -norm of the corresponding vector. We use two different tolerances for the macro- and micro-iterations, namely the inner tolerance is equal to the outer tolerance divided by 100. The code was tested on a set of input structures with different numbers of atoms, spanning from 10 1 to 10 4 atoms. We prepared the input structures using the tool PDB2PQR provided in the APBS software package [JES + 17], with the AMBER force field to assign the atomic partial charges [PC03]. The radii were assigned in a subsequent step, according to a definition of a solvent accessible surface (SAS): for each atom we set its radius to the value reported in ref. [Bon64] plus a contribution from the effective size of the solvent (1.4 Å for water). Table 1 reports detailed information about the structures.
Comparison between ddLPB and APBS
Once the structures were ready, we performed a series of calculation using both APBS and ddLPB. All the calculations were performed on a server equipped with four Intel(R) Xeon(R) Gold 6140M running at 2.30 GHz, for a total of 72 cores, and 1.2 TB of RAM.
We set the (relative) dielectric constant of the solute's region to 1 (vacuum) and the dielectric constant of the environment to 78.54 (water). We included two ions of charge +1 and −1, both in concentration 0.1 M, which, combined with a temperature of 298.15 K, corresponds to κ = 0.104. For what concerns APBS, the calculations were performed using the box provided by PDB2PQR (keyword key), which is enough to contain the structures, and a number of grid points (keyword grid) suitable for the multigrid algorithm, hence calculated using n = c·2^(ℓ+1) + 1, where n is the number of grid points along a given dimension, ℓ is the depth of the multilevel solver (keyword nlev), and c is an arbitrary integer. We choose c such that, with ℓ = 4, a certain target density of points is achieved. In the following discussion, we report the actual density of points computed with ℓ = 4 and as an average over the three dimensions. Finally, the remaining relevant keywords are chgm = spl4, bcfl = mdh, srad = 0.0, and swin = 0.3.
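The multigrid admissibility constraint n = c·2^(ℓ+1) + 1 can be tabulated directly; with depth ℓ = 4, consecutive values of c reproduce the familiar APBS grid dimensions. The helper below is a minimal sketch of that arithmetic.

```python
# Admissible grid dimensions for a multilevel solver of depth l:
# n = c * 2**(l + 1) + 1, with c a positive integer.

def grid_points(c, level):
    return c * 2 ** (level + 1) + 1

# With depth l = 4, consecutive values of c give the usual sizes.
sizes = [grid_points(c, 4) for c in range(2, 6)]
print(sizes)  # [65, 97, 129, 161]
```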
For what concerns ddLPB, we set the tolerance tol = 10 −6 , the number of Lebedev grid points to 302, and the smooth-switching window using η = 0.1. The maximum degree of spherical harmonics is set to values between 2 and 12, to study the convergence of the results.
We compare the energies obtained from APBS and ddLPB for the molecules presented in Table 1. Since there is no exact energy, we compute the reference energy of APBS from linear extrapolation and the one of ddLPB from exponential fitting, respectively (see [QSM19] for details). The energy and memory w.r.t. different discretization parameters are illustrated in Figure 4 and Tables 2-3. It can be observed that for each molecule, the difference between the two reference energies is less than 1%, which validates the energy computation of ddLPB. Tables 2 and 3 give the numerical values for the solvation energy and memory presented in Figure 4, as well as the number of macro-iterations. Solving the linear system requires a relatively low number of macro-iterations, thus making the method particularly efficient. Furthermore, the number of macro-iterations is stable regardless of the input structure, suggesting that the method retains its efficiency even on systems different from those presented in this benchmark. The memory usage is quadratic in the maximum degree of spherical harmonics and linear in the number of atoms; the latter is a requirement for applying the method to very large systems. On the other hand, APBS has memory requirements linear with respect to the system's volume and cubic in the grid point density. For both codes, increasing the accuracy results in higher memory usage; however, the lower scaling of the ddLPB memory requirements makes it possible to achieve higher accuracies. As an example, for the intermediate-size system 1du9, by using ddLPB it is possible to achieve an accuracy of ∼0.2% with a memory usage of 7.7 GB, whereas the same accuracy cannot be achieved using APBS due to a too high memory usage. The same finding holds also for the larger systems.
Numerical validation of analytical forces
The analytical forces computed by Eq. (28) have been tested against numerical forces obtained through finite differences. The numerical forces are evaluated using the forward-difference definition D h [E s ](λ) = (E s (λ + h) − E s (λ))/h. Here, λ is a generic parameter, for instance one component of a nuclear coordinate, and 0 < h ≪ 1 is a small step size.
For the numerical test, we selected the two smallest structures (1ay3, 1etn) and computed all the numerical derivatives with respect to the nuclear coordinates using Eq. (52), for various finite step sizes. Due to the high computational cost related to the repeated number of calculations, we used a coarser discretization: maximum degree of spherical harmonics 2, 110 Lebedev grid points, convergence threshold set to 10 −6 .
Due to the finite difference approximation of the analytical derivative, we expect first-order convergence of the error with respect to h. Note that the force acting on the component α = 1, 2, 3 of nucleus j due to the solvation model is given by F j,α = −∂E s /∂x j,α . Figure 5 illustrates the convergence of the maximum (ℓ ∞ -error) and the root-mean-squared deviation (RMSD), or equivalently the ℓ 2 -error, of the error vector Err as a function of h, and first-order convergence is indeed observed. However, beyond h = 10 −5 , the finite precision of the algorithms interferes with the convergence of the numerical forces.
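The expected first-order behaviour of such a forward difference can be reproduced on any smooth function. The sketch below uses a toy "energy" (not the ddLPB E s ) and checks that the error of D h [E](λ) shrinks roughly tenfold when h is reduced tenfold.

```python
import math

# Forward-difference check: the error of D_h[E](lam) = (E(lam+h) - E(lam))/h
# decays like O(h) for a smooth toy "energy" E (an assumed stand-in).

def E(lam):
    return math.sin(lam) + 0.5 * lam ** 2

def dE(lam):                      # analytical derivative for comparison
    return math.cos(lam) + lam

def forward_diff(f, lam, h):
    return (f(lam + h) - f(lam)) / h

lam = 0.7
errors = [abs(forward_diff(E, lam, h) - dE(lam)) for h in (1e-2, 1e-3, 1e-4)]
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
# first-order convergence: reducing h tenfold scales the error by ~10
```

Past a certain h, round-off cancellation in E(λ + h) − E(λ) takes over, which is the floating-point effect described above for h beyond 10^−5.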
Computing the finite-difference approximation D h [E s ](x j,α ) with respect to every coordinate becomes very expensive for large structures, which is the reason we restrict this benchmark and verification to the Table 3: Solvation energy, memory, and number of iterations for ddLPB and APBS. "Rel. En." stands for the relative energy, "Mem." stands for the memory, "Iter." stands for the number of macro-iterations, and "h" stands for the grid spacing (APBS). Step size (Å) Gradient (kJ mol −1 Å −1 ) 1etn Max Diff RMSD Figure 5: Comparison between numerical forces at various step sizes, and the analytical forces for the 1ay3 (left) and 1etn (right) molecules. The two curves report the maximum difference and the root-mean-squared deviation (RMSD) between the two sets of forces.
Conclusion
In this work, we provide the detailed derivation of analytical forces for the ddLPB numerical method, which efficiently approximates solutions to the linearized Poisson-Boltzmann equation, a frequent model used in computational (bio-) chemistry. The derivation is technical but mandatory and is based on an adjoint method to compute analytical derivatives of the energy with respect to (possibly many) external parameters, such as the nuclear coordinates, which result in the computation of the forces. The implementation of the energy and forces has been validated by a series of benchmark problems and by comparing the results with those of the APBS package. The current implementation scales quadratically w.r.t. the number of atoms and it is work in progress to accelerate the quadratic bottlenecks with the fast multipole method (FMM) to achieve a fully linearly scaling ddLPB implementation for energy and forces.
Effect of intermittent shade on nitrogen dynamics assessed by 15N trace isotopes, enzymatic activity and yield of Brassica napus L.
Modern agriculture is concerned with the environmental influence on crop growth and development. Shading is one of the crucial factors affecting crop growth considerably, yet it has been neglected over the years. Therefore, a two-year field experiment was conducted to investigate the effects of shading at the flowering (S1) and pod development (S2) stages on nitrogen (N) dynamics, carbohydrates and yield of rapeseed. Two rapeseed genotypes (Chuannong and Zhongyouza) were selected to evaluate the effects of shading on 15N trace isotopes, enzymatic activities, dry matter, nitrogen and carbohydrate distribution and their relationship with yield. The results demonstrated that both shading treatments disturbed nitrogen accumulation and transportation at the maturity stage. It was found that shading induced the downregulation of the N mobilizing enzymes (NR, NiR, GS, and GOGAT) in leaves and pods at both developmental stages. Shading at both growth stages resulted in reduced dry matter of both varieties, but only S2 exhibited a decline in pod shell and seed dry weight in both years. Besides this, carbohydrate distribution toward economic organs declined under the S2 treatment, and its substantial impact was also reflected in seed weight and seed number per pod, which ultimately decreased the yield of both genotypes. We also revealed that yield is positively correlated with dry matter, nitrogen content and carbohydrate transportation. In contrast to Chuannong, the Zhongyouza genotype performed relatively better under shade stress. Overall, it was noticed that shading at the pod development stage considerably affected the transportation of N and carbohydrates, which led to reduced rapeseed yield compared with shading at the flowering stage. Our study provides basic theoretical support for management techniques for rapeseed grown in low-light regions and revealed the critical growth stage that can be negatively impacted by low light.
Introduction
In the modern era, agriculture is concerned with the environmental impact on crop yield and nutritional quality. Rapeseed (Brassica napus L.) is one of the most frequently consumed oilseed crops worldwide, with double the oil yield per hectare of soybean. After rice, maize, and wheat, rapeseed is China's fourth most farmed crop (Hu et al., 2022). The Yangtze River Basin is the main rapeseed-producing region, where farmers adopt an intensive cropping system to get better yields (Li et al., 2018). Furthermore, the demand for rapeseed oil as a sustainable energy source has risen significantly (Ahmad et al., 2011). Light is possibly the most geographically and temporally variable of all the environmental conditions that affect plant performance (Nascimento et al., 2015). Light signals photomorphogenesis and supplies energy to develop plant assimilatory power (Kumar et al., 2016). Global climate change has reduced daylight hours and solar radiation during the last 50 years (Ren, 2005). Clouds and greater plant populations can restrict light availability, especially in later growth phases. Under the influence of meteorological and environmental factors, the tallest crops are frequently susceptible to low light stress or self-shading (Gao et al., 2018). The impact of shade stress depends on the cultivar, growth stage, shading intensity, and shading duration. Shade stress damages the plant's morphology and ultrastructure, limiting chlorophyll synthesis and lowering the canopy's photosynthetic capability (Li et al., 2010; Mu et al., 2010; Bellasio and Griffiths, 2014). As a result, shading stress lowers photosynthate production and grain yield (Clay et al., 2009; Chikov et al., 2016; Ren et al., 2016). Light plays a vital role in plants' photosynthate accumulation and nutrient intake and distribution (Clay et al., 2009; Cui et al., 2013).
Absorption, assimilation, and transport of nitrogen (N) directly impact growth and development (Bu et al., 2014; Jia et al., 2014; Ihtisham et al., 2018). Nitrate is the most prevalent form of nitrogen available to plants due to the quick nitrification of the regularly used reduced forms of nitrogen. After being absorbed by the plant, nitrate needs to be converted to an ammoniacal form in order to be incorporated into amino acids for protein synthesis. The first enzyme that carries out the rate-limiting step in converting nitrate to ammonia in the nitrate assimilatory pathway is nitrate reductase, which is substrate-inducible (Eilrich and Hageman, 1973). Inorganic nitrogen can only be absorbed and utilized when transformed into organic nitrogen, with glutamate and glutamine being the major assimilation metabolites generated from ammonia. Glutamine synthetase (GS)/glutamate synthase (GOGAT) was discovered to catalyze ammonia assimilation (Lea and Miflin, 1974), and it was determined to be the principal mechanism for ammonia assimilation in higher plants (Hirel et al., 2001; Miflin and Habash, 2002; Glevarec et al., 2004; Martin et al., 2006). The transamination that transfers amino groups from glutamate to other amino acids performs crucial functions in nitrogen metabolism (Lea et al., 1992). The accumulation and partition of photosynthate determine the grain yield (Sun et al., 2017; Zhai et al., 2017). Before anthesis, a large amount of carbohydrates and nitrogenous chemicals accumulates, which is then reallocated to the grain (Yang et al., 2001; Xu et al., 2006). The content of grain N depends on the rate of nitrogen accumulation and the proportion of translocation from distinct organs of the crop (Chen et al., 2015a). Additionally, the ratio of nitrogen translocated from the vegetative organs to the grain is influenced by climatic factors, management techniques, soil nutrients, and water availability, which are crucial for crop yield (Dordas and Sioulas, 2009).
The Yangtze river basin is part of the southern region of China, where light intensity is significantly decreased and plants face low light stress during different growth stages of different crops (Setień et al., 2013; Gao et al., 2017b). Thus, it is critical to explore the accumulation and remobilization of dry matter (DM), N and carbohydrates under shading stress at different growth stages of rapeseed. Although many studies have examined the changes in N distribution in response to various growth conditions such as temperature, precipitation, and nitrogen-deposition conditions (Villar-Salvador et al., 2015), very few have focused on the effects of shade stress on N assimilation and distribution at different growth stages of crops, especially rapeseed. We therefore hypothesized that low light stress at the pod development stage significantly alters N dynamics and carbohydrate transportation and ultimately causes yield reduction. Artificial shade environments were used to simulate field shade conditions to investigate plant dry matter and nitrogen accumulation processes. The accumulation and translocation of nitrogen were investigated using the 15N stable isotope tracer under shading at various growth stages of rapeseed. The specific objectives of this study were to quantify the effects of various shading periods on rapeseed dry matter and N accumulation and to identify the critical growth stage that has the most significant impact on N dynamics and yield in rapeseed plants.
Experimental location
A two-year field experiment was carried out at Huihe village, Chengdu plain, Sichuan province (102°54′-104°53′ E, 30°05′-31°26′ N) from 2020-22. It is a subtropical region with an average temperature of 16.1°C, annual total precipitation of 1780 mm, and a total sunshine duration of 1050 h (Sichuan Province Agrometeorological Center, China). The basic soil fertility includes organic matter (20.3 g/kg), total nitrogen (1.3 g/kg), available phosphorus (0.015 mg/kg), available potassium (0.118 mg/kg), and pH (6.7) in the topsoil layer (0-20 cm). The monthly temperature and rainfall during the rapeseed growing season are shown in Figure 1.
Experimental materials and layout
Two rapeseed genotypes (Chuannong and Zhongyouza) were involved in the two-year field trial. These genotypes are abundantly cultivated in Sichuan province, especially in the higher reaches of the Yangtze river basin. The experiment employed a two-factor split-plot design. Three shading treatments were established at various growth stages of rapeseed: S0 = control (ambient light), S1 = shade from GS5 to GS6 and S2 = shade from GS7 to GS8 (Figure 2). The plants were enclosed by a layer of black polyethylene nets, which blocked approximately 35% of solar radiation. The two cultivars were assigned to the main plots and the shading treatments to the subplots. All treatments were replicated three times, yielding 24 plots with a 12 m² plot size. The prior harvested crop was rice, and the soil fertility was medium. The field was tilled before sowing, and rows were marked by manually pulling a rope, maintaining a row-to-row distance of 33 cm and a plant-to-plant gap of 20 cm. One seedling was left in each hole after emergence, and the baseline planting density was 150,000 plants/ha. Phosphorus and potassium fertilizers were applied at a rate of 90 kg/ha as base fertilizer. Nitrogen fertilizer was applied at a rate of 90 kg/ha in split doses of 50% as base fertilizer + 50% topdressing at the seedling stage. Local practices were used to control pests and weeds.
Sampling and measurement

Yield parameters
At maturity, 10 plants were chosen to measure the number of pods per plant, number of seeds per pod, 1000 seed weight and yield of both genotypes.
Dry matter determination
At maturity stage, 10 plants were separated into stems, pod shells, and seeds to determine the dry matter. After that, samples were dried for 30 minutes at 105°C, followed by drying at 80°C, until a consistent weight was attained and the data was recorded as dry matter.
Determination of nitrogen content
Six plants were divided into stems, leaves, pod shells, and seeds at the GS6 and GS8 stages. Afterward, the samples were dried at 105°C for 30 minutes to stop enzymatic activity, then dried at 80°C until a constant weight was obtained. Samples were then crushed with a mortar and sieved through a 0.5 mm sieve. A semi-automatic Kjeldahl nitrogen analyzer (FOSS 2300) was used to determine total nitrogen content (Sparks et al., 2020). The following indices were calculated following previously published methods (Dordas and Sioulas, 2009; Gao et al., 2020):

NHI = seed N at GS8 / total N of above-ground biomass at GS8

NA (g plant⁻¹) = seed N at GS8 − NT

Note: N = nitrogen; GS6 = pod development growth stage; GS8 = harvesting stage; NT = nitrogen translocation; NTE = nitrogen translocation efficiency; NCP = nitrogen contribution proportion; NHI = nitrogen harvest index and NA = nitrogen assimilation.
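Once the organ-level N pools are known, these indices reduce to simple arithmetic. The sketch below uses made-up organ values (not the study's data) and assumes the standard formulations of NT, NTE and NCP from Dordas and Sioulas (2009), since the text names those three indices without spelling out their equations.

```python
# Nitrogen partitioning indices (all N pools in g per plant).
# Organ values are illustrative, not taken from the study.
veg_n_gs6 = 0.90   # vegetative (leaf + stem + pod shell) N at GS6
veg_n_gs8 = 0.35   # vegetative N remaining at GS8
seed_n    = 0.75   # seed N at GS8
total_n   = veg_n_gs8 + seed_n        # above-ground N at GS8

nt  = veg_n_gs6 - veg_n_gs8           # NT: N translocated out of vegetative organs (assumed definition)
nte = nt / veg_n_gs6 * 100            # NTE: translocation efficiency, % (assumed definition)
ncp = nt / seed_n * 100               # NCP: share of seed N supplied by translocation, % (assumed definition)
nhi = seed_n / total_n                # NHI: N harvest index
na  = seed_n - nt                     # NA: N assimilated after GS6 that reaches the seed
print(f"NT={nt:.2f} g, NTE={nte:.1f}%, NCP={ncp:.1f}%, NHI={nhi:.2f}, NA={na:.2f} g")
```

With these toy pools, roughly 61% of the vegetative N is remobilized and supplies about 73% of the seed N; the remainder is attributed to post-GS6 assimilation.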
Three plants with similar phenological characteristics in each plot were labeled with 15N at GS5. Labeled plants of each plot were harvested at the end of GS7 and divided into leaves, stem, pod shell and grain. The samples were dried at 105°C for 30 minutes and then at 80°C in an oven (DHG-9423A, Shanghai SANFA Scientific Instrument Co., Ltd.) to attain a constant weight. All of the samples were ground into powder and sieved at 200 mesh. The enrichment of 15N in 4 mg powdered plant samples was determined using an isotope 100 mass spectrometer (Isoprime, Manchester, UK). The control treatment was measured based on plants without 15N isotope tracing, and the accumulation of 15N in organs was calculated following Clay et al. (2016).

Figure 2: Diagrammatic representation of shading treatment at different growth stages of rapeseed. Control (ambient light) (S0); shade from GS5 to GS6 (S1); shade from GS7 to GS8 (S2); germination and emergence stage (GS1); leaf development stage (GS2); side-shoot development stage (GS3); stem prolongation stage (GS4); inflorescence emergence (GS5); flowering stage (GS6); pod development (GS7); harvesting stage (GS8).
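The tracer arithmetic behind such measurements can be illustrated with a standard atom%-excess calculation. The natural-abundance baseline (0.3663 atom%) and all organ values below are assumptions for illustration only; they are not the study's data, and this is not necessarily the exact formulation of Clay et al. (2016).

```python
# Illustrative atom%-excess calculation for 15N tracer accumulation.
NATURAL_ABUNDANCE = 0.3663  # atom% 15N in unlabeled control plants (assumed baseline)

def n15_accumulation(organ_n_mg, atom_percent):
    """Tracer-derived 15N (mg) in one organ: N pool times atom% excess over background."""
    excess = atom_percent - NATURAL_ABUNDANCE
    return organ_n_mg * excess / 100

# Hypothetical organ data: total N (mg) and measured 15N enrichment (atom%)
organs = {
    "leaf": (120.0, 1.20),
    "stem": (80.0, 0.95),
    "pod shell": (60.0, 0.80),
    "seed": (250.0, 1.50),
}

per_organ = {name: n15_accumulation(n_mg, ap) for name, (n_mg, ap) in organs.items()}
whole_plant = sum(per_organ.values())
print({k: round(v, 3) for k, v in per_organ.items()}, round(whole_plant, 3))
```

Summing the per-organ values gives whole-plant tracer recovery, which is how the trends across S0, S1 and S2 in Figure 5 would be compared.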
Assay of NR, NiR, GS and GOGAT activities
Samples of fresh leaves and pods were collected in liquid nitrogen at 10-day intervals following shading to assess enzyme activity. The activities of nitrate reductase (NR), nitrite reductase (NiR), glutamine synthetase (GS), and glutamate synthase (GOGAT) were assayed according to previously described procedures (Liang et al., 2011; Majlath et al., 2016; Khan et al., 2020).
Total non-structural carbohydrates
The plant samples were oven-dried for 30 minutes at 105°C and then kept at 80°C until they reached a constant weight. After that, samples were pulverized in an electric mortar, and 0.1 g of powder was added to 6 mL of 80% ethanol, placed in a water bath at 80°C for 40 minutes and centrifuged for 5 minutes at 5000 rpm. The supernatant was transferred to 50 mL tubes as the primary solution, and the procedure was repeated twice. The primary solution was made up to 50 mL with 80% ethanol. For decolorization, 0.1 g charcoal was added to the primary solution, which was then filtered and used for the following analyses (Asghar et al., 2020).
Determination of sucrose
To determine sucrose content, 0.9 mL of primary solution was taken into test tubes, 0.1 mL of 2 M NaOH was added, and the tubes were placed in a water bath for 10 minutes. After heating, samples were allowed to cool at room temperature for 15 minutes. The mixture was then heated at 80°C with 3 mL of 10 M HCl and 1 mL of 0.1% resorcinol for 10 minutes. The supernatant was taken and absorbance was measured in a spectrophotometer at 480 nm (Spectra Max i3x, Austria) (Ghafoor et al., 2021).
Determination of reducing sugar
In 10 mL test tubes, 1.5 mL primary solution, 0.5 mL deionized H 2 O, and 1.5 mL DNS solution were mixed to determine the reducing sugars. After that, the tubes were placed in 80°C water bath for 10 minutes. A spectrophotometer measured the absorbance at 520 nm in the supernatant (Spectra Max i3x from Austria).
Determination of soluble sugar
To measure soluble sugar, 20 mL test tubes were filled with 1 mL of primary solution and 4 mL of 0.2% anthrone-sulfate mixture. After that, samples were heated for 15 minutes in a water bath and cooled for 15 minutes at room temperature. The absorbance of the supernatant was measured at 480 nm in a spectrophotometer (Spectra Max i3x, Austria).
Statistical analysis
The data were recorded and organized in Microsoft Excel 2019, and SPSS 19.0 (SPSS, Chicago, IL, USA) was used for statistical analysis. To estimate the differences among treatments, a three-way analysis of variance (ANOVA) followed by the least significant difference (LSD) test at the p<0.05 significance level was performed. Pearson correlation coefficients were calculated to determine the relationships between different parameters. All tables and figures were prepared with Excel 2019 and Origin 2021 (OriginLab Co., Northampton, MA, USA).
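The treatment comparisons rest on ANOVA plus an LSD threshold derived from the error mean square. The plain-Python sketch below is simplified to one factor (shading) with toy yield values that are not the study's data, to show how the LSD cutoff separates treatment means.

```python
from math import sqrt

# Toy seed-yield data (t/ha): three shading treatments x three replicates.
# Values are illustrative, not from the study.
groups = {
    "S0": [3.2, 3.4, 3.3],
    "S1": [2.8, 2.9, 2.7],
    "S2": [1.6, 1.7, 1.5],
}

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([x for xs in groups.values() for x in xs])
k = len(groups)                 # number of treatments
r = 3                           # replicates per treatment
df_error = k * (r - 1)          # 3 * 2 = 6

# One-way ANOVA sums of squares
ss_trt = sum(r * (mean(xs) - grand) ** 2 for xs in groups.values())
ss_err = sum((x - mean(xs)) ** 2 for xs in groups.values() for x in xs)
mse = ss_err / df_error
F = (ss_trt / (k - 1)) / mse

# LSD at alpha = 0.05: t(0.975, df=6) = 2.447 (from a t-table)
t_crit = 2.447
lsd = t_crit * sqrt(2 * mse / r)

means = {name: mean(xs) for name, xs in groups.items()}
print(f"F = {F:.1f}, MSE = {mse:.3f}, LSD(0.05) = {lsd:.3f}")
for a, b in [("S0", "S1"), ("S1", "S2"), ("S0", "S2")]:
    diff = abs(means[a] - means[b])
    print(f"{a} vs {b}: diff = {diff:.2f} -> {'significant' if diff > lsd else 'ns'}")
```

Any two treatment means differing by more than the LSD receive different letters in the figures and tables; with these toy numbers all three treatments separate.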
Effect of shade on the yield attributes of rapeseed genotypes
Different shading treatments significantly altered the yield variables of both investigated rapeseed genotypes. Compared with S0, the number of pods per plant decreased by 7.40 and 9.23% in Chuannong and by 7.16 and 8.25% in Zhongyouza under S1 and S2, respectively, while the number of seeds per pod was lowered by 5.91 and 39.60% in Chuannong and by 7.58 and 33.85% in Zhongyouza. The 1000-seed weight declined by 2.78 and 19.73% in Chuannong and by 4.42 and 12.04% in Zhongyouza following S1 and S2, respectively. S1 and S2 reduced the yield of the Chuannong genotype by 13.31 and 50.03%, and this reduction was 11.06 and 37.01% in Zhongyouza, respectively. The yield characteristics showed a similar trend under the shading treatments in both years, although 2020-21 gave a significantly higher yield in both genotypes (Figure 3). Additionally, S2 had a significant impact on all yield parameters. Taken altogether, the Chuannong genotype was more shade-sensitive and yielded less than Zhongyouza under shade treatment.

Figure 3: Effect of shading on yield parameters of rapeseed. S0 = control (ambient light); S1 = shade at the whole flowering stage and S2 = shade at the start of pod development to pod maturity. Values were determined using the (n=10) LSD test, and different small letters denote significance at the 0.05 probability level (Duncan test).
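The percentage reductions quoted throughout the Results follow the usual (control − treated)/control convention. A minimal helper, with made-up yield values rather than the study's data, assuming that convention:

```python
def pct_decline(control, treated):
    """Percent reduction of a treated value relative to its control."""
    return (control - treated) / control * 100

# Hypothetical plot yields (t/ha) under ambient light (S0) vs. late shading (S2)
s0_yield, s2_yield = 2.80, 1.40
drop = pct_decline(s0_yield, s2_yield)
print(f"Yield decline under S2: {drop:.2f}%")  # ~50% for these toy numbers
```

The same helper applies unchanged to pod number, seed number and 1000-seed weight.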
Impact of shade stress on dry matter accumulation of rapeseed
Shade stress significantly reduced the dry matter of both rapeseed genotypes. According to two-year average data, the DM was reduced by 7.28 and 33.32% in Chuannong and 7.16 and 31.91% in Zhongyouza following S1 and S2 treatment as compared to S0, respectively. Shading at both growth stages disrupted the accumulation and distribution of DM in the organs of rapeseed. Under S2, a significant drop of DM was detected in the rapeseed organs. The seed weight was more affected by shading than stem and pod shells at the organ level under S2.
Compared with S0, the Chuannong genotype showed 8.96 and 58.34% declines in seed weight after the S1 and S2 treatments, respectively, while Zhongyouza exhibited 22.9 and 49.63% reductions after the respective shading treatments. The stem, pod shell, and seed weights of both genotypes under shading followed a similar decreasing trend: S2<S1<S0 (Figure 4). The DM of all organs followed the same decreasing tendency in both years, but 2020-21 displayed higher dry matter than 2021-22. Aside from that, shade during the pod stage (S2) substantially impacted both cultivars' dry matter.
Shade-dependent changes in nitrogen accumulation and distribution in rapeseed
Differences in nitrogen accumulation and distribution were found under shading stress at distinct growth stages. The values in Tables 1 and 2 represent the mean values for the two-year experiment. The total nitrogen (TN) of both genotypes followed the decreasing order S0>S1>S2 at the maturity stage. In contrast to S0, the S1 and S2 treatments reduced the TN of Chuannong by 17.84 and 73.29%, respectively, whereas this reduction was 8.47 and 40.27% in Zhongyouza (Table 1). Shading had an impact on the rapeseed organs of both genotypes: S1 gave lower nitrogen values for leaves and stems, whereas pod shells and seeds showed lower nitrogen values under S2. Regarding genotypes, a higher TN was observed in Zhongyouza. Moreover, the shading treatments affected the N contents of the leaves, stems, and pods of both genotypes (Table 1).
Lower values of N translocation (NT), N translocation efficiency (NTE) and N contribution proportion (NCP) were observed in S1, whereas higher values were found in the S2 treatment. The NTE was 5.30 and 36.78% lower in Chuannong and 23.08 and 37.08% lower in Zhongyouza under S1 as compared to S0 and S2, respectively. The N harvest index (NHI) and N assimilation (NA) values of both genotypes were lowest in the S2 treatment (Table 2).

Figure 4: Effect of shading on dry matter accumulation in rapeseed. S0 = control (ambient light); S1 = shade at the whole flowering stage and S2 = shade at the start of pod development to pod maturity. Values were determined using the (n=10) LSD test, and different small letters denote significance at the 0.05 probability level (Duncan test).
Shading decreased the distribution of 15N isotopes in different organs of rapeseed (Figure 5). Compared to S0, stem 15N accumulation declined in the Chuannong genotype by 69.31 and 12.85% under S1 and S2, respectively, whereas this change was 54.14 and 5.12% in the Zhongyouza genotype. In Chuannong, reductions of 7.17 and 72.35% in seed 15N accumulation were observed following S1 and S2 relative to S0, respectively, while this inhibition was 15.31 and 86.38% for Zhongyouza. Leaf and stem 15N accumulation of both genotypes displayed the trend S0>S2>S1, while pod shell and seeds exhibited the trend S0>S1>S2. The 15N accumulation in the entire plant decreased under both shade treatments compared to S0; in general, both rapeseed genotypes exhibited the 15N accumulation trend S0>S1>S2. Taken altogether, Zhongyouza displayed a higher accumulation of 15N than Chuannong (Figure 5).

Table 1 (partial; column headers not recoverable):
S0: 2.55 ± 0.13a, 1.42 ± 0.11a, 2.10 ± 0.09a, 6.08 ± 0.03a, 1.95 ± 0.02a, 0.89 ± 0.03b, 6.14 ± 0.01a, 8.97 ± 0.01a
S1: 1.09 ± 0.02c, 0.56 ± 0.03bc, 1.47 ± 0.03b, 3.13 ± 0.01c, 1.05 ± 0.03c, 0.73 ± 0.01c, 5.06 ± 0.01c, 6.83 ± 0.03c
S2: 2.53 ± 0.14b, 1.39 ± 0.06a, 2.14 ± 0.04a, 6.05 ± 0.16a, 1.83 ± 0.02a, 0.63 ± 0.03cd, 3.12 ± 0.03d, 5.57 ± 0.02e
Table note: S0, control (ambient light); S1, shade at the whole flowering stage and S2, shade at the start of pod development to pod maturity. Values were determined using the (n=6) LSD test, and different small letters denote significance at the 0.05 probability level (Duncan test). Y, V and T represent the year, variety and treatment, while **, * and ns denote highly significant, significant and non-significant.
Shade-induced modifications in enzymatic activities of rapeseed
The shading stress at both growth stages considerably influenced the enzymatic activities in the leaves and pod shells of both studied genotypes. Relative to S0, S1 reduced the NR, NiR, GS and GOGAT activities in the leaves of Chuannong by 20.65, 8.60, 33.74 and 9.24%, respectively, whereas Zhongyouza experienced 28.31, 12.96, 21.47 and 14.05% reductions following S1 (Figure 6).
In case of pod shell, the NR activity of Chuannong was reduced by 6.37 and 28.33% after S1 and S2 relative to S0, respectively, while this decline was 11.51 and 30% for Zhongyouza genotype. A decline of 7.75 and 11.47% was detected in NiR activity of Chuannong and 4.98 and 10.56% Zhongyouza genotypes under S1 and S2 when compared with S0, respectively. The S1 and S2 treatments also declined the pod shell GS activity of Chuannong (8.91 and 25.45%) and Zhongyouza (9.31 and 27.74%), respectively. Similarly, the pod shell GOGAT activity showed 15.05 and 24.67% decline in Chuannong and 6.50 and 16.46% in Zhongyouza genotype following S1 and S2, respectively ( Figure 6). Comparing both genotypes, our findings unveiled that the Chuannong cultivar showed higher NR and GOGAT activity while Zhongyouza showed more NiR and GS enzymatic activities under S1 and S2 treatments. Furthermore, comparing S1 and S2, S2 significantly lowered all the enzymatic activities in both genotypes.
Shade-mediated modifications in carbohydrates accumulation at maturity
The changing trend of carbohydrate contents in both rapeseed genotypes was the same in both years. The shading treatments considerably reduced the sucrose, reducing sugar and soluble sugar contents of the stem and pod shell of both tested genotypes. The sucrose content of the stem declined by 14.08 and 41.28% in Chuannong and 6.87 and 35.36% in Zhongyouza, while the pod shell showed reductions of 16.45 and 35.18% in Chuannong and 14.19 and 37.99% in Zhongyouza following the S1 and S2 treatments as compared to S0, respectively (average values based on two years). Generally, the Zhongyouza genotype showed higher sucrose content than Chuannong under all treatments.
Under the various shading treatments, the reducing sugar content of Chuannong and Zhongyouza showed the trend S0>S1>S2 in both years. Compared with S0, the stem reducing sugar content of the Chuannong genotype declined by 15.21 and 76.66%, while this reduction was 10.71 and 51.21% for the Zhongyouza genotype after the S1 and S2 treatments, respectively. In addition, S1 and S2 decreased the reducing sugar of the pod shell by 25.53 and 84.37% in the Chuannong genotype and 15.52 and 55.81% in the Zhongyouza genotype, respectively.
The soluble sugar content of the Zhongyouza genotype was higher than that of Chuannong under all treatments. Contrary to the control, the stem soluble sugar of the Chuannong genotype was inhibited by 10.52 and 46.72% and that of Zhongyouza was reduced by 10 and 44.26% following the S1 and S2 treatments, respectively. However, pod shell soluble sugar content showed declines of 8.56 and 36.24% in Chuannong and 8.21 and 34.39% in Zhongyouza after the respective shading treatments. Moreover, the carbohydrates followed the trend S0>S1>S2 in both cultivars (Table 3). Furthermore, Zhongyouza showed significantly higher carbohydrate contents in stem and pod shell, and 2020-21 showed higher carbohydrate values than 2021-22. Collectively, shade at the pod development stage (S2) significantly affected the carbohydrate content in both years.

Figure 5: Distribution of 15N to different plant organs of rapeseed at maturity under different shade conditions. S0 = control (ambient light); S1 = shade at the whole flowering stage and S2 = shade at the start of pod development to pod maturity. Values were determined using the (n=3) LSD test, and different small letters denote significance at the 0.05 probability level (Duncan test).

Figure 6: N mobilizing enzymatic activities under shade stress. S0 = control (ambient light); S1 = shade at the whole flowering stage and S2 = shade at the start of pod development to pod maturity. Values were determined using the LSD test, and different small letters denote significance at the 0.05 probability level (Duncan test).
Correlation analysis
The current study's correlation analysis demonstrated that shade stress was substantially connected to yield metrics, nitrogen absorption and carbohydrate transportation. All the enzyme activities were significantly positively correlated with N transportation to different organs, but a non-significant correlation of enzymatic activities with yield was observed. A negative correlation of NT, NTE, NCP and NA with carbohydrates was found, whereas carbohydrates exhibited a positive correlation with yield parameters. Moreover, total dry matter and 15N displayed a significantly positive correlation with seed yield (Figure 7).
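The reported relationships are plain Pearson coefficients, which can be reproduced with the textbook formula. The plot-level values below are hypothetical, not the study's data; they merely illustrate the positive dry matter vs. yield association described above.

```python
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical plot-level values: total dry matter (g/plant) vs. seed yield (t/ha)
tdm = [45.0, 43.2, 41.8, 30.5, 29.1, 28.4]
yld = [2.9, 2.8, 2.7, 1.7, 1.6, 1.5]
r = pearson_r(tdm, yld)
print(f"r(TDM, yield) = {r:.3f}")
```

A coefficient near +1, as obtained here, corresponds to the large red cells of the correlation matrix in Figure 7.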
Response of yield parameters and dry matter under shade stress
Light is a critical environmental component impacting the growth and development of crops (Guoping et al., 2008; Zhong et al., 2014). Numerous studies have documented a decrease in yield due to shading stress (Cantagallo et al., 2004; Zhang et al., 2006; Acreche et al., 2009; Mu et al., 2010). In two-year studies, there were no significant declines in pods per plant but a significant fall in pod filling and grain weight (Wang et al., 2015). Previous research has demonstrated that the drop in grain production was due to a reduction in grain number and weight (Acreche et al., 2009; Mu et al., 2010; Polthanee et al., 2011). Variations in ovule fertility and seed number per pod were impacted by changes in growth circumstances such as N availability, light and temperature (Bouttier and Morgan, 1992). Grain yield and spikelet filling had significant positive linear associations, while grain yield and grain weight showed a positive relationship (Wang et al., 2015). Shading reduced grain dry weight during grain filling, lowering grain yield (Ishibashi et al., 2014). Most other reported field experiments have not used sufficiently well-defined shading durations to pinpoint the crucial growth stage. For instance, Habekotte (1993) and Iglesias and Miralles (2014) both used shade (60% and 50%, respectively) for the entire anthesis stage, resulting in yield losses of 50% and 15%, respectively, but no particular growth stage could be determined. Thus, to our knowledge, our study is among the few studies of rapeseed that have identified a relatively critical growth stage affected by shading.
We found that shade at the beginning of pod development limited the assimilate transfer and decreased the
weight of the pod shell and the number of seeds per pod (Tayo and Morgan, 1979). In the current study, S1 gave a relatively high yield because the supply to pods and seeds resumed to normal levels after the shade was removed, although a shortage of assimilates to flowers under shade is also damaging and diminishes the potential for compensatory growth. As previously discussed, canola appears more vulnerable to severe temperatures and water deficits during late blooming and early pod set, aligning its sensitive periods with those of pulses rather than cereals (Sadras and Dreccer, 2015); such stresses thus function similarly to the shade treatments applied in this study. In general, the observed correlations between the timing of the shade treatments and their effects on yield components and their relationships at maturity are similar to previously described physiological consequences of reduced assimilate supply (Tayo and Morgan, 1979; Keiller and Morgan, 1988; Diepenbrock, 2000). Dry matter production and accumulation are the primary determinants of crop yield and are also limited by different environmental factors (CH, 1995). Shading stress greatly changes the physiology and morphology of the plant and eventually decreases dry matter accumulation and distribution, resulting in decreased grain yield (Acreche et al., 2009; Li et al., 2010; Mauro et al., 2011). Dry matter accumulation in high-yield maize accounts for more than 60% of the total dry matter. Grain yield is influenced by the development and distribution of dry matter in vegetative organs such as the stem, leaf, and sheath (Huang et al., 2007).
Our results demonstrated that S2 treatment considerably reduced the pod shell and seeds dry weight, which caused the yield drop in both rapeseed genotypes. We can conclude that shade at pod development stage (S2) is crucial to cause reduction in dry matter and yield.
N accumulation and distribution under shading conditions
Increasing biological yield is the foundation for increasing output; nutrient intake and distribution are key prerequisites for biological yield (Hirel et al., 2007). This study examined the changes in N accumulation and transportation under shade at various growth stages and deduced a portion of the mechanism underlying the grain yield response to N use. The remobilization of nitrogen in vegetative organs and the uptake of additional nitrogen throughout the grain-filling cycle provide grain N (Mueller and Vyn, 2016). Furthermore, nitrogen remobilization in the stem and leaf accounts for 69 to 80% of grain N (Subedi and Ma, 2005; Chen et al., 2014b). As a result, N accumulation and distribution in vegetative and reproductive organs play a pivotal role in dry matter weight at maturity and influence grain yield. According to our findings, shade reduced the total N accumulation of rapeseed in the order S0>S1>S2. The total N of the pod shell and seeds was significantly reduced by shade at pod development (S2) compared to the flowering stage (S1). Furthermore, N accumulation under S1 increased after the light was restored, but it did not return to normal levels (Table 1; Figure 5).

Figure 7: Correlation analysis between agronomic traits, nitrogen content, carbohydrates and yield. Red and blue color represents the positive and negative correlation. The size and intensity of color exhibit the significance of variables. PN, pod number; SN, seed number; SY, seed yield; TDM, total dry matter; TN, total nitrogen; NT, nitrogen translocation; NTE, nitrogen translocation efficiency; NCP, nitrogen contribution proportion; NHI, nitrogen harvest index; NA, nitrogen assimilation; 15N, 15 nitrogen isotope; NR, nitrate reductase; NiR, nitrite reductase; GS, glutamine synthetase; GOGAT, glutamate synthase; Suc, sucrose; RS, reducing sugar and SS, soluble sugar.
The S2 treatment inhibited N translocation towards the pod shell and seeds compared to the other treatments (Table 1), resulting in poorer yields (Chen et al., 2015b). When the N accumulated by pod development is smaller than the grain requirement, nitrogen transport rises (Chen et al., 2015b), as evidenced by our findings. Late-season shade (S2) reduced N translocation towards economic organs (Table 1). As a result, we found that shading reduced N uptake and distribution in all organs, resulting in a decrease in seed yield. In conclusion, shade reduced N accumulation and impeded N transfer from vegetative organs such as leaves, stems, and pod shells to the grain. This study found that shading at the pod development stage (S2) had a greater detrimental impact on N uptake than at the flowering stage (S1), consistent with the root morphology and root physiology changes during shading (Figure 8) (Gao et al., 2017a). Shading altered the root structure, reducing root dry weight, absorption area, and active absorption area. Weather, climate, and air pollution contribute to shade, which is a challenging problem to solve in agricultural production. Changing sowing times is an excellent way to deal with low-light-prone areas at later growth stages of a crop, but it can be affected by temperature, soil moisture, and crop rotation as well (Gao et al., 2017a; Zhao et al., 2018).
Shade-dependent changes in N metabolizing enzyme activities
The leaf N content and enzyme activities are closely associated with each other (Sinclair et al., 2000). We observed that shade inhibited the activity of NR, NiR, GS, and GOGAT, as was shown by earlier research (Wang et al., 2020). GS and GOGAT are two essential enzymes involved in N metabolism (Nigro et al., 2017). Under shade, GS and GOGAT activities decreased gradually in the present study; this observation in grains was the same as in leaves (Wang et al., 2020). Wheat is sensitive to ammonium nutrition at low light intensities, and its low GS activity is insufficient for ammonium assimilation. This occurrence apparently arose as a result of the significantly decreased light intensity in southern China, where plants face weak light stress during grain filling, which is relatable to our findings (Setién et al., 2013; Gao et al., 2017b). When plants were subjected to shade, nitrate delivery to the tops dropped considerably, and the drop in NR activity resulted from this reduced nitrate supply (Udayakumar et al., 1981).
[Figure 8: Effect of shading stress on the root structure of two rapeseed genotypes.]
Reduced NR activities in shade-adapted plants make it easier for the plants to coordinate their N and carbon uptake across a variety of light conditions (Fredeen et al., 1991). Inhibition of 50.3, 24, and 30.4% in NR, GS, and GOGAT enzymatic activities, respectively, has also been observed following shade treatment (Yu et al., 2011). We concluded that the enzymatic activities of leaves and pods are greatly reduced by shade. Among the shading treatments, the S2 treatment considerably decreased the enzyme activities in the pods, which restricted nitrogen transport towards the seeds and led to low grain yield in both of the investigated rapeseed genotypes.
Carbohydrates accumulation and distribution under shade
Shading limited the transformation of photosynthetic products. It accelerates the consumption of assimilates in leaves and stems and reduces grain yield. Studies on different crops showed that carbohydrate accumulation in leaves, stems, and roots decreased significantly under shading (Chen et al., 2014a). As one of the main photosynthetic products, sucrose is significantly affected by light intensity and light cycle (Emerson, 1958). The decrease in sucrose content is closely related to light intensity: shade reduces the output of leaves, the primary organ responsible for the formation of photosynthetic products, and eventually results in a decrease in sucrose content (Wu et al., 2017). The deleterious effect of whole-plant shade during grain filling on grain yield has been ascribed to photo-assimilate deficiency (Singh and Jenner, 1984; Okawa et al., 2003). However, there were differences in the accumulation and transport of carbohydrates under shading stress at different growth stages. This study showed that shading at the pod stage (S2) had a more serious impact than shading at the flowering stage (S1), which directly led to the reduction in grain yield. This could be because of leaf senescence, which reduces photosynthetic potential, carbon fixation, and assimilates at the pod development stage (Brouwer et al., 2012) and results in insufficient transportation of photosynthetic products. It was discovered that shaded wheat reduced grain output by speeding up the consumption of assimilates in the leaves and stems. The authors further found that the carbohydrate from pod photosynthesis is mostly transported to the grain. In maize plants, post-anthesis shading weakened the ability of nitrogen accumulation and stimulated an obvious remobilization of carbohydrate reserves from stem to grain, but the decrease in grain-filling rate eventually led to a decrease in grain yield (Reed et al., 1988). In this study, shading at the flowering stage still reduced grain yield.
The retardation can be attributed to the loss of non-structural carbohydrate transport to the kernel as a result of light deprivation (Mu et al., 2009) and to a reduced kernel-filling rate (Jichao and Zhiyong, 2005), which decreased the endosperm cell number and volume (Jia et al., 2011) and reduced kernel set as a result of accelerated senescence. Starch deposition was decreased by shading, particularly under heavy shading. Additionally, ear shading decreased the kernel starch content (Cui et al., 2012). Based on our findings, we can conclude that the S2 treatment significantly reduced carbohydrate translocation towards economic organs, leading to lower yields in both rapeseed genotypes.
Conclusion
Shading stress decreased DM and N accumulation and N transportation and distribution in multiple organs of the rapeseed genotypes, and decreased the grain N content, which consequently reduced yield. The leaf and pod enzyme activities, which are associated with N accumulation and distribution, were also considerably influenced by shade stress. Relative to the flowering stage, shading at the pod development stage significantly inhibited carbohydrate transportation towards the seeds. The Zhongyouza genotype outperformed Chuannong in all the aforementioned parameters under shade stress. The current study provides deeper insights into the effect of shade stress on the physio-biochemical mechanisms of rapeseed genotypes, which could inform management practices for rapeseed grown in low-light regions.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Gingival margin alterations and the pre-orthodontic treatment amount of keratinized gingiva
The purpose of this retrospective study was to associate the amount of keratinized gingiva present in adolescents prior to orthodontic treatment to the development of gingival recessions after the end of treatment. The sample consisted of the intra-oral photographs and orthodontic study models from 209 Caucasian patients with a mean age of 11.20 ± 1.83 years on their initial records and 14.7 ± 1.8 years on their final records. Patients were either Angle Class I or II and were submitted to non-extraction orthodontic treatment. Gingival recession was evaluated by visual inspection of the lower incisors and canines as seen in the initial and final study models and intra-oral photographs. The amount of recession was quantified using a digital caliper and the observed post-treatment gingival margin alterations were classified as unaltered, coronal migration of the gingival margin or apical migration of the gingival margin. The width of the keratinized gingiva was measured from the mucogingival line to the gingival margin on the pre-treatment photographs. The teeth that developed gingival recession and those that did not have their gingival margin position changed did not differ in relation to the initial amount of keratinized gingiva (3.00 ± 0.61 and 3.5 ± 0.86 mm, respectively). Paradoxically, teeth that presented a coronal migration of the gingival margin had a smaller initial amount of keratinized gingiva (2.26 ± 0.31 mm). The mean amount of initial keratinized gingiva did not predispose lower incisors and canines to gingival recession. Descriptors: Orthodontics; Gingival recession; Gingiva.
Luciane Quadrado Closs (a), Paula Branco (b), Susana Deon Rizzatto (c), Dirceu Barnabé Raveli (d), Cassiano Kuchenbecker Rösing (e). (a) PhD Student and (d) Chair, Department of Orthodontics, School of Dentistry of Araraquara, State University of São Paulo; (b) MSc in Periodontology and (e) Professor of Periodontology, Lutheran University of Brazil; (c) MSc in Orthodontics, Pontifical Catholic University of Rio Grande do Sul. Corresponding author: Luciane Quadrado Closs, R. Gen. Couto de Magalhães, 1070/801, Porto Alegre, RS, Brazil, CEP 90550-130; e-mail: lucloss@uol.com.br. Received for publication on Oct 20, 2005; sent for alterations on Mar 27, 2006; accepted for publication on Sep 12, 2006. Braz Oral Res 2007;21(1):58-63.
Introduction
The need for a supposedly adequate zone of keratinized gingiva before tooth movement is a controversial subject in the orthodontic and periodontic literature.10,14,30 It has been suggested that a certain amount of attached gingiva is necessary for the maintenance of the integrity of the dento-gingival junction. The amount of attached gingiva, if any, required to minimize the occurrence or progression of gingival recessions, however, has never been established.11,24 The observations of Lang, Löe16 (1972) suggest that at least 2 mm of keratinized gingiva, corresponding to approximately 1 mm of attached gingiva, is recommended in order to maintain gingival health. This affirmation has been questioned in more recent studies.8,10,30 According to these studies, less than 1 mm of keratinized/attached gingiva may also be compatible with gingival health. Coatoam et al.7 (1981) found that teeth with minimal widths of keratinized gingiva (less than 2 mm) could withstand orthodontic forces.
Some authors recommend mucogingival surgery as a preventive measure to avoid the development or progression of gingival recession in cases that have a thin keratinized gingiva.13,17 However, some reports emphasize that the absence of keratinized gingiva alone is not an indication for a surgical procedure.11,14 Ngan et al.21 (1991) found that placing a free gingival graft prior to orthodontic treatment had no effect on the extent of the improvement of gingival architecture occurring during treatment. Some cross-sectional studies in children, adolescents and adults demonstrate that the width of keratinized gingiva increases with age.5,28 Different studies, on the other hand, did not observe any increase in the width of attached gingiva from the deciduous to the permanent dentitions.6,26 Bimstein, Eidelman5 (1988) found that the attached gingiva tends to be narrower in the permanent dentition when compared to the primary dentition.
The absence of keratinized gingiva alone is not an indication for a periodontal surgical procedure. However, if recession increases during orthodontic treatment, then a gingival graft may be indicated. Orthodontic therapy where excessive gingival recession is present may be the indicated treatment.20 In a 10-year longitudinal study of untreated mucogingival defects, it was concluded that, in the absence of gingival inflammation, areas with small amounts of keratinized gingiva may remain stable over long periods of time.12 In another longitudinal study in children, Andlin-Sobocki2 (1993) found that the increase of gingival width was greatest for sites with the smallest baseline widths of attached gingiva, and smallest for sites with the greatest baseline width. The author also observed that when the teeth were moved lingually, the gingival width increased and the clinical crown height decreased. In teeth moving facially, the gingival width decreased, and the facial gingiva sometimes receded.3 In a study with a sample of completed orthodontic cases, it was found that 1.3% of the patients showed a decrease in the width of keratinized gingiva because of either minimal lingual or labial movement of the mandibular incisors, whereas 0.69% had an increase in keratinized gingival width subsequent to lingual positioning of the incisors.10 Other factors may contribute to the development of recessions: difficulty in plaque control due to fixed orthodontic accessories, coronally attached frena and muscle attachments, abnormal tooth position, placement of artificial crowns, transverse expansion, proclination of teeth, and fenestration or bony dehiscence.15,17,19,29 Wennström30 (1990) stated that the thickness of the soft tissue is more important than its quality. Therefore, tooth movement, especially in the labial-lingual direction, should be preceded by careful examination of the dimensions of the tissues covering the pressure side of the teeth to be moved.
The aim of the present retrospective study was to associate the amount of pre-orthodontic treatment keratinized gingiva to the development of gingival recessions in adolescents submitted to orthodontic therapy.
Subjects
The sample consisted of records containing intra-oral photographs and orthodontic study models from 209 Caucasian adolescents (118 female and 91 male), pre- and post-orthodontic treatment. The patients presented initial mean ± SD age values of 11.20 ± 1.83 years and final mean ± SD age values of 14.7 ± 1.8 years. The mean active treatment time was 1.99 ± 0.89 years. The patients were treated by two orthodontists with fixed standard edgewise and Roth prescription straight wire appliances. During orthodontic treatment, tipping and bodily movement, including torque, of lower incisors and canines were performed. The final records were taken 28 days or more after removal of the appliances.
Inclusion criteria
To be included in the study, patients were either Angle Class II or Class I with transverse or vertical problems, with spacing or crowding in the lower anterior teeth not exceeding 4 mm. Treatment was performed without extractions. Patients needed to have all lower incisors totally erupted and with apparent periodontal health. The exclusion criteria were: missing or non-erupted lower anterior teeth, Angle Class III patients, and preexisting systemic diseases or medication associated with gingival changes. All patients in the study received oral hygiene instructions right after placement of the orthodontic appliances and during orthodontic treatment, as necessary.
Main outcome
The dependent variable of this study was gingival recession, which was evaluated by visual inspection of the study models and intra-oral photographs of the initial and final records of the orthodontically treated patients. Gingival recession was recorded when the labial cementoenamel junction was exposed or the buccolingual margin was markedly below the marginal level of the adjacent teeth in all lower incisors and canines.
Gingival recession was measured in millimeters at the midbuccal aspect of each of the mandibular incisors and canines, as the distance between the gingival margin and the cementoenamel junction. The amount of recession was quantified to the nearest 0.1 mm, using a digital caliper (Mitutoyo Digimatic, Mitutoyo Ltd., UK). Patients' photographs and models were evaluated before and after orthodontic treatment and, based on the gingival margin alterations observed, teeth were classified as having an unaltered gingival position, coronal migration of the gingival margin or apical migration of the gingival margin.
Independent variable
Assessment of keratinized gingival width

The width of the keratinized gingiva was measured from the mucogingival line to the most apical point of the gingival margin. All measurements were made at the midline of the buccal aspect of the tooth to the nearest 0.5 mm using a digital caliper (Mitutoyo Digimatic, Mitutoyo Ltd., UK).
Error of the method
The reproducibility of the measurements on the records was assessed by statistically analyzing the differences between double measurements repeated on 20 randomly selected study models and photographs with a one week interval.
Kappa statistics was used to evaluate intra-examiner agreement of the presence of gingival recession, and perfect reproducibility was obtained with kappa = 1.
Paired t test and Pearson's correlation coefficient were utilized for assessing the reproducibility of the amount of gingival recession and the width of keratinized gingiva, respectively. For gingival recessions, p = 0.505 and r = 0.993 were obtained, and for the width of keratinized gingiva, p = 0.128 and r = 0.922 were achieved. The paired measurement differences never exceeded 0.3 mm.
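The agreement statistics used here amount to a few lines of arithmetic. A minimal sketch, using invented double measurements rather than the study's data (Pearson's r and the maximum paired difference are shown; the paired t test is omitted for brevity):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between first and repeat measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def max_paired_difference(x, y):
    """Largest absolute disagreement between the double measurements."""
    return max(abs(a - b) for a, b in zip(x, y))

# Illustrative double measurements (mm) on the same records, one week apart
first = [3.0, 3.5, 2.5, 4.0, 3.0]
repeat = [3.1, 3.4, 2.5, 4.2, 2.9]
r = pearson_r(first, repeat)                 # close to 1: good agreement
worst = max_paired_difference(first, repeat)  # the study's criterion: <= 0.3 mm
```

A high r together with a small worst-case paired difference is what justifies treating the record-based measurements as reproducible.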
The amount of gingival recession and the width of keratinized gingiva were measured in photographs that did not represent the actual size of the variables measured. Thus, after the collection of the data, a multiplication factor was established to calculate the actual amount of gingival recession and width of keratinized gingiva. The enlargement correction for the photograph analysis was achieved by comparing the crown width of the upper right central incisor on the photo with the dimensions of the same tooth as recorded on the cast, as described by Djeu et al.9
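The enlargement correction is plain proportional scaling: the known crown width on the cast divided by the same crown's width on the photograph gives a factor that converts any photographic measurement to actual size. A minimal sketch; the numeric values are illustrative, not taken from the study:

```python
def correction_factor(cast_width_mm, photo_width_mm):
    """Scale factor from the reference tooth (upper right central incisor):
    actual width on the cast over apparent width on the photograph."""
    return cast_width_mm / photo_width_mm

def to_actual_mm(photo_measurement_mm, factor):
    """Convert a photographic measurement to its actual dimension."""
    return photo_measurement_mm * factor

# Illustrative values: an 8.5 mm crown that appears 17.0 mm wide on the photo
factor = correction_factor(8.5, 17.0)           # 0.5
actual_recession = to_actual_mm(1.2, factor)    # 1.2 mm on the photo -> 0.6 mm actual
```

The same factor applies to both the recession and keratinized-gingiva measurements taken from that photograph.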
Statistical analysis
Data were analyzed using SPSS (Statistical Package for Social Sciences, Inc., Chicago, Ill, USA). Tooth level analyses were performed. Differences between gingival margin alterations were tested by one-way Analysis of Variance, complemented by the Tukey multiple comparison test. Differences were considered statistically significant when p < 0.05.
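The one-way ANOVA step can be sketched by hand; the group values below are invented for illustration, and the Tukey post-hoc comparison (done in SPSS in the study) is omitted:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented keratinized-gingiva widths (mm) for the three margin-alteration groups
unaltered = [3.4, 3.6, 3.5, 3.5]
coronal = [2.2, 2.3, 2.3, 2.2]
apical = [3.0, 2.9, 3.1, 3.0]
f = one_way_anova_f([unaltered, coronal, apical])  # large F: group means differ
```

A large F against the F(k-1, n-k) distribution corresponds to the p < 0.05 criterion used in the study.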
Results
The initial amount of keratinized gingiva on the sites where a coronal gingival margin migration was detected was statistically smaller than in the cases that had an unaltered gingival position or apical gingival migration. The teeth that developed gingival recession and those that did not have their gingival margin position changed did not differ in relation to the initial amount of keratinized gingiva (Table 1). In Table 2, a summary of the Analysis of Variance of the data shown in Table 1 is presented. It can be observed that a statistically significant difference could be detected, confirming that the mean amount of keratinized gingiva pre-orthodontic treatment in the group where the gingival position presented coronal migration was smaller than in the groups where the gingival position was either unaltered or presented recession.
Discussion
The present retrospective study assessed the amount of keratinized gingiva pre-orthodontic treatment as related to gingival margin alterations. In a previous study, the width of keratinized gingiva was measured in patients before orthodontic therapy and ranged from 0 to 8.0 mm; after orthodontic treatment, it ranged from 0 to 7.7 mm.7 It has been demonstrated that, although the attached gingiva tends to increase post-eruption, the width of the keratinized gingiva remains relatively stable after the time the tooth breaks through the mucosa.25,26 Rose, App22 (1973) stated that, as the child progresses from the deciduous to the permanent dentition, there is an increase in the mean width of attached gingiva. If these earlier findings were absolute, the problem of a so-called inadequate attached gingiva would hardly ever occur in adults. However, these studies were conducted in patients at various ages and did not have a longitudinal design. Part of the difficulties involved in such studies is the manner of assessing gingival width as well as recession. A few studies have reported reproducibility for the width of the keratinized gingiva, but all have used different methods of describing it: Artun, Krogstad4 (1987) give a Dahlberg error of 0.11 mm for intra-examiner agreement, with no discrepancy between recordings greater than 1 mm; Andlin-Sobocki2 (1993) showed kappa statistics of 0.62 and 0.55 for inter-examiner agreement; and Andlin-Sobocki, Bodin3 (1993) found total agreement in 80% of the double measurements, with 95% within 0.5 mm and all within 1 mm. Based on the results described above, the results from gingival recession studies should be seen with caution due to the measurement errors involved.17 The majority of studies that evaluated gingival recession have used the clinical crown length in the models to assess the amount of gingival recession.4,9,23 In this study, since the clinical crown length might have been changed during the period of treatment, the measurements were performed directly on the gingival margin in the photographs, as described by Allais and Melsen.1,18 Moreover, the reproducibility of our measurements was reported and considered adequate.
Some studies demonstrate that individual behavioral factors such as oral hygiene control and gingival biotype, among others, may contribute or predispose to gingival recession.18,30 Since this is a retrospective study, these variables could not be controlled. Measurements of Plaque Index, Probing Depth, buccolingual amount of gingival tissue and type of orthodontic movement were not assessed in the present study due to the characteristics of the sample studied. Oral hygiene instructions were given as necessary and the included patients presented apparent gingival health.
The validity of using orthodontic records for measuring attached gingiva has been questioned.9 Trentini et al.27 (1995) demonstrated the validity of using photographs and study casts to accurately measure the width of keratinized tissue.
According to Coatoam et al.7 (1981), the greatest loss in width of keratinized gingiva following orthodontic treatment occurred in lateral incisors. It was suggested in a former study that teeth that are lingually displaced often had the greatest width of keratinized gingiva and, once the tooth is brought into proper alignment with orthodontic therapy, the result is a decrease in this width.22 No reference was made to the specific tooth movement that was performed throughout the orthodontic period of treatment of the subjects in this study. However, in a recent study with an adult sample, the direction of tooth movement was not statistically related to the development or aggravation of gingival recession.1 The results of the present study are surprising in view of the established paradigm according to which the smaller the keratinized gingival dimensions, the more prone teeth would be to gingival recession. However, it should be taken into consideration that teeth that are orthodontically moved might end up in a final position which allows a different gingival architecture. A normal amount of keratinized gingiva cannot be established, since this varies intra- and inter-individually. Understanding these findings is a challenge, but one could suppose that tooth movement could even permit a better gingival margin position, thus contradicting the need for pre-treatment gingival augmentation. Plaque accumulation and gingivitis, extrusive movements, as well as buccolingual gingival dimensions, could also account for these changes.
It should also be considered that orthodontic realignment might be an interesting factor related to gingival margin position.
Conclusions
In summary, considering its strengths and limitations, the findings of this study may lead to the conclusion that the mean amount of keratinized gingiva did not predispose lower incisors and canines to gingival recession.
Table 2 - Analysis of Variance, complemented by Tukey test, concerning the classification of the gingival margin alterations and the pre-treatment amount of keratinized gingiva.
Table 1 - Initial amount (mm) of keratinized gingiva and gingival margin position - Tooth level analysis.